AI, Journals, and the Evolving Knowledge Divide
Many years ago I worked on a US National Science Foundation project to improve researchers' access to scientific literature in a developing country. I focused on academic libraries; other team members focused on accessing online bibliographic databases via leased satellite channels.
For a young US researcher, that project was a real eye-opener, given the wide information access we had long taken for granted in the West, even in those pre-internet times.
What made a lasting impression on me was coming face to face with the lengths to which some students and researchers had to go to obtain the information they needed for their research and study.
In one case I interviewed a local college professor who had a side business tutoring his students. He literally rented out photocopies of articles he had brought back from Western libraries to students who attended his evening seminars. Even there a distinction had developed between the haves and have-nots: if you couldn't afford the evening seminars and the rental price for reading a journal article photocopy, you were out of luck.
Reading Jeffrey Brainard's article "Journal giant Elsevier unveiled an AI tool that scans millions of paywalled papers. Is it worth it?" in the February 20, 2026, issue of Science, I was reminded of that long-ago project and of how wide the gulf remains for some in their quest to access scientific literature.
Brainard describes publishing giant Elsevier's new tool LeapSpace, which uses a large language model (LLM) to scan a huge collection of paywalled papers and answer researchers' questions. LeapSpace covers journals from Elsevier as well as from Emerald, the Institute of Physics, the New England Journal of Medicine Group, and Sage Publications. The system doesn't just provide answers; it also cites the sources for those answers so the researcher can evaluate their relevance and reliability.
AI-based systems like this raise questions such as the following:
Will this system replace or increase demand for original sources?
How will this impact what institutions and libraries already pay for journal subscriptions?
How will government regulations treat such access to articles based on publicly funded research?
What happens to knowledge published in the many open access journals not covered by the Elsevier product?
Will this service disadvantage researchers who are not associated with a journal-subscribing institution?
Such questions got me thinking again about that professor who, long ago, rented out access to his photocopy collection. Based on my own experience using basic AI tools like ChatGPT for both research and analysis tasks, there is no question in my mind that, when properly managed, tools like LeapSpace can be incredibly powerful productivity boosters, at least for those who can use them effectively.
Addressing the above questions requires consideration of economics, ownership, intellectual property management, and politics. As Korov and Oreskes suggest in "Lineage of Science in a Warming World: Who Owns Climate Knowledge," such issues eventually raise the prospect of government regulation:
“In the United States, any regulation of data and models will require us to move politically uphill. But that makes it more important to be precise about what is happening. This is not just ‘data privatization’ in some abstract sense. It is the privatization of knowledge: who gets to know in detail how risks are distributed; who gets to contest those assessments; who gets to change them.”
Copyright (c) 2026 by Dennis D. McDonald