Paper Trail: Rebuilding Trust in Science

Science policy expert Liz Allen discusses how innovations in AI, publishing, and crediting authors could shape the future of research.

by Kate Becker
February 6, 2026
Once upon a time, there was a library. By day it looked quite ordinary. But at night, after all the patrons had gone home and the librarians had shut off the lights and locked all the doors, something magical happened: fluttering open their pages, stretching their spines, the books woke up. And they began to talk. They chattered away, telling each other all the stories that had been tucked inside their covers, and as they did so, they made discoveries. They realized that the same story could be told and retold in different places, eras, and languages. They found threads connecting volumes shelved in entirely different sections, and they discovered that, when two stories combined, they could create something entirely new. And there was something else, too: In moments when their conversations lulled and lapsed, the books became aware of the stories that were missing from the library—missing not because they had been lost or borrowed, but because they had never been told. Not yet.

This allegorical library springs to mind after a conversation with Liz Allen, associate director at Research Consulting in Nottingham, UK, and a visiting senior research fellow in the Policy Institute at King's College London, UK. Allen is one of a growing community of thinkers working to reshape the way researchers share their results and the way funders measure the impact of the work they support. "The scientific corpus that we have currently is very fragmented," said Allen. "That shouldn't be, and I hope it won't be in the future."

All the users of research should be able to trust what they see—and have transparency around the provenance of that research: Where's it from? Who wrote it? Who's been involved in it? Who funded it?
- Liz Allen
The body of scientific literature is enormous, and growing fast: Almost a million new articles are indexed every year, according to an analysis published in Quantitative Science Studies in 2024. Human researchers can make connections between these papers one by one, but it is slow going and limited to the particular branches of their expertise. As academic publishing transforms from its leather-bound, gold-embossed past into something more open, networked, and social, however, Allen and others may be on the brink of bringing a scientific version of this fairytale library, with its books and journals that communicate with each other, to the real world of research.

But what will it take to make papers that can 'talk' to each other? The answer sounds mundane: metadata. By thoroughly tagging research artefacts with all the topics they touch on, linking them with underpinning research data, and coupling them with comprehensive author and provenance information, publishers enable connections between researchers and results—even those that might not be obviously related.
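
To make the idea concrete, the sketch below (written in Python purely for illustration) shows roughly what such a machine-readable record might contain. The field names, identifiers, and values are invented for this example rather than drawn from any particular publisher's schema, but the ingredients are the same: persistent identifiers, topic tags, author and funder information, and links to underlying data.

    # A minimal, hypothetical metadata record for one research output.
    # Field names and identifiers are illustrative only, not a real publisher's schema.
    article_metadata = {
        "doi": "10.1234/example.5678",            # persistent identifier (made-up DOI)
        "title": "An Example Study",
        "subjects": ["oncology", "machine learning"],  # topic tags that let tools connect related work
        "authors": [
            {"name": "A. Researcher", "orcid": "0000-0000-0000-0000"},  # placeholder ORCID iD
        ],
        "funders": [{"name": "Example Funding Agency", "grant": "EX-0001"}],
        "datasets": [{"doi": "10.1234/data.9012"}],    # underpinning data linked to the article
    }

Records of this kind, however they are formatted in practice, are what allow search tools and learning algorithms to trace a result back to its topics, authors, funders, and underlying data.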

AI Peer Reviewers

The science journals Critical Care Medicine and Biology Open and the science platform Research Hub have been testing how paying reviewers affects the science publishing process, while Research Hub has also been using AI reviewers. Reporter Geoff Marsh found out more.

This, inevitably, is where AI enters the story. Artificial intelligence technologies can quickly aggregate and summarise vast quantities of research outputs and insights all at once, across disciplines, languages, and decades, bringing new opportunities for a multitude of audiences to access and use research findings.

"Learning algorithms and search engines are already able to rapidly access large corpuses of scientific literature and make connections," says Allen. Such technologies are already starting to make connections between research areas, help identify gaps in our existing knowledge, and help to shape promising new areas—including, for example,making links between previously unconnected disease pathways and mechanisms to accelerate the development of new treatments or drug targets.

Some publishers are already partnering with AI developers to help make connections across the literature. Oxford University Press, Taylor & Francis, Wiley, Elsevier, and Springer Nature have all announced AI tools and partnerships. Elsevier recently launched its ScienceDirect AI tool, LeapSpace, which can synthesize millions of papers to generate fully cited answers to natural-language queries. Meanwhile, many authors have objected to their work being sold to AI developers without notice or compensation.

Fraudulent Villains

But back to our magical talking library for a moment because, as in any good fairytale, there must be a villain. And in this one, the villains are untrustworthy books and articles—those that tell false or unsound stories about the world and about themselves. In the scientific world, these take the form of fraudulent papers, which erode the credibility of publishing and of the scientific enterprise. "The world of publishing is seeing a rapid increase in the incidence of paper mills and the creation of fake or fraudulent content—in the pursuit of motivations to publish research—all of which threaten to undermine trust in science and research," says Allen.

Some such publications are generated by AI or written with ill intent. But others fall into gray areas unintentionally incentivized by the foundational structures of academia. "A lot of it is being driven by the pressure for people to publish to support career progression, but if you combine this with publishing business models that allow researchers to pay to publish, then there's a model there that can and has been exploited by some bad actors in the publishing system," Allen says.

What is essential is that trust markers and validation are built into the scholarly publishing system: for all the progress in making content more rapidly and openly available, the benefits only follow if readers and users of research can trust what they see and read. "All the users of research should be able to trust what they see—and have transparency around the provenance of that research," says Allen. "Where's it from? Who wrote it? Who's been involved in it? Who funded it?"


CRediT where credit is due

The Contributor Role Taxonomy helps to fairly identify individual input in research collaborations. Image credit: Patricia C. Adams
One simple way to help build provenance into the research metadata is to ensure clarity around the different contributions to published research—beyond a list of authors, which reduces the nuance and complexity of the scientific endeavor, and the diversity of contributions, to one-dimensional strings of names. Junior contributors often find themselves sandwiched between more senior authors, or not included at all. Some disciplines list authors alphabetically; others list authors, in principle, by contribution. Members of large projects—especially in physics—may find their names lost in the fine print of an author list that credits entire collaborations and can run to hundreds or thousands of names, regardless of each individual's involvement in the work described in a given paper.

Many benefits can follow from adding more information about research contributions to published research outputs. As Allen and her colleagues wrote in 2019, providing accessible information about contribution helps to "prevent questionable, guest, and 'ghost' authorship" and "to provide a way to make sense of the increasing number of authors listed in research articles in many areas of science; and to provide visibility to early career researcher contributions where a 'first author' paper may be elusive."

Shining a Light on Hidden Roles

Following a 2012 workshop hosted by the Wellcome Trust, UK, and Harvard University, Cambridge, Massachusetts, Allen and a group of colleagues joined together to develop a simple way to classify and capture, at the point of publication, the various roles involved in research. The resulting Contributor Role Taxonomy, or CRediT, sets out 14 roles that are typically involved in research and performed by the individuals listed as 'authors' of published research. Importantly, not only does CRediT have the potential to add a further layer of provenance and accountability to research output, it also helps to shine a light on some of the traditionally more hidden roles in research—like data curation, methodology development, visualization, and software—that might previously have been relegated to an acknowledgements section at the tail end of a paper. In a small way, this simple piece of research information can help to support initiatives that aim to recognize the diversity of outputs and contributions to research as part of more holistic approaches to research assessment.
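
For illustration only, a submission system might record these roles as structured data along the lines of the Python sketch below. The record layout, names, and identifiers here are hypothetical; the 14 role names themselves, however, are those defined by the CRediT taxonomy.

    # Illustrative sketch of how contributor roles might be captured at submission.
    # The 14 role names come from the CRediT taxonomy; the record structure is invented.
    CREDIT_ROLES = [
        "Conceptualization", "Data curation", "Formal analysis", "Funding acquisition",
        "Investigation", "Methodology", "Project administration", "Resources",
        "Software", "Supervision", "Validation", "Visualization",
        "Writing - original draft", "Writing - review & editing",
    ]

    contributors = [
        {"name": "A. Researcher",                 # hypothetical contributor
         "orcid": "0000-0000-0000-0000",          # placeholder ORCID iD
         "roles": ["Conceptualization", "Writing - original draft"]},
        {"name": "B. Analyst",
         "roles": ["Data curation", "Software", "Visualization"]},
    ]

    # Check that every declared role is a recognized CRediT role.
    for person in contributors:
        assert all(role in CREDIT_ROLES for role in person["roles"])

Captured this way, each contribution becomes a small, machine-readable piece of provenance that travels with the paper rather than living only in an acknowledgements paragraph.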

It's important for researchers working across all disciplines and in a diversity of roles to have simple ways to share and showcase what they do.
- Liz Allen
"It's not all about being first author on a Nature paper," says Allen. "It's important for researchers working across all disciplines and in a diversity of roles to have simple ways to share and showcase what they do."

Today, many publishers large and small, including Elsevier and the Public Library of Science (PLOS), require authors to add contributor information alongside authorship as part of the submission process. CRediT was codified as an ANSI/NISO standard in 2022 and has already been translated into 13 languages, with more on the way.

The trust conferred by systems like CRediT is a critical part of the infrastructure enabling researchers to make connections—and generate new ideas—across scientific publishing. "I think the next few years will be pivotal, in terms of making connections and discovering things that will make a positive difference to people, society—everything," says Allen. "The important thing is that we need to build in markers of trust and provenance from the start."

Lead image credit: Triff.