How To Fix Peer Review
Post-publication peer review and other innovations could mend the broken science-publishing system.
by Foundational Questions Institute
March 11, 2025
“We haven’t quite completed the transition that should have happened in 1995, when the web exploded,” says Tony Ross-Hellauer, leader of the
Open and Reproducible Research Group at the Graz University of Technology in Styria, Austria, when asked how scientific publishers could improve academic journals and the peer-review process. “We’re still largely tied to what mirrors the paper format that was established in 1665.”
The
first scientific journal celebrates the 360th anniversary of its launch this March. In
part 1 of this series of articles examining science publishing, FQxI considered the evolution of scientific journals over the centuries, set against the rise of online preprint servers three decades ago.
Part 2 of the series outlined the problems with peer review as revealed by a survey of 73 FQxI members and a series of interviews with scientists and journal editors, conducted by reporters
Brendan Foster,
Miriam Frankel,
Zeeya Merali and
Colin Stuart. Issues raised included a lack of good reviewers whose expertise matched the topics of the papers under review, leading to an increase in the publication of low-quality papers, and a rise in the number of fraudulent papers slipping through journals’ nets. This third and final part addresses possible ways to fix these and other problems, with researchers calling for a more open, transparent and community-driven approach that better harnesses available technology. Possibilities include compensating authors and volunteer reviewers for their work, using AI to improve the algorithms that journals use to select referees, and making ‘post-publication’ peer review, with ongoing community discussion threads, the norm.
Academics who serve as peer reviewers for most journals work as volunteers. Stretched for time, and under pressure to publish their own research, as many as
75% of first-choice reviewers refuse requests from journal editors to perform reviews. In FQxI’s survey, 70% of respondents reported turning down a review request in the past year that they would otherwise have accepted, due to a lack of time. So it makes sense that many researchers have proposed compensating reviewers, to incentivise academics to take on this task, particularly when working with giant corporate publishing houses. “As authors, we write and typeset for free, and then as reviewers, we review papers for free, as well,” says Kasia Rejzner, a mathematician at the University of York, UK, and president of the
International Association of Mathematical Physics. “With the number of hours everybody spends, it would be fair to get some payment.”
So would just offering cash to both authors and reviewers be the simplest way to solve the problem? It’s a question that the founding editors of the non-profit journal
Quantum wrestled with before deciding against giving monetary payments, says co-founder Lídia del Rio, a physicist at the University of Zurich, Switzerland. She notes that offering money risks creating a new set of bad incentives, encouraging authors to write even more (possibly low-quality) papers and tempting reviewers to accept poor papers and churn out faster, less thoughtful review reports. Only 14% of FQxI’s survey respondents believed that authors should be paid; the most common reason given for opposing this proposal was that researchers are already paid by their universities to produce and disseminate their research and so do not need additional monetary reward. “This is a hot-button topic, but if you look at any of the contracts for academic scientists, part of the contract is service to the profession,” says David V. Smith, a neuroscientist at Temple University in Philadelphia, Pennsylvania, who has
written about ways to improve peer review, based on crowd-sourcing opinions from academics on X (then Twitter).
By contrast, 59% of survey respondents agreed that reviewers should be compensated, in some manner, for their time and effort. However, many, including Smith, Ross-Hellauer, and Jorge Pullin, a physicist at Louisiana State University in Baton Rouge, who has sat on multiple editorial boards, argue that publishers would probably just pass this additional cost on to paper authors. Academic authors are routinely required to pay ‘article processing charges,’ ranging from US$100 to US$10,000, to publish their papers, and this fee would likely rise further. Some have suggested that government funding agencies could help to shoulder the burden, but, Pullin notes, that seems increasingly unfeasible, given the current political and financial climate in many countries.
Paying reviewers may have another unwanted side-effect, inadvertently creating a new profession. If researchers find that they can make a better living by just reviewing papers, they may lose both the time and incentive to do their own original research, says Fengyuan Liu, a doctoral student in computer science at New York University in Abu Dhabi, who has examined fraud in science publishing. But to maintain the quality of reviews, “you want reviewers who are active researchers,” he says.
Smith notes that researchers who have published their own papers in journals have already benefited from the peer-review system, “and in a sense, owe it to do some reviews, to make sure the system is back in balance.” Where this becomes problematic is when some researchers are carrying out more than their fair share of reviews. To counter this, Smith proposes that journals could track how many reviews an individual carries out for them, and compensate those who perform extra reviews by waiving article processing charges for future papers that they submit to the journal as authors. Another way that journals could compensate reviewers, without paying them directly, is for each publisher to make payments into a fund that universities could then access to pay researchers, suggests Ivan Oransky, co-founder of
Retraction Watch, a database that tracks academic misconduct.
Perhaps even more valuable than these kinds of indirect compensation credits would be instituting a system that publicly rewards researchers who perform high-quality peer reviews, so that good reviewing carries career-progression currency. Raissa D’Souza, a computer scientist and engineer at the University of California, Davis, and a founding editor of the open-access journal
Physical Review Research, published by the American Physical Society (APS), notes that each year the APS gives out awards to its outstanding referees, and winning is considered a great honor. She suggests that all journals should publish the names of their reviewers, so that researchers could list this information on their academic profiles, such as on ResearchGate, gaining prestige. This information could potentially contribute to a “reviewer ranking” that could be leveraged for career advancement, she says.
Taking things a step further, D’Souza would also like to remove the veil of secrecy surrounding reviewers, who usually submit their review reports anonymously. Some journals, including Springer Nature’s
Nature Communications, publish these reports, and the exchanges between authors and reviewers, alongside papers, with the reviewers named (if they consent), in an effort to increase transparency. “Having accountable peer review, where you sign your name, and publish the comments that ensued throughout the peer review process, would curtail a lot of the bad behavior that people engage in when they are anonymous,” says D’Souza, including being rude, showing bias, making unfair demands to be cited in the paper, and writing flippant review reports that do not seriously consider the work.
The flip side of naming reviewers, however, is that even fewer researchers will be inclined to agree to review, for fear of being pulled into protracted arguments with disgruntled authors, cautions Pullin. “When you get a paper that is a disaster, and you say that it is a disaster, then this person is probably going to fight you forever, and they’re going to start harassing you at your office and at your home,” he says.
There have also been efforts to create a fairer way to credit all those who contribute to a particular piece of research, but whose efforts can easily be disregarded, with the development of the Contributor Role Taxonomy, or
CRediT, in 2015, supported by the Wellcome Trust and the Alfred P. Sloan Foundation, which journals and publishers can—and often
do—now implement. The project was conceived at a workshop at Harvard University that brought together researchers, institutions, publishers and funders to come up with a simple 14-role taxonomy to better capture and describe roles that are often overlooked, from conceptualisation, through project administration, to writing, reviewing and editing papers. “The core is contributorship, rather than authorship,” says Ross-Hellauer, who would like to see the system employed more widely. “This is for people who contributed in some way to a paper, maybe doing data analysis or running experiments—usually juniors, master’s students or PhDs, who don’t get listed as authors.”
Another common complaint of authors is that the assigned reviewers do not always seem to be qualified to judge their work. Physicist Abhay Ashtekar of Pennsylvania State University in State College, who has sat on multiple editorial boards, notes that a large part of the problem is that journal editors are not always active researchers in the field themselves. They can then become overly reliant on the recommendations of unsophisticated computer algorithms that make shoddy matches between paper topics and potential reviewers. The first step to rectifying the situation is to ensure that editors are themselves working scientists who are subject-area experts, and then to give them improved computational tools, powered by well-trained AI. “This could be low-hanging fruit, using better datasets to train AI on an ongoing basis, so that the programs that journals use to select reviewers are better,” Ashtekar says.
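Ashtekar does not describe how such matching tools work under the hood, but, as a rough illustration, reviewer selection of this kind often boils down to comparing the text of a submission against the published record of candidate referees. The short Python sketch below, using made-up reviewer names and abstracts (none of it drawn from any actual journal system), shows one simple, assumed approach based on TF-IDF text similarity; the systems journals actually run are presumably far more sophisticated, and, as the article notes, depend heavily on how good their underlying data are.

# Illustrative sketch only: match a submitted abstract to candidate reviewers
# by comparing TF-IDF vectors of the abstract against each reviewer's
# (hypothetical) recent publication abstracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical reviewer profiles: concatenated abstracts of recent papers.
reviewer_profiles = {
    "Reviewer A": "loop quantum gravity spin networks black hole entropy corrections",
    "Reviewer B": "machine learning fraud detection citation networks peer review",
    "Reviewer C": "quantum information entanglement measures resource theories",
}

submission = "We compute black hole entropy corrections in loop quantum gravity."

# Build one vocabulary over the submission plus all reviewer profiles.
corpus = [submission] + list(reviewer_profiles.values())
vectors = TfidfVectorizer(stop_words="english").fit_transform(corpus)

# Cosine similarity between the submission (row 0) and each reviewer profile.
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()

# Rank candidates; a human editor would still vet this shortlist.
for name, score in sorted(zip(reviewer_profiles, scores),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.2f}")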
Better training could help human intelligences too, not just AI. Multiple FQxI survey respondents commented that reviewers should be told not to be rude to authors during the review process—and that their reports should be thrown out if they refuse to be polite and respectful. “An interesting aspect of the system is that most peer reviewers have never received any training to be a peer reviewer,” says Miguel Rois, a psychologist at St. John’s University, in Staten Island, New York, who is a member of the board of directors of the
Center for Scientific Integrity. Perhaps universities should invest in training young researchers and explaining that reviewers should be thinking of ways to help the author improve, says Rois, not thinking, “‘how do I screw this author?’”
Other innovations may be costlier, but are vital to changing the way that science is done, communicated and reviewed effectively, says Ross-Hellauer. He is calling for more funds to incentivise researchers to reproduce work. He also believes that rather than waiting to produce a final paper when all work is done, it would be beneficial to move towards incremental publishing—sharing code, data, and results as scientists go along—and discussing each element of a project openly with peers, to receive ongoing feedback.
By far the greatest concern of both those interviewed and FQxI survey respondents was the huge profits made by big publishing houses, with multiple survey respondents noting that they avoid reviewing for, or publishing in, for-profit journals. Oransky, who is on the advisory board of the preprint server arXiv, notes that the academics who serve as arXiv’s moderators do so as volunteers, without complaint. “People are happy to spend many hours doing that, when it’s not for some corporate-owned publishing house,” he says.
D’Souza says that some universities are fighting back against megapublishers. “At the University of California, we’ve got 10 campuses, and we’ve got major purchasing power when it comes to software tools, or licenses, or publishing and there have been a lot of lawsuits back and forth between Springer Nature and the UC system, about open access fees, about predatory journal practices, and library subscription fees,” she says. In 2024, a UCLA neuroscientist
filed a class-action lawsuit against the six largest academic publishers (Elsevier BV; Wolters Kluwer NV; John Wiley & Sons; Sage Publications; Taylor & Francis Group, Ltd; and Springer Nature AG & Co KGaA, along with STM and other publishers to be named later), alleging that they constitute a “cartel” that is “conspiring to unlawfully appropriate billions of dollars that would have otherwise funded science.”
Many survey respondents favor non-profit “arXiv overlay” journals—online journals that review and then link to final revised preprints on the arXiv, rather than typesetting and printing accepted papers—such as
Quantum and
The Open Journal of Astrophysics. These journals keep costs low by effectively “subcontracting all the publication, archiving, etc, to the arXiv, so they just concentrate on refereeing,” explains Pullin.
Pullin, Rejzner and D’Souza foresee traditional peer-reviewed journals eventually being replaced by an arXiv-like repository, with a rubber-stamping agency that would evaluate and grade research papers. The next step of evolution may be large-scale reviewing by consensus. “I can imagine a model without journals, but with something like arXiv, with a commenting option, where people respond to papers more on a discussion-forum basis,” says Rejzner. She is not the only one inspired by social media. “What I would really like to have is a ‘Facebook of Science,’ a place where I can see what my colleagues are doing,” says Pullin. “What are they reading? What interests them? Because these are very good people, it would be really useful information.”
There are a growing number of journals making innovations in that direction, by instituting some form of ‘post-publication peer review.’ In this approach, manuscripts are published upon submission and then left open for comments and review by anyone. “Just like peer review, you need to respond and may need to revise your manuscript after it is published,” says Liu. “So publication is not a done deal.”
The non-profit peer-reviewing journal
eLife aims to
publish without gatekeeping, and so made the controversial choice not to reject papers on the basis of peer review. Instead, when manuscripts are submitted, its expert editors decide whether to accept the paper for review. If they do, reviewers and editors produce assessments of the work. At this point, if the authors wish to proceed (and pay an optional processing charge), the reviewed preprint is published by
eLife, accompanied by the reviews and a summary from the editors about the paper’s significance. Authors may then choose to update their articles, based on this feedback, at which point
eLife can update its assessments. At any point, authors can choose to end the reviewing process and publish their final version of record.
“Post-publication peer review is one of the best things that ever happened to peer review,” says Rois. He notes that the strategy avoids good papers being rejected because of the bias of one reviewer. “In my view this is the future,” adds Ross-Hellauer.
There are many suggestions for modernising peer review to help to improve the quality of papers that make it into journals, but these do not address the deeper issues that created the publish-or-perish culture driving the production of bad papers in the first place. Multiple respondents to FQxI’s survey felt that hiring committees and funding agencies need to take a more holistic approach when assessing researchers, giving wider recognition to other elements of their work—teaching, community-building, public outreach and more—rather than just focusing on publication metrics.
Attempts have been made to move away from an over-reliance on publication metrics, with reforms outlined by
DORA, the San Francisco Declaration on Research Assessment, in 2012, and more recently by
CoARA, the Coalition for Advancing Research Assessment, launched in Europe in 2022. Oransky approves, and believes a seismic shift in the culture of how academics are judged is vital. “Institutions, funders, publishers and even industry need to really rethink the primacy of excessive citations in every ranking system and in every decision they make about an individual researcher, policy or grant priority,” he says.
Until that wholesale philosophy changes, the tweaks already being made by journals and called for by researchers will only treat the symptoms, not the causes of the flood of bad research. “Right now, we’re getting much better at building sewage-treatment plants at the mouths of rivers, which is good,” Oransky says. “But we need to stop encouraging people to dump sewage in the river.”
FQxI publishes a book series in partnership with Springer Nature. Miriam Frankel and Zeeya Merali have reported for Nature, and Zeeya Merali has reported for Science.