Risky Business: How Science Plays Things Too Safe

April 22, 2026
by George Musser
Funding agencies, AI, and even scientists themselves favor tried-and-tested research avenues over exploring new ideas, to the detriment of progress.
In 2003 Brandon Ogbunu was visiting graduate schools and trying to decide where to go. He recalls the algorithm for success that one prospective advisor offered him: "'Pick the virus family,'" he began. "'Work on the virus family. Pick a virus family related to it.'" It went on. "'Then you write your K Grant,'" requesting funds for starting a biomedical lab. "Well, he mapped out 10 years of my life," says Ogbunu. Up till then, his professors and mentors had been nerdy and curious, never careerist. He remembers thinking: "Where's the risk, my man? Aren't you supposed to be chasing something?"

Now a tenured evolutionary biologist and an expert on science and society at Yale University, in New Haven, Connecticut, Ogbunu says he realizes those words were spot on. "The sad part is that it was good advice," he says. "But it was good advice for a broken incentive system."

Who's to Blame?

Ogbunu is hardly the only person to fret that science is timid. "There's lots of different types of risk, and it varies tremendously from who you ask what it is," says Carl Bergstrom, an evolutionary biologist at the University of Washington, in Seattle, and a leader in the science of science. "Whatever it is, people think we don't have enough of it." Everyone thinks everyone else is to blame. Bergstrom notes: "If you ask researchers, 'Why don't you do high-risk research?' they say, 'Well, the funding agencies won't fund it.' If you ask the agencies, they say, 'Well, the researchers won't propose it.'"

Bergstrom and his co-author Kevin Gross, a statistician at North Carolina State University, in Raleigh, find that both have a point. In 2021 they analyzed the grant review process, publishing their findings in PNAS. Because funders judge the quality of research before it is undertaken, they are naturally inclined to support projects that are likely to succeed—which is the very definition of risk-averse. Bergstrom quotes a European Research Council document about a program to fund high-risk, high-reward research. It boasted that 80 percent of their projects produced major advances. "They don't even notice the irony," Bergstrom says.

Bergstrom argues that agencies should give people liberty to explore their interests and evaluate them post hoc. That would also avoid the universal problem of people spending more time writing grant proposals than actually doing the work. "I'd like to see, basically, more funding of people," he says.


Image: Carl Bergstrom, University of Washington. Credit: Kris Tsujikawa
But that same dynamic—that people like taking risks only if they pay off—applies to researchers, too. In 2024 Gross and Bergstrom analyzed the pressures that researchers collectively impose on themselves, publishing the results in PLOS Biology. Scientists play it safe to make sure they have something to show, because they are judged on their output. And who does the judging? Other scientists. People's personal livelihoods depend largely on the assessment of their peers. And even the most forgiving peers are always going to prefer an actual paper to a story of how they took a risk and it didn't work out.

AI Peer Reviewers

Journals Critical Care Medicine and Biology Open and the platform Research Hub have tested paying reviewers, while Research Hub has also trialed AI reviewers.

If instead scientists billed research hours like lawyers, they might have more job security and take more risks. But most scientists—those in academia, at any rate—would bristle at that. "I don't want anybody looking over my shoulder and counting face time," Bergstrom says. The autonomy that scientists prize exacts a cost. "If you let scientists self-govern, they won't choose the reward scheme that makes science most efficient," he says.

Artificial intelligence is exacerbating the problem. In January, James Evans of the University of Chicago, in Illinois, and colleagues published a study in Nature that looked at 41 million research papers, using a large language model to flag those that were written wholly or partly by AI. They found that authors of such papers published over three times as many articles as those who did not use AI, and advanced faster in their careers. "Junior scientists are facing incentives that push them to do things that don't align with what we think are appropriate norms of scientific rigor," Bergstrom says. Leaving aside the ethics, Evans and his colleagues found that AI tools favored tried-and-tested fields over the exploration of new ideas.

Pivot Penalty

Many researchers do have job security: tenure. Last year, Dashun Wang at Northwestern University in Evanston, Illinois, another leader in the science of science, and his colleagues tracked the publication records of 13,000 faculty members in science and engineering in the U.S. in the 2010s, publishing their findings in PNAS. To no one's great shock, they found that junior faculty pumped out papers in the years leading up to tenure. "That just shows you how powerful this whole incentive system is," says Wang.

What came next was more surprising. Tenured faculty didn't slack off. Those in lab-based fields such as biology and chemistry maintained or even intensified the pace. Those in more solitary fields such as math and economics published less—not because they were working less, but because they had branched out. "After tenure, people take more risk," Wang says.


Image: Dashun Wang, Northwestern University
One of the biggest risks they can take is to shift fields. Last year, in Nature, Wang and a somewhat different group of co-authors scrutinized what they called the "pivot penalty." They reviewed 26 million papers from 1970 to 2000 and 1.7 million U.S. patents from 1985 to 2000, giving special attention to the disciplines, as classified by the databases. They found that papers are cited less when authors deviate from their home discipline. Just over seven percent of papers by veterans of a discipline make it into the top five percent of cited papers, versus two percent of those by newcomers. In the top one percent and 0.1 percent, the imbalance is even more striking. There is a dose-response effect: the further the author's departure, the worse the paper's prospects. As a case study, the paper considered COVID-19 research, since the pandemic encouraged a lot of scientists to pivot. They faced the same penalty.

The authors argued the explanation has less to do with resistance to newcomers than with the inherent difficulty of finding your way in a new discipline. For instance, both younger and older scientists suffer the same penalty, so it can't just be a person's reputation, since young people are as unknown within their own field as outside it. As a clue, the authors found that pivoting papers are unconventional—mismatched to the standards of the field they are entering, as judged by the types of references they cite. So, even when they introduce fresh ideas, they do so in a way that is hard to assimilate into prior knowledge. "With high-pivot work, you see a very clear low degree of engagement with established knowledge, and that's quite plausible if you get into a new area that you may not know as well," Wang says.

Yet science has historically been driven by pivoters. "Schrödinger's What Is Life? was a pivot; The Origin of Species is a pivot," Ogbunu says. "All the foundational things that we do come from pivoters." But is that a thing of the past? Wang's group found that the pivot penalty has only worsened over the decades—reflecting, he thinks, increasing specialization. He pulls the latest issue of Nature from a pile on his desk. "I open it up and I don't understand 95 percent of the titles," he says.

Rage Against the Machine

But here's the twist: The academic job market has gotten so brutal that it inverts some of the older wisdom about risk-taking. Bergstrom says he used to advise grad students to play it safe, but a student recently told him: "'I'm playing such a difficult game, one that I'm almost certain to lose, that I'm willing to gamble quite aggressively.'"

Scientists can and should have an edge, like a rock musician, argues Ogbunu. "When a rocker goes corporate, it just feels wrong," he says. "You're supposed to be raging against the machine."

But while a musician shouldn't sell out to the suits, they do have to sell records. Ogbunu pivots; he leans into what he calls a "punkish" attitude; he writes a column for the science magazine Undark even though it brings no professional credit. He can do that because he has achieved conventional academic success. Ogbunu now tries to help his students walk that line. Play the academic game—you have to. Take risks, but be rigorous. "It better be right, and it better be done," Ogbunu says. "In fact, if you're going to be like me, it better be extra right and clear, because people are going to try and discredit you."

Above all, scientists should embrace the sense of difference—of being someone who exasperates parents and friends with endless questioning—that first drew them to their career path. "I'm not saying you've got to be violent; I'm not saying you've got to be loud; I'm not saying you've got to be obnoxious," Ogbunu says. "But you're supposed to be jamming."

Lead image: Brandon Ogbunu, Yale University.