Beating Back Fraud in Mathematics

November 18, 2025
by Nicola Jones
A joint working group of mathematicians from the IMU and the ICIAM recommends ways to escape the "parallel universe of fake papers."
When people think of scientific fraud, the field of mathematics doesn't ordinarily leap to mind. Mathematicians don't publish very often, and face less pressure to 'publish or perish' than many academics. They don't often fight for spots in Nature or Science. There is less money involved, too: mathematicians have relatively small grants, require no expensive laboratories, and risk no pharmaceutical profits with their work. A faked mathematical equation is surely easier to spot than, say, a set of faked experimental data. Why would anyone publish a fraudulent mathematics paper?

But the field is not immune to ethical problems—in fact, it is unusually vulnerable to them. In October, a joint working group of the International Mathematical Union (IMU) and the International Council for Industrial and Applied Mathematics (ICIAM) released a scathing report on poor publishing practices in mathematics. The group published two companion papers in the Notices of the American Mathematical Society taking a hard look at fraud and how to fight it.

Nicola Jones spoke with the head of that working group, Ilka Agricola, chair of the IMU's committee on publishing and a physicist and mathematician at the University of Marburg, in Germany.

Is there really much fraud in math?

For a long time I think mathematicians would have said that fraud was not a major problem in mathematics, for the reasons you mentioned. But now we understand as a community that yes, math has a fraud problem. It's probably fair to say it's not as large as in some areas of medicine or the life sciences, but mathematics is particularly vulnerable because we don't publish so many papers. The number of nonsense, fake, and AI-generated papers that appear in predatory journals is, by our estimate, now larger than the number of real, serious research papers in math.

How does this happen? Who is producing these articles, and why?

It starts with predatory journals. There are also mega-journals, which most of my colleagues have never heard of, that publish thousands of articles per year. They are full of nonsensical content. These papers are not made to be read; they are made purely to inflate bibliometric numbers.

The predatory journals then try to lure academics into this parallel world of fake science. We all get many emails every day inviting us to publish in special issues, conferences, or journals nobody has ever heard of. Put your first paper on the arXiv (a popular preprint server) and you will get these phishing emails within two days at most. Young people who don't know what's behind this system might say, 'Well, why not? Let's try that. Looks good.' Sometimes these journals put colleagues' names on their editorial boards without asking, to make the journal look reputable. Or maybe they asked, and the colleague didn't realize what kind of journal it was.

Why do these predatory journals exist?

For money. They can charge hundreds or thousands of dollars per publication. This is not the normal model in mathematics publishing: We typically publish in journals that don't charge article processing fees to individual researchers. So I don't need to spend any money on an article processing charge unless I'm in dire need of publications.

A PhD student of mine needed less than 30 minutes to find a social media channel where he could buy articles, authorships or citations. It's all there for sale, and then typically payment is in crypto coins.

So, some academics are innocently lured into the system, maybe publishing papers that aren't very important, and others are intentionally buying citations?

Yes. Some people feel they need citations or publications for their career. I heard about a person having a tenure track evaluation and getting, a few months before the evaluation, the news that they needed half a dozen papers. So, the person bought those papers. And I cannot blame the individual for that.

Review times can be long, sometimes more than a year. That is sometimes too long for someone trying to finish a PhD at an institution that requires published papers to graduate. So they publish in bad journals, hoping it will be faster. I have seen people withdraw their submissions from serious journals and send them to predatory journals instead in order to fulfil the requirements for their PhD.

Another problem is academics listing fake secondary affiliations. What is that about?

People can be paid large amounts of money for adding another affiliation on top of their primary one. This can start at US$10,000 a year and can go up to $100,000. Maybe you'll have to be there for a week in the year or maybe not at all. Some Saudi Arabian universities are reported to have really successfully boosted their rankings, for a while, this way.

Is there an example of a particularly famous case of bad practice in mathematics?

Yes, it is well known in the mathematics community that there is a whole research area in which allegedly all the high players are part of the problem, running a citation ring.

Is it easy to make a fake math paper?

Creating something which looks like a research article to a non-expert is relatively easy, by using AI, by recycling images, by writing long calculations that impress the layman. There are generators that do this automatically—like Mathgen—that have existed for years. It's not new. But the numbers are increasing. Journals can get a whole wave of submissions by professional brokers.

One trick that is very common is that these papers give the appearance of being applied math. Of course, applied mathematics is important. Except that nobody's interested in these particular applications. It's totally unserious stuff. Totally pointless.

In 2023 the data analytics company Clarivate, which produces the influential Highly Cited Researchers (HCR) list, decided to omit mathematics from its analysis, because that list was—everyone agreed—flooded with a lot of unknown and suspicious names. Did Clarivate's decision have repercussions for the mathematics community?

Looking back, I guess we have to thank Clarivate, because this was really the wake up call to the math community.

This November, Clarivate announced that it had made changes to how it compiles its Highly Cited Researchers list, and as a result math is back on the board. Has the problem been fixed?

The fraudsters have been removed from the list. But is the list meaningful? I don't think so. Not at this stage. There's a lot of room for improvement.

I had a quick look. There are, for example, five mathematicians on the list from Germany. This is not the list of most influential mathematicians in Germany that I would produce. The first name is a mechanical engineer. Another is Peter Scholze (co-director at the Max Planck Institute for Mathematics in Bonn): we all agree that he's one of the most important mathematicians in Germany at the moment, he's a Fields Medalist (winner of one of the most prestigious prizes in mathematics) and he's doing great stuff, but he's doing great stuff in a very small, very specialized area of mathematics. The other three are from very interdisciplinary, very applied areas.

On the other hand, Terence Tao (a Fields medalist and a professor of mathematics at the University of California, Los Angeles) is not on the list anymore. That surprises me. He used to be on the list: he was my example to show obviously there were some extremely good colleagues on the list.

There are plenty of other bibliometric measures of academic success. Are they all flawed?

The assumption of the community was that every metric—impact factor, citations, h-index, whatever—has a certain value, and then there's cheating, which is like an error bar on that number. What I am claiming is that the error bar is larger than the number you're measuring. And therefore it's pointless.

As long as we don't get this parallel universe of fake papers under control, it's always going to infect whatever numbers we're producing. Policy makers, governments, funding agencies love metrics. Universities use them for public relations, for advertisement, for getting new students, for setting the level of tuition fees, because parents want to send their kids to good universities. The question is, should they trust a company (like Clarivate) to define who the good people are? They should trust the community instead.

Are there better ways of assessing who is doing important work in mathematics?

We have very good lists of reliable journals in searchable databases: zbMATH Open and MathSciNet from Mathematical Reviews. Some of the journals in the zbMATH database say "no longer indexed." This is a hint that there is a problem with this journal.

The American Mathematical Society has a list of fellows. And we have prize winners, of course. These are both indicators that someone is doing good work. Ultimately no one should rely on a single parameter. Relying only on numbers would require perfect data, which we don't have.

It's kind of ironic talking to a mathematician about how numbers aren't meaningful.

We are not easily impressed by numbers. Our business is to question the numbers. If the method behind it is wrong, then you can prove anything with mathematics.

One solution you recommend is for granting agencies and universities to move away from using bibliometrics. Are there examples of good practice?

Practices vary a lot from place to place. The German Research Foundation (DFG), when you write an application, they don't want an h-index. They don't want citation counts. In the application, they ask you to give your 10 most important papers.

What can mathematicians do, themselves?

First of all, become aware of the problem: don't fall for the phishing e-mails, don't publish in predatory journals, don't join their editorial boards. Educate the younger generation so that they learn about the dangers and don't fall for them. Check the journals: being indexed in zbMATH Open is one very good indicator of quality. Our paper lists many recommendations.

Can we beat back the tide of predatory journals?

Not easily. Creating a company and a website to publish fake science is far too easy. It's not dangerous, you know; it's not drugs, there are no machine guns. It's fraud, but it's clean fraud. That makes it attractive. And it's global. How can I go after a scam company or scammers anywhere in the world? I have no tools for that.

This is a huge market. It has customers, it has producers, and they are infiltrating the whole research system. It's a systemic problem.

What's the best way forward?

Most mathematicians are serious people. They are doing this because they believe in their science and they want to make some progress in it, and because they like doing it. But then we have people who want to make money. And some people who want to make a career without having to do the hard science. And then we have companies who make money from selling databases and analyses. All these people are creating something that makes our life so hard. But it's big business.

As long as there are incentives to cheat, we have a problem. As long as there's so much money to be made. It's not only frustrating, it really makes me angry.