Reading time: 8-10 minutes
This blog has been critical of MDPI in the past. That came about honestly: in 2021 I looked at how inflated a journal's Clarivate Journal Impact Factor was relative to a metric with built-in rank normalization, the SCImago Journal Rank. In that analysis, MDPI was far and away the worst offender for Impact Factor Inflation, showing significantly different citation behaviour compared not only to all not-for-profit publishers, but also to for-profit Open Access publishers like BioMed Central (BMC) and even Frontiers Media ("Frontiers in ____").
Impact Factor Inflation is a metric of anomalous citation behaviour. It flags publishers whose Clarivate Journal Impact Factors (IF) are much higher than expected once you normalize for the network of journals citing that publisher (using the SCImago Journal Rank, SJR). The SJR formula does not reward self-citation, or receiving many citations from only a small pool of journals. Thus a very high Impact Factor relative to SJR (suggested litmus test: 4x higher) reveals an Impact Factor inflated by self-citation, or by small self-citing circles of authors and journals.
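The litmus test above is simple arithmetic, and can be sketched in a few lines. The journal names and metric values below are hypothetical examples for illustration, not real data from the analysis:

```python
# Minimal sketch of the Impact Factor Inflation litmus test described above.
# All journal names and metric values here are hypothetical, not real data.

def inflation_ratio(impact_factor: float, sjr: float) -> float:
    """Ratio of Clarivate Journal Impact Factor to SCImago Journal Rank."""
    return impact_factor / sjr

def looks_inflated(impact_factor: float, sjr: float, threshold: float = 4.0) -> bool:
    """Suggested litmus test: flag a journal whose IF exceeds `threshold` x its SJR."""
    return inflation_ratio(impact_factor, sjr) > threshold

# Hypothetical journals: (name, Impact Factor, SJR)
journals = [
    ("Journal A", 3.2, 1.1),  # ratio ~2.9 -> below the 4x threshold
    ("Journal B", 4.5, 0.6),  # ratio 7.5  -> flagged as inflated
]

for name, jif, sjr in journals:
    print(name, round(inflation_ratio(jif, sjr), 1), looks_inflated(jif, sjr))
```

Because SJR discounts self-citation and citations from a small circle of journals while the Impact Factor does not, a large IF/SJR ratio is exactly where the two metrics disagree about a journal's influence.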
I followed that analysis with a simple poll asking Twitter users their opinions of various publishers. I repeated that poll in 2023 on both Twitter and Mastodon and got basically the same result (if anything, opinions had settled further into camps): nearly everyone labelled MDPI a somewhat or outright "predatory" publisher.
Opinion polls of academic publishers asking "What do you think of publisher ___?", conducted in 2021 and 2023 on Twitter and/or Mastodon.
Around the same time, Paolo Crosetto wrote a fantastic piece on MDPI's anomalous growth of Special Issues. Year-over-year growth of Special Issues exploded between 2020 and 2021, going from 6,756 in 2020 to 39,587 in 2021. The journal Sustainability has been publishing ~10 special issues per day, and keep in mind a special issue often comprises ~10 articles. As Crosetto put it: "If each special issue plans to host six to 10 papers, this is 60 to 100 papers per day. At some point, you are bound to touch a limit – there possibly aren't that many papers around... That's not to talk about quality, because even if you manage to attract 60 papers a day, how good can they be?"
The reason the special issue model of publishing exists is because there are always ideas floating around at conferences and in research circles about what direction to take the field. These ideas are often an individual's synthesis of the literature, and their understanding of both unpublished data and why contradictory results in the published literature might exist. Special Issues, at their inception, were an opportunity to give authors a chance to put forth ideas that weren't built on as solid a data foundation, but were worthy topics to bring to the research community. They were article collections to get the field to pause and digest the many papers that were recently published, and figure out what direction(s) were going to be most fruitful moving forward.
Publishers like MDPI (and others) abused that loophole. They created a publishing model built on appealing to the publish-or-perish mentality, and the need for researchers to buff out their CVs with journal articles. So they created a Ponzi-esque system in which they aggressively recruited researchers to host special issues (i.e. act as editors for free), who in turn petition their networks of colleagues for articles. The issue isn't in the model: it's in the frequency. At some point, all the burning questions have been discussed, and the field is waiting for new data, not re-hashings of the same review articles that were written just a year ago.
In the absence of new ideas, authors instead began using Special Issues as outlets for research that wasn't quite ready for publication. Knowing that Special Issues are under less scrutiny, it became common for authors to publish work that could have used an extra experiment, or some more time for thought and interpretation. But hey... publish or perish.
I say this having been first author on a publication in a special issue in Frontiers in Immunology, where my co-author and I both took advantage of the invitation to submit observations we'd made that didn't really fit in any other article. It's a publication where we got two solid peer reviewers, but a third (a standard more rigorous journals often strive to achieve) might have caught one of the key inaccuracies of our discussion: we put forth the idea that plant-feeding ecologies were microbe-poor by virtue of the host plant's immune system. The idea was that plant-parasite species might have reduced pressure on their immune system by outsourcing microbe suppression to host plants, which would explain why many immune genes are pseudogenized in those plant-feeding parasite species.
And that's the critical flaw with the way the MDPI model abuses the Special Issue loophole: ideas that aren't ready for publication get swept through peer review anyway, because everyone understands that special issues are treated with less scrutiny. And to be perfectly clear, MDPI very intentionally kept it that way: article processing times at MDPI are half those of other publishers, or less. MDPI's lax peer review is so anomalous that Christos Petrou had to remove MDPI from his analysis of publisher turnaround times in order to paint a meaningful picture of the data.
Analyses by Christos Petrou and Paolo Crosetto both found that MDPI had coordinated a massive reduction in article turnaround time in recent years, down to just ~37 days from first submission to acceptance.
Edit 25/03/2023: beyond processing articles quickly, rejection rates have also decreased. Dan Brockington's analysis in Nov 2022 shows how this disproportionately increased MDPI revenues for 2021.
And so there is a lot to discuss moving forward with how to deal with MDPI. There have already been at least two world-ranked universities whose science faculties have rescinded support for MDPI, telling their researchers that articles published in MDPI would not be considered towards their academic CV.
It is high time that we sit down and consider whether such explosive publishing output really advances science meaningfully. Indeed, the process is not harmless: it takes research funds and puts them towards article processing fees for articles of questionable necessity, and it dilutes the literature with hundreds of articles that could have used more care and attention before final publication. Paolo Crosetto questioned whether there are even enough genuine papers in progress to fill the demands of journals like Sustainability, which publishes on the order of 100 articles per day through special issues; I agree. It's very reminiscent of "salami publishing," where one publishes piecemeal results one by one rather than waiting for a complete story with its loose ends tied off.
How unique is MDPI in this round of Clarivate cuts?
It is therefore not surprising to those of us who have been following MDPI's trajectory over the last few years that MDPI's flagship journal IJERPH finally got burned: it really could have been any MDPI journal, and for all we know more delistings are coming. Clarivate has said it has only finished analyzing ~50 of the ~500 journals of concern that came up in its 2023 round of considerations. So MDPI could have more journals on the chopping block.
It should also be said that this series of cuts isn't unique to MDPI. Another publisher seriously affected by this round of delistings is Hindawi, which had ~20 journals delisted from Clarivate in early 2023. That may make MDPI's two delisted journals seem small by comparison, but to emphasize how many articles delisting IJERPH affects: IJERPH publishes more articles per year than PLOS One, and is second only to Springer Nature's mega-journal Scientific Reports in terms of total article output among biomedicine-related journals.
What's next then?
This is really a key turning point for MDPI: for better or worse, the current publishing system places an immense amount of value in Clarivate and Web of Science as an authority on publisher integrity. In some ways, the damage is already done: even if MDPI addressed Clarivate's concerns and got IJERPH reinstated to Web of Science, who would really look at IJERPH the same now? Especially when so many comparable publishers are available (such as Frontiers Media, as is apparent from the Table above). Per my IF/SJR analysis, per Christos Petrou's article turnaround time analysis, and per polls of public opinion, Frontiers has somehow maintained a reputation noticeably different from MDPI's in both citation behaviour and public perception, while still keeping reasonable article turnaround times, and it hasn't earned the ire of the entire research community (yet?). MDPI's ability to sell itself as a highly-respected publisher has taken a serious blow with this delisting, particularly as the same behaviours that got IJERPH delisted are copy/pasted across all MDPI journals. However, MDPI could still right the ship with a serious overhaul of its business approach and its editorial practice.
However... that's asking a lot of a publisher that spent the last decade coordinating a massive overhaul of its structure to reach this exact point. For now, I will continue to boycott MDPI requests to review, submit articles, or host special issues (of which I've received several in the past few weeks alone). I'd encourage all readers to join me: the power we give to publishers like MDPI, or to authorities like Clarivate, is the power of collective action. Either we keep following MDPI's lead, or we follow the example of world universities and Clarivate in removing MDPI from our personal lists of reputable publishers.