This is part 1 of a 2-part essay on academic publishing. Once part 2 is published, I will link it here. In the meantime, you can subscribe to receive part 2 in your inbox for free.
The job of an academic scientist is to ideate research, propose doing the research to a funding body (i.e., submit a grant application), do the research, and publish the results of the research in an academic journal. If the journal is sufficiently prestigious, the publication can then be used to persuade future grant committees that your future research proposals should be funded. A side-effect of this cycle of publications and grants is that some scientific discovery occurs, which can then be used by biotech/pharmaceutical companies to actually benefit society.
Research funding covers not only research costs like expensive equipment and reagents, but also the salaries of all scientists involved and institutional overheads. This means that the livelihood of scientists (and the upkeep of universities) depends on getting their work published in prestigious academic journals. It is therefore of existential importance to many scientists that publishing be a fair and just system. The problem is, the system is markedly flawed.
Financially, the system is backwards. Scholarly research is typically locked behind paywalls, and universities must pay huge sums in subscriptions to the big prestigious for-profit journals in order to access research produced by scientists, including work produced by their own researchers. If academics wish for their research to be published under an “open access” license, which means that anyone can read the research article for free, they must pay an article processing fee that can be in excess of $10,000. This money, of course, comes from research grants which are derived either from taxes or philanthropic donations.
Beyond the economic exploitation by big journals, whose existence is propped up by the fact that researchers themselves are so hopelessly thirsty for prestige above all else, the quality control mechanisms that decide what does and does not get published are not only ineffective but actively detrimental to research. Those outside of academia know to revere the authority of a peer-reviewed study, but they may not know what peer review actually entails. The dirty secret is that peer review rarely prevents junk science from being published, as long as the editors of the journal in question find the results sufficiently palatable (politically or otherwise) and exciting.
For those who care about this stuff, this is old news. Michael Eisen was writing about how bad the traditional journal model is for science and society decades ago. His 2013 essay is a great primer on the problem. More recently, a scolding critique of peer review published a few years ago has been doing the rounds again, which I would also recommend as further reading.

Disgruntlement is certainly growing among scientists (in no small part due to the crusade of the Michael Eisens of the world). The issue has given rise to many attempts at new publishing models over the years, including Eisen’s own PLOS, as well as eLife, OpenRχiv (for biomedical sciences), and arχiv (for other sciences).
Indeed, the NIH—the primary funder for biomedical sciences research in the US—has recently implemented a new public access policy which instructs all scientists who publish NIH-funded research to make it immediately available to the public for free via PubMed Central. We’ll see if this policy is sufficient to get scientists to forgo the prestige of big journals, or whether it will just increase the rate at which scientists dish out NIH dollars on the huge open access article processing fees (I suspect the latter … big journals don’t really seem to be that worried either).
Sadly, none of the solutions so far have been particularly persuasive to scientists—primarily due to the whole prestige thing. In part 2, which I will publish next week, I will offer my own solution which I think overcomes the issues outlined here in part 1 by not only acknowledging prestige as the motivator for academics, but by actually using it. My solution also capitalises on new forms of technology that we already use in other realms to disseminate information to those who want to read it most.
For now, I wish to flesh out the issues in detail here in a way that strikes a chord with academics and non-academics alike. To do this, I will simply recount the process of publishing a paper in an academic journal. By the end of this essay you will hopefully have a grasp of the scale and the importance of the issue.
Let’s imagine a scenario where some scientists have uncovered something very exciting and wish to share the results with other academics.
Submission
The first step is to write a manuscript broadly in the format of a specific target journal. Academics will target a journal whose scope covers their research area, but also whose impact factor is as high as possible.
Impact factor, for those who are unaware, is a metric used to assess journal prestige based on how often the articles published in a particular journal are cited by other subsequent research papers. Since high impact factor journals are more widely read (e.g. Nature, Science, Cell), publishing here maximises the reach of your research and also boosts your prestige as a scientist. Impact factor is a self-reinforcing phenomenon, since high impact journals attract high impact research, and high impact research makes the journal it is published in “high impact”. This self-reinforcement means that it is *very* difficult for a new journal to compete with established journals for prestige.
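For the curious, the standard two-year impact factor is a simple ratio: citations received in a given year by articles the journal published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch (the official calculation has more nuance around what counts as a “citable item”):

```python
def impact_factor(citations_this_year: int, citable_items: int) -> float:
    """Two-year impact factor for a given year.

    citations_this_year: citations received this year by articles
        the journal published in the previous two years.
    citable_items: number of citable items the journal published
        in those same two years.
    """
    return citations_this_year / citable_items

# Hypothetical numbers: 3000 citations in 2024 to articles from
# 2022-2023, across 500 citable items published in 2022-2023.
print(impact_factor(3000, 500))  # 6.0
```

The denominator is where the self-reinforcement bites: a new journal with few published items and few readers citing them starts near zero, while an established journal's back catalogue keeps attracting citations.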
Once the results are written up, the corresponding author (typically the principal investigator/director of the lab where most of the work was carried out) will write a cover letter to the editor of the journal and submit it with the manuscript. After a week or so, an editor will have skimmed the paper and assessed whether it is good enough to go for peer review or not. If not, then you will have to submit the article to a journal with a lower impact factor (which will be easier to publish in). This can involve reformatting, restructuring, or rewriting the manuscript altogether if the new target journal has different rules or scope.
Since all the power here sits with the editors, there is a strong incentive for scientists to form good relationships with editors. The “best” scientists make a beeline for editors who attend conferences (indeed, “editorial presence” is a key factor in deciding which conferences are worthwhile to attend). The “best” scientists let the children of editors do internships in their labs. The “best” scientists publicly agree with the political views of editors. It is worth noting that full-time editors are, by definition, not actual scientists themselves.
As an aside, this is also the point at which science can be politicised. Editors can simply reject a paper if they don’t like what the results mean politically. This is something that Matt and Anton van der Merwe realised when they were trying to publish their comprehensive review of the evidence that SARS-CoV-2 most likely leaked from a lab (a lab in Wuhan that was actively working on ways to make bat viruses compatible with human cells, by the way). In the end, they published a version in the Telegraph and on Matt’s Substack because editors of big journals simply didn’t like the implication. If you’re interested, you can read it here.

That this didn’t end up in a peer-reviewed journal means that policy makers and pundits can claim that “the scientific consensus supports a zoonotic spillover event and not a lab leak conspiracy”. This is true, but the reason why it’s true isn’t what the public thinks. There are, of course, many such cases. The political preferences of journal editors are one of the main reasons for pervasive publication bias in fields like climate science, economics, and psychology, which skews what politicians and the public think is true.
Peer review
If initial submission is successful (perhaps the editor who flicked through your manuscript was in a particularly good mood when they did so), you then wait as your paper is peer reviewed. This means that the editor will pick experts in the relevant field(s) to send the paper to.
While some journals offer double-blind peer review, most of the time the reviewers can see the identity of the authors, which inevitably biases their reading of the paper—you can imagine reading a paper from a Nobel prize winner with slightly less cynicism than one from an unknown researcher from some unknown institution. This immediately lets credentialism seep into the peer review process, where research isn’t assessed purely on merit but on reputation of the researcher/institution.
Another flaw in the peer review system is the suggestion or barring of certain reviewers. Most journals ask scientists at initial submission to suggest or prohibit specific researchers from reviewing their paper. This is supposed to help prevent “professional conflicts of interest” from disrupting fair peer review, but in reality it creates a massive loophole that can be exploited to cheat the system. Scientists form clades of friendly peers who tacitly agree to leniently review each others’ papers, while always excluding critical scientists from reviewing their work. If a scientist has previously pointed out flaws in your research or historically disagrees with your interpretations of data, no need to worry—you can just exclude them when you write your cover letter, and they will never review your work.
I was given (genuinely) good advice during my PhD to always suggest older reviewers. Young scientists tend to want to prove themselves and will typically read your manuscript with a keener eye for detail and critique your work more thoroughly—which you don’t want of course!
Once the paper has reached its reviewers, they will be given a loose deadline by the editor to return their assessment of the paper. Importantly, this critical peer review step—the poster child of academic publishing—is simply done for free by other scientists, not by the journal itself. Of course, as I’ve already mentioned, scientists are primarily motivated by prestige and kudos, rather than money, but reviewers aren’t rewarded with that either. Some journals now publicise who the reviewers were post-publication, but it’s hardly as lauded as it could be. In fact, the reward for reviewing papers is so low that many principal investigators just ask their students or postdocs to do it for them (it’s called “mentorship” apparently).
Further, since the reviewers who might find the research particularly controversial—and who might therefore return their critical input most expeditiously—have been excluded by the authors, the remaining reviewers’ intrinsic interest in the paper is automatically dampened.
The result of very low motivation to review papers quickly is that the whole process is very drawn out. As a concrete example, I recently waited 99 days to get comments back (meaning a ~500 word response) from two reviewers for a paper from my PhD. I waited 99 days for about 2 hours of work! (NB: I’m not blaming the reviewers here at all—they have far better things to be doing).
To be fair, most big journals, like those in the Nature, Science, or Cell Press families, boast median turnarounds of around 60 days for the first round of peer review.
Wait… “first round”?! Yes, we’re still quite far from the finish line.
Revisions
Once the editor has collated and assessed all of the reviewers’ comments, they then make a decision. In very rare circumstances, reviewers unanimously recommend a paper be published as it is, or with minor textual changes. If even a single reviewer (there are at least two, and can be up to four) has an issue with anything, though, the editor will ask you to amend and resubmit the paper (or present strong arguments against the reviewer).
Peer review is thought, by the lay public, to be a stringent quality control mechanism for the validity of scientific research. If this were genuinely the case, journals would ask other researchers to replicate the experiments in the paper. This would be impossible without much more money and time though, and so experimental validity isn’t actually assessed by peer review, leading to many cases of invalid or fake data getting published in peer reviewed journals.
What the peer reviewers actually do is try to inflict their preferences on the science being done. They do this by insisting on discussion of certain topics, including citing their own work, or by requesting specific experiments be done that they personally would find interesting.
Now, of course more experiments will always make a paper better, so people defend the ability of peer review to “improve science”. But this line of reasoning obfuscates the intended role of peer review. The end result is that peer review makes papers larger, can make them semantically inconsistent (by requesting an experiment that doesn’t particularly make sense in the context of the rest of the paper), and makes the publishing process incredibly drawn out.
Months go by as the scientists work to do extra experiments. The reviewers have no idea of the capacity of the lab or the institute, so they could ask for an experiment that requires equipment that just isn’t available without collaboration (which means even more time and bureaucracy).
Eventually, the scientists submit a revised manuscript. They will have addressed every single point in the reviewers’ comments—either refuting them in a plea to the editor (risky), or addressing them as discussion points, or by adding more data from revision experiments. The editor receives this version and once more sends it out to the same reviewers. Perhaps one reviewer is at a conference, or on annual leave, or is busy submitting their own manuscript. More weeks flit by until the reviewers have time to take a look again.
Most of the time, they will be satisfied with what the scientists have done in the lab at their behest, and might only ask for some amendments to the text at this point. Sometimes, the experiments the reviewer suggested turn out to have been bad ones (oopsies!), and they might now propose new ones. Many more months can go by before all reviewers and the editor are happy to proceed.
Publication
Finally, after this little song and dance which does nothing to validate scientific integrity, the editor will accept the manuscript in principle. At this point, the journal starts to actually do something, and formats the article properly and goes over typos etc. Weeks later the authors are sent proofs of the final thing. They must approve these and then the article is published online.
It is at this final stage that the authors are also asked which license they wish for the paper to be published under. If they don’t have enough funding to pay for the huge article processing charges for open access, then they have to just face the fact that fewer colleagues will be able to read the paper.
You might have picked up that most of the work here is done by (1) the authors and (2) the reviewers. And yet the journals are the ones raking in the profit. I think Michael Eisen puts it best:
I want you to note just how little the journal actually does here.
They didn’t come up with the idea. They didn’t provide the grant. They didn’t do the research. They didn’t write the paper. They didn’t review it. All they did was provide the infrastructure for peer review, oversee the process, and prepare the paper for publication. This is a tangible, albeit minor, contribution, that pales in comparison to the labors of the scientists involved and the support from the funders and sponsors of the research.
And yet, for this modest at best role in producing the finished work, publishers are rewarded with ownership of – in the form of copyright – and complete control over the finished, published work, which they turn around and lease back to the same institutions and agencies that sponsored the research in the first place. Thus not only has the scientific community provided all the meaningful intellectual effort and labor to the endeavor, they’re also fully funding the process.
In conclusion, mainstream academic publishing as it currently stands is slow, corrupt, frustrating, costly, produces needlessly long and convoluted papers, biases published results through the (political) lens of the editor, and doesn’t even give access to the research to the public.
This obvious farce probably explains why repositories like BioRχiv have surged in popularity. To compare the processes, this is what publishing via BioRχiv looks like:
Write your article with whatever formatting you desire and convert it to a PDF
Upload said PDF and any accompanying figures or supplementary files
Wait less than 24 hours for some straightforward vetting
View and share the DOI with whomever you want
Of course, pre-print servers like BioRχiv don’t have the prestige of a Nature paper, and the lack of any kind of critique or peer review means people who cannot independently evaluate research don’t know what to think. This means pre-prints primarily act as a placeholder, and people still hold out for the “actual publications”. Still, these repositories are particularly valuable for speeding up the dissemination of science and removing politicisation. I can only imagine what kind of hellish peer review process the authors of this Yale study are going through.
It may seem like the entire scientific enterprise is fucked, but I think there is hope. In part 2 of “Fixing academic publishing”, I will discuss further the attempts of others to fix the problem, explain why they too have fallen short, before I give my own novel solution which I think would solve most, if not all, of the issues with academic publishing.



I read "The Trouble with Medical Journals" by Richard Smith shortly after it was published (2006) and find most of his commentary still relevant today!