Let’s Fix Peer Review

Scientists are the smartest idiots I know. 


If one explains the current system of peer review to a non-scientist, the response is typically, “that’s insane, I thought you guys were supposed to be smart”.

To recap:

When we apply for a grant or want to publish our science, the work is secretly reviewed by our peers, some of whom are competing with us for precious funding, or a bizarre version of fame. Under the veil of anonymity, a reviewer can write anything, including false or incorrect statements, to justify a decision. The decision is most often "do not fund" or "reject", even when the review is based on inaccuracies, lack of expertise, or even blatant slander. There are no rules and no repercussions. For the most part, the review process has few integrity guidelines, little oversight, and no rules of ethics. It can lead to internet trolling at the level of high art. In funding decisions, these mistakes can be missed by inattentive panels, and they were definitely missed in the CIHR reform scheme before panels were re-introduced. We still have a problem of reviewers self-identifying expertise they simply do not have.

Scientists have to follow strict rules of ethics when submitting data, covering conflicts of interest, research ethics, and more. Yet such rules are often not formally stated for the review process, and the ones that exist vary widely between journals.

This system is historic, dating back to an era when biomedical research was a fraction of its current size and journal Editors were typically active scientists. The community was small. But as science rapidly expanded in the 90s, so did scientific publishing, and soon editors became professional editors, some of whom had never run a lab or research program. Then came the digital revolution: journals were no longer read on paper, and the pipeline to publish grew exponentially.

What drove the massive expansion of journals? Money. Big money. And like many historic industries, it's thriving, mostly on the back of unpaid labor.

CELL was sold to Elsevier in 1999. While the sale price was never formally revealed, it was rumored to exceed US$100M. No person who reviewed for this journal received a thin dime. The analogy would be hiring workers to build a road, paying them nothing, insisting the road be paved in under 14 days, then charging them to use the road. Why? For the prestige of being associated with a road (which is fundamentally no different from any other road).

What CELL and Nature started mushroomed wildly over the next 20 years, with journals starting up weekly, now numbering in the thousands, on a simple business model: hire Editors, accept submissions, get three reviews, and charge thousands of dollars per manuscript to publish. It's a money-making machine, built on free labor. Why not? Scientists are idiots; they work for free, and they do hard work purely out of ideology.

What happens under the veil of anonymity? Papers are trivially reviewed, either quickly dismissed or accepted without much scientific input, which has enabled a lot of fraud, or just bad science, that when revealed often leads to the question: how did the reviewers not see this? It gets worse: "knowledge leaders" in some fields can manipulate the process, holding up or blocking manuscripts while their post-docs race to reproduce the data as their own, as outlined in a public letter to the Editors of Nature in 2010. Journals return comments to authors, but many also take secret comments from reviewers directly to the Editors, without author knowledge. Why? Horror stories abound of revision after revision, then a final rejection after a year or more because the Editor lost interest. Meanwhile, careers and lives are stalled. This is very problematic when a field of research becomes dogmatic and truly innovative theories or approaches are presented: accepting such work means dismantling dogma, which can mean invalidating the entire publication records of "knowledge leaders".

These publications can make careers, and the lack of them can ruin careers and win or lose funding. Post-docs get hired by institutions because they look like they can walk on water based on their CVs, only to drown within a few steps as independent investigators.

Often overheard at symposia from senior scientists: "we had a problem with reviewer #2, so I called the Editor and sorted it out". Called? How? No journal lists phone numbers for its Editors; what magic Rolodex does this involve?

We have a system in place that is used simply because it is historic. It's not working, it's not fair, it benefits fraud, and it's bad for science. This failure needs to be addressed with ethical guidelines and transparency, because the process has been corrupted, and failure is now so common that there are entire websites dedicated to it. Suggestions:

  1. Editors need to be active scientists. The Journal of Biological Chemistry is an excellent example.
  2. Reviewers and academic editors need to be paid. The Public Library of Science (PLoS) sounds like an altruistic organization for disseminating scientific knowledge, but its executive compensations can reach $330,000-$540,000 a year. Clearly, PLoS feels expertise and talent should be rewarded, which is fair, but apparently not when it comes to the reviewers who put in hours of work on manuscripts. The same reviewers then have to pay $1500+ to publish, and the journal decided to simply stop copy editing manuscripts, leading to sloppy publications. The line between legitimate journals and "predatory" journals is blurring. This is not unique to PLoS. Scientific publishing is a massive for-profit business. The NY Times revealed a "shocking" figure of $500 in page charges at "predatory" journals. Yet many established journals charge $500-600 per color figure alone. I cannot think of another profession that requires so many years of expertise under such draconian standards yet places so little value on our time. Try getting three free hours from a lawyer, accountant, or consultant. Good luck with that, or look out for what you get.
  3. Reviewers need to be scored, by both Editors and submitting authors. We were recently reviewed at a leading cell biology journal, and while the paper was not accepted for publication, we received deeply detailed, outstanding reviews from all three reviewers. Their intent was obvious: address these criticisms and this will be better work. We were also reviewed at two leading magazines recently, and what we got back were late reviews of 5-7 lines or less, with terms like "unconvincing" or simply incorrect statements, and no chance to respond (18 years in as a PI, I have still not received the magic Editorial Rolodex). Reviews without scientific justification. These scores should be tied to ORCID. Editors should be able to flag inappropriate reviewer behavior as scientific misconduct to home institutions or funding agencies. Low-scoring reviewers should be asked to justify their scores to their home institutions. Scoring then justifies paying good reviewers and insisting on sincere efforts instead of trivial reviews (a minimal sketch of such a scoring record follows this list).
  4. No more gatekeeping. Famous journals do not review most of their submissions, with most rejections coming from the desks of non-expert, non-scientist Editors looking for name and institutional recognition and trendy buzzwords. The issue is that "high impact" journals simply receive too many submissions. Yet, regardless of the science, they will not decline submissions from high-profile institutions, for fear the institution's senior scientists will stop submitting. Compounding this problem, relatively new or lesser-known journals are doing the same thing to boost "impact" based on trendy subjects, which just demonstrates their priorities: not science, but gaming the impact factor metrics. This would require a new system, which brings us to point 5….
  5. No more direct submissions. Manuscripts should be openly submitted to free-access sites like BioRxiv, going live within hours, and journal Editors could then bid to authors to send the work to review in a clearing-house model. This allows Editors who lack direct expertise to judge impact based on comments from the community. As it stands, the current process is stochastic, and the decision to review is often based on a single opinion. It can now take months before a paper is even reviewed, as journals can sit on the decision to send it to review for a month or more (remember, they don't have to follow any rules). It can take half a day at a time just to submit a manuscript. The cycles of submission and editorial desk rejection can suck half a year out of the publication process; this does nothing for science.
  6. One manuscript and reference format. One journal format. Pick one, any one. The current need for software to handle thousands of reference styles for thousands of journals is asinine. It's like trying to do science in 1000 different standards of measurement. We picked the metric system and moved on.
  7. Manuscript and funding agency reviews should be public, as this work is publicly funded. This lets readers know exactly how well a manuscript or grant was reviewed, whether a journal's press hype matches actual scientific opinion, and whether any obvious bias occurred in the review process. It would also help media coverage of manuscripts, since journalists rely almost entirely on PR hype.
  8. All reviews should be addressable by authors before a decision. This is particularly a problem on grant panels that lack expertise: they can rank and score based on reviewer errors, and these errors cannot be addressed until the next competition. The same problem applies to rejection after first submission at journals. There should be a brief window to respond to reviews before a decision is made. The current system relies on pure chance that our work is reviewed properly. We might as well have a lottery, especially in Canada, where biomedical research grants can be reviewed, scored, and ranked by non-scientists (seriously).
  9. Reviewers should discuss and unmask to each other prior to a decision, after reading the responses. Some journals already unmask reviewers to each other and allow discussion (EMBO J., Current Biology, eLife…). Nothing is more discouraging than spending hours on a review to improve a manuscript, only to have another reviewer dismiss it with obviously minimal effort and comments like "unconvincing", plus secret comments to the editor that I cannot see. I see no point in unblinding reviewers to authors; that would just discourage participation for fear of vindictive authors.
  10. Define Misconduct in the Scientific Review Process. There need to be repercussions for unethical activity.
  11. Have a higher bar for authorship. Many clinicians have networks that put their names on hundreds of manuscripts with zero effort on the actual work, and it's very likely they never read them. This is simply unethical, and unfair to authors who put real effort into manuscripts. It becomes a real problem when funding agencies use reviewers who count papers, concluding that a good scientist publishes a serious paper every two to three weeks of their lives.
  12. Keep individual manuscript metrics, ban journal impact metrics. Journal impact scores can be gamed, are gamed, and make no sense. It's like saying a given Honda driver is more intelligent because, on average, Honda drivers have a high IQ, and thus driving a Honda makes you smarter. Using metrics like impact factors or the H-index to judge careers is lazy, incompetent administration. You drive a Honda? Hired! We denied tenure? Not my fault, he/she drove a Honda! (The second sketch after this list puts numbers on this fallacy.)
  13. Retracted manuscripts due to figure fraud should reveal who reviewed the manuscript. Maybe these reviewers will pay attention next time. Or maybe, if we paid them, this would happen a lot less. It's very likely that if we could see the reviews of these manuscripts, we would find they were trivially reviewed.
  14. Canada needs an Office of Research Integrity. For a variety of reasons, fraudsters can flourish in the Canadian system, as funding agencies defer fraud investigations to home institutions, which have a perverse incentive to bury any conclusions. The US has the ORI, independent of any institution, and in some countries scientific fraud is legally regarded as fraud of the public trust and can result in civil action or jail time. If Canadian science wants serious public support, as the Naylor report recommends, it should come with equally serious scientific integrity.
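
To make the scoring idea in point 3 concrete, here is a minimal sketch in Python of what a reviewer-scoring record tied to ORCID might look like. Everything here is a hypothetical illustration, not an existing system: the field names, the 1-5 scale, and the flagging threshold are all assumptions. The point is only that this is a small, tractable data problem.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ReviewScore:
    """One scored review, tied to the reviewer's ORCID (hypothetical schema)."""
    reviewer_orcid: str   # e.g. "0000-0002-1825-0097"
    manuscript_id: str
    editor_score: int     # 1-5, assigned by the handling Editor
    author_score: int     # 1-5, assigned by the submitting authors
    on_time: bool         # returned by the agreed deadline?

@dataclass
class ReviewerRecord:
    """Aggregate scoring history for one reviewer."""
    orcid: str
    reviews: list[ReviewScore] = field(default_factory=list)

    def mean_score(self) -> float:
        # Weight Editor and author scores equally across all reviews.
        return mean((r.editor_score + r.author_score) / 2 for r in self.reviews)

    def needs_justification(self, threshold: float = 2.0) -> bool:
        # Per the proposal: persistently low-scoring reviewers are asked to
        # justify their record to their home institution. The threshold and
        # the three-review minimum are illustrative choices, not policy.
        return len(self.reviews) >= 3 and self.mean_score() < threshold
```

With records like these, "pay the good reviewers" stops being a judgment call and becomes a simple query over a reviewer's history.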
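
And to put numbers on the Honda analogy in point 12: citation counts within a journal are heavily skewed, so the journal mean (the impact factor) says almost nothing about any single paper. The citation counts below are invented for illustration, though the skewed shape is typical of real journals.

```python
from statistics import mean, median

# Invented citation counts for 20 papers in a hypothetical "high-impact"
# journal: two blockbusters, many barely-cited papers.
citations = [210, 95, 40, 12, 8, 6, 5, 4, 3, 3,
             2, 2, 2, 1, 1, 1, 0, 0, 0, 0]

impact_factor = mean(citations)    # journal-level metric: 19.75
typical_paper = median(citations)  # article-level reality: 2.5

print(f"journal 'impact factor' (mean): {impact_factor:.2f}")
print(f"median paper in the same journal: {typical_paper}")
# The mean is driven by two outliers. Judging any single paper, or its
# authors, by the journal average is the Honda-driver fallacy in action.
```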

3 thoughts on "Let's Fix Peer Review"

  1. Good Professor,

    Well done, and you've outlined some sensible ways forward. However, I see some great impediments to your proposal that are also ingrained within the scientific community: namely, the prestige the community still attaches to the high-profile, high-impact (high-profit!) journals. This bias is particularly acute in evaluations for tenure, promotion to full professor, and, indeed, consideration for an academic position in the first place. The reviews one receives from Canadian granting agencies mention, without embarrassment, the impact of the journals in which one has published rather than the impact or significance of the ideas.

    Indeed, within the short time frame in which we assess the "output" of a researcher, the relative merits of any lab's work seem vanishingly small. Is this not contrary to the very argument we make when we ask for more funding of basic research, the "payoff" for which lies far in the future? (Do I hear a critique of neoliberal (business) processes infiltrating The Academy looming?)

    Your proposals, therefore, seem particularly timely. The larger community is coming to terms with the apparent deficiencies and fragilities in our current system. An effort should be made to promote a truly open and accountable system of review and publishing combined with open data. It would entail the responsibility of the entire community to consider the merits of our peers who are seeking advancement. Some thought is needed as to how to acknowledge such work.

    Happy to engage further.

  2. Paul clearly drives a Honda.

    The problem with changing the system is that some of the most successful under the current system of evaluation would not benefit, and might even face a downturn in support, under a more transparent system of review.
    We tried the business-driven publication model, and it's not working. We tried anonymity in peer review, and it worked out about as well as anonymity on the internet.

    I think it may be time for publicly funded agencies to insist on open publication in one digital format, with open pre-print servers as the first step, followed by peer review. If we all publish in one place, impact is a dead concept, unless we define impact with individual publication metrics. In a non-profit model, reviewers could be paid, and clear rules of ethics and integrity could be established. The money driving publishing comes entirely from the public trust.

    Sadly, I seriously doubt anything will change in the face of major research institutions and their long-standing success at gaming the current system. But inaction is dangerous: The Economist pointed out as far back as 2013 that government-funded research has a very low rate of actual translation to therapies, and private Pharma is retreating from hard problems toward more profitable lifestyle drugs.

    https://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong

    Keep rubbing our grey beards in our entitled ivory towers while distancing ourselves from society and the impact of good science, and eventually no one will care.
