Another Peer Review Week is upon us, and this year’s theme, “Trust in Peer Review,” comes at a particularly apt time. The COVID-19 pandemic and other recent global crises have magnified the role of peer review as a mechanism for ensuring research quality while, in many ways, putting trust in peer review to the test. Pressures on journal editorial teams and volunteer referees to review larger volumes of papers at record speeds have raised concerns both within and outside academia about the potential for higher levels of human error in an inherently imperfect process.
The pandemic aside, since peer review became part of scholarly publishing (a mere ~60 years ago in the 350+ year history of journals!), concerns about flawed review processes, overburdened reviewers, and reviewer biases have put peer review on shaky ground. Studies in the early 2010s revealing the ongoing replication crisis in many disciplines marked a particularly apprehensive time in peer review’s history. However, despite some discontent with the current state of peer review, surveys conducted in recent years, including Publons’ 2018 “Global State of Peer Review” report, have revealed that most researchers want to work to improve peer review rather than replace or bypass it. Across academia, stakeholders in scholarly journal publishing have responded by placing greater emphasis on vetting and improving peer review practices, as exhibited by last year’s Peer Review Week, themed “Quality in Peer Review.” As stakeholders work to fortify peer review, the question now is: what can be done to build up and, in many cases, rebuild trust in the process?
With the curtain pulled back on peer review’s bumps and bruises for all to see, arguably one of the best courses of action for addressing its limitations is to readily acknowledge them and work to make peer review practices more transparent. In this post, we round up five ways to increase peer review transparency that could help foster greater trust in the process.
One of the primary concerns about the efficacy of peer review expressed in recent years is that the predominant peer review model, in which scholars design studies, conduct them, write reports, and only then receive feedback, may be contributing to the reproducibility crisis by hindering the publication of null and negative results. During a keynote address at the 2019 International Society of Managing and Technical Editors (ISMTE) conference, Brian Nosek, Executive Director of the Center for Open Science, discussed how, in a “publish or perish” research culture dominated by the Journal Impact Factor and other citation-based incentive systems, journals often favor tidily packaged positive research outcomes over negative or inconclusive ones. As a result, scholars may be compelled, intentionally or not, to introduce research spin into their work.
To address concerns about the potential for research spin and biases against null and negative results, the Registered Reports publishing format has been gaining ground in recent years. In the Registered Reports format, peer review is split into two parts:
- An initial review of the study concept and design, used to make an in-principle acceptance or rejection decision
- A second review of the finished paper, which assesses solely the quality of the research, not the nature of the findings
Notably, the journal Royal Society Open Science recently initiated an expedited pre-registration process for coronavirus-related submissions to help scholars avoid following false leads. Journals interested in implementing the Registered Reports publishing format can find many helpful resources on the Center for Open Science website.
Once studies have been completed and peer review is underway, another trust concern many have expressed is a lack of transparency around the robustness of journal peer review, the rationale behind publication decisions, and the identities of those doing the reviewing. Journals can address high-level uncertainties about the nature of their peer review processes by providing detailed peer review policies on their publication websites. As for making the recommendations and/or identities of peer reviewers more transparent, some journals are now experimenting with more open peer review practices. Definitions and levels of open peer review can vary substantially, but the term generally refers to one or a combination of the following practices:
- Publishing review reports alongside journal articles to make reviewer recommendations open
- Making author and reviewer identities open to both parties and readers
- Soliciting public peer review comments in addition to or in lieu of invited reviews
Proponents of open peer review practices argue that they promote accountability and allow for reviewer recognition. Of course, there are also arguments on the other side of the spectrum in favor of blinded peer review to prevent skewed referee reports. For example, some argue that when reviewers know the identities of authors, there is the potential for them to make recommendations based on implicit biases. As the varied definitions of open peer review exhibit, there is no one-size-fits-all approach, so this is an area where journals can, and likely should, weigh the pros and cons of different models as they apply to their discipline and particular publication to determine the best course of action.
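For journals mapping out these decisions, it can help to think of “open peer review” not as a single switch but as a set of independent policy choices. The short Python sketch below models one way a journal might record its configuration; the field names are our own illustration, not part of any formal standard.

```python
from dataclasses import dataclass

@dataclass
class PeerReviewPolicy:
    """Illustrative model of a journal's openness choices.

    Field names are hypothetical; they loosely mirror the practices
    that definitions of open peer review tend to combine.
    """
    reports_published: bool         # review reports posted alongside articles
    reviewer_identities_open: bool  # reviewers named to authors and readers
    author_identities_open: bool    # authors named to reviewers
    public_comments_invited: bool   # community comments alongside invited reviews

# Example: signed, published reviews without public commenting.
policy = PeerReviewPolicy(
    reports_published=True,
    reviewer_identities_open=True,
    author_identities_open=True,
    public_comments_invited=False,
)
print(policy)
```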
Another fundamental aspect of increasing peer review transparency, and ultimately trust in the process, is developing more universal peer review standards and nomenclature (to help clear communication hurdles like the many competing definitions of “open peer review”). STM recently launched a “Working Group on Peer Review Taxonomy” to address the definitional component of peer review standards, and journal publishers should keep the working group’s first draft taxonomy, which is now available, on their radar. Another recent example of shared publishing definitions that can help increase research transparency and, consequently, trust in peer review is the CRediT (Contributor Roles Taxonomy) initiative, which introduces standardized naming conventions for the roles authors and other contributors play.
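To make the taxonomy idea concrete, here is a brief Python sketch that attaches CRediT roles to contributor records. The 14 role names are the standard CRediT terms (rendered here with plain hyphens); the helper function and example contributors are hypothetical.

```python
# A minimal sketch of tagging contributors with CRediT roles.
# The 14 role names below are the standard CRediT terms; the
# helper function and example contributors are hypothetical.

CREDIT_ROLES = {
    "Conceptualization", "Data curation", "Formal analysis",
    "Funding acquisition", "Investigation", "Methodology",
    "Project administration", "Resources", "Software",
    "Supervision", "Validation", "Visualization",
    "Writing - original draft", "Writing - review & editing",
}

def tag_contributor(name: str, roles: list[str]) -> dict:
    """Attach validated CRediT roles to a contributor record."""
    unknown = [role for role in roles if role not in CREDIT_ROLES]
    if unknown:
        raise ValueError(f"Not recognized CRediT roles: {unknown}")
    return {"name": name, "credit_roles": roles}

# Hypothetical two-author paper.
contributors = [
    tag_contributor("A. Author", ["Conceptualization", "Writing - original draft"]),
    tag_contributor("B. Author", ["Formal analysis", "Writing - review & editing"]),
]
```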
In addition to developing shared peer review taxonomies, scholarly publishing stakeholders have been exploring new ways of normalizing and expressing journal publishing standards, including via the TOP Factor. Launched by the Center for Open Science in February 2020, TOP Factor is a new journal assessment system based on the Transparency and Openness Promotion (TOP) Guidelines, which consist of eight publishing standards designed to improve research transparency and reproducibility. TOP Factor has the potential to foster a shared framework for implementing and demonstrating adherence to journal publishing best practices, which would increase peer review transparency and make it easier to compare publisher processes and norms. The TOP Guidelines also promote FAIR data principles and open data, which could help foster a more self-correcting research environment.
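As a rough illustration of how a rubric like TOP Factor turns policy adherence into a comparable number, the sketch below assigns each of the eight TOP standards an implementation level from 0 (no policy) to 3 (adherence required and verified) and sums them. The standard labels and scoring here are simplified for illustration; the actual rubric is defined in the TOP Guidelines.

```python
# A simplified, illustrative TOP-style scoring function. Each of
# the eight TOP standards is scored at an implementation level from
# 0 (policy absent) to 3 (adherence required and verified); summing
# the levels yields a single comparable journal score. Labels and
# details are approximations; see the TOP Guidelines for specifics.

TOP_STANDARDS = [
    "Data citation",
    "Data transparency",
    "Analytic code transparency",
    "Research materials transparency",
    "Design and analysis reporting",
    "Study preregistration",
    "Analysis plan preregistration",
    "Replication",
]

def top_style_score(levels: dict[str, int]) -> int:
    """Sum 0-3 implementation levels across the eight standards."""
    total = 0
    for standard in TOP_STANDARDS:
        level = levels.get(standard, 0)
        if not 0 <= level <= 3:
            raise ValueError(f"{standard}: level must be 0-3, got {level}")
        total += level
    return total

# Hypothetical journal: requires open data, encourages the rest.
example = {standard: 1 for standard in TOP_STANDARDS}
example["Data transparency"] = 3
print(top_style_score(example))  # 10 out of a maximum of 24
```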
Finally, a mounting trust concern to be addressed is the increasing peer review burden faced by many scholars. As the volume of articles published across disciplines continues to grow, many worry that reviewers will struggle to keep up, potentially leading to more research errors slipping through the cracks. Adding another plot twist: in reality, much of the review work placed on scholars is redundant, because they are being asked to vet papers that have already undergone peer review elsewhere.
One possible solution to help alleviate the peer review pressures scholars face is the Manuscript Exchange Common Approach (MECA), an initiative launched by the National Information Standards Organization (NISO) in May 2018. MECA aims to develop a framework for transferring manuscripts and review reports between different peer review systems. According to the MECA website, sharing review reports between journals could help save an estimated “15 million hours of researcher time [that] is wasted each year repeating reviews.” In the same vein, the C19 Rapid Review Initiative, a large-scale collaboration among 20 publishers to improve the efficiency of coronavirus-related research processing, is piloting a version of review report sharing by requiring reviewers to consent to having their identities and review reports shared among participating publishers and journals.
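To give a flavor of what manuscript and review transfer involves in practice, here is a minimal Python sketch that bundles a manuscript, its review reports, and a simple manifest into one package. The JSON manifest is a simplified stand-in of our own; actual MECA packages use standardized XML manifests and transfer metadata.

```python
# A minimal sketch of packaging a manuscript and its review reports
# for transfer between journals, in the spirit of MECA. The manifest
# layout here is a simplified stand-in; real MECA packages use a
# standardized XML manifest and transfer metadata.
import json
import zipfile

def build_transfer_package(archive_path: str,
                           manuscript_path: str,
                           review_paths: list[str]) -> None:
    """Bundle a manuscript plus review reports with a simple manifest."""
    manifest = {
        "manuscript": manuscript_path,
        "review_reports": review_paths,
    }
    with zipfile.ZipFile(archive_path, "w") as archive:
        archive.write(manuscript_path)
        for report in review_paths:
            archive.write(report)
        archive.writestr("manifest.json", json.dumps(manifest, indent=2))

# Hypothetical usage; the file names are illustrative.
# build_transfer_package("transfer.zip", "manuscript.pdf",
#                        ["review1.txt", "review2.txt"])
```

The practical appeal of this kind of packaging is that the receiving journal gets the prior reviews alongside the manuscript in one predictable structure, rather than soliciting an entirely new round of reports.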
The fact that peer review is imperfect, like any human endeavor, is no secret or surprise. Working to make all aspects of the peer review process more transparent could help safeguard against inevitable human errors. At the same time, it could expand the potential for more widespread assessment of peer review approaches and outcomes, something that scholars argue is needed to improve and increase trust in peer review.