5 ways to increase peer review transparency to foster greater trust in the process

Another Peer Review Week is upon us, and this year’s theme, “Trust in Peer Review,” comes at a particularly apt time. The COVID-19 pandemic and other recent global crises have magnified the role of peer review as a mechanism to ensure research quality while, in many ways, putting trust in peer review to the test. Pressures on journal editorial teams and volunteer referees to peer review larger volumes of papers at record speeds have raised concerns in and outside of academia about the potential for higher levels of human error in an inherently imperfect process.

The pandemic aside, since peer review became part of scholarly publishing (a mere ~60 years ago in the 350+ year history of journals!), concerns about flawed peer review processes, overburdened reviewers, and reviewer biases have put peer review on shaky ground. Studies in the early 2010s revealing the ongoing replication crisis in many disciplines marked a particularly apprehensive time in peer review’s history. However, despite some discontent with the current state of peer review, surveys conducted in recent years, including Publons’ 2018 “Global State of Peer Review” report, have revealed that most researchers want to work to improve peer review rather than replace or bypass it. Across academia, stakeholders in scholarly journal publishing have responded by placing greater emphasis on vetting and improving peer review practices, as exhibited by last year’s Peer Review Week, themed “Quality in Peer Review.” As stakeholders work to fortify peer review, the question now is — what can be done to build up and, in many cases, rebuild trust in the process?

With the curtain pulled back on peer review’s bumps and bruises for all to see, arguably, one of the best courses of action to address its limitations is to readily acknowledge them and work to make peer review practices more transparent. In this post, we round up five ways to increase peer review transparency that could help foster greater trust in the process.

Putting research questions and methods before findings

One of the primary concerns about the efficacy of peer review expressed in recent years is that the predominant peer review model — where scholars design studies, conduct them, write reports, and then receive feedback — may be contributing to the reproducibility crisis by hindering the publication of null and negative results. During a keynote address for the 2019 International Society of Managing and Technical Editors Conference (ISMTE), Brian Nosek, Executive Director for the Center for Open Science, discussed how in the “publish or perish” research culture dominated by the Journal Impact Factor and other citation-based incentive systems, tidily packaged positive research outcomes are often favored by journals over negative or inconclusive ones. As a result, scholars may be compelled to, intentionally or not, introduce research spin into their work.

To address concerns about the potential for research spin and biases against null and negative results, the Registered Reports publishing format has been gaining ground in recent years. In the Registered Reports format, peer review is split into two parts:

  1. An initial peer review of the study concept and design, used to make an acceptance or rejection determination
  2. Peer review of the finished paper — this review stage is solely to assess the quality of the research contents, not the nature of the findings

Notably, the journal Royal Society Open Science recently initiated an expedited pre-registration process for coronavirus-related submissions to help scholars avoid following false leads. Journals interested in implementing the Registered Reports publishing format can find many helpful resources on the Center for Open Science website.

Employing more open peer review practices

Once studies have been completed and peer review is underway, another trust concern many have expressed is a lack of transparency around the robustness of journal peer review, the rationale behind publication decisions, and the identities of those doing the reviewing. Journals can address high-level uncertainties about the nature of their peer review processes by providing detailed peer review policies on their publication websites. As for making the recommendations and/or identities of peer reviewers more transparent, some journals are now experimenting with more open peer review practices. Definitions and levels of open peer review can vary substantially, but the term generally refers to one or a combination of the following practices:

  • Publishing review reports alongside journal articles to make reviewer recommendations open
  • Making author and reviewer identities open to both parties and readers
  • Soliciting public peer review comments in addition to or in lieu of invited reviews

Proponents of open peer review practices argue that they promote accountability and allow for reviewer recognition. Of course, there are also arguments on the other side in favor of blinded peer review to prevent skewed referee reports. For example, some argue that when reviewers know the identities of authors, there is the potential for them to make recommendations based on implicit biases. As exhibited by the various definitions of open peer review, there is no one-size-fits-all approach, so this is an area where journals can, and likely should, weigh the pros and cons of different models as they apply to their discipline and particular publication to determine the best course of action.

Developing shared peer review standards and taxonomies

Another fundamental aspect of increasing peer review transparency, and ultimately trust in the process, is developing more universal peer review standards and nomenclature (to help with hurdles in communicating peer review policies, like the many definitions of “open peer review”). STM recently launched a “Working Group on Peer Review Taxonomy” to address the definitional component of peer review standards, an effort journal publishers should keep on their radar; a first draft of the taxonomy has been released. Another recent example of shared publishing definitions that can help increase research transparency and, consequently, trust in peer review is the CRediT (Contributor Roles Taxonomy) initiative, which introduces naming conventions for different author roles.

In addition to working to develop shared peer review taxonomies, scholarly publishing stakeholders have been exploring new methods of normalizing and expressing journal publishing standards, including via the TOP Factor. Launched by the Center for Open Science in February 2020, TOP Factor is a new journal assessment system based on the Transparency and Openness Promotion (TOP) Guidelines, which consists of eight publishing standards to improve research transparency and reproducibility. TOP Factor has the potential to foster a uniform shared framework for implementing and demonstrating adherence to journal publishing best practices, which would increase peer review transparency and make it easier to compare publisher processes and norms. The TOP Guidelines also promote FAIR data principles and open data, which could help to foster a more self-correcting research environment.

Facilitating the sharing of review reports across journals

Finally, a mounting trust concern to be addressed is the increasing peer-review burden faced by many scholars. As the rate of articles published across disciplines continues to increase, many worry that reviewers will struggle to keep up, potentially leading to more research errors slipping through the cracks. Adding another plot twist — in reality, a lot of the review work placed on scholars is redundant because they are being asked to vet papers that have already undergone peer review elsewhere.

One possible solution to help alleviate the peer review pressures scholars face is the Manuscript Exchange Common Approach (MECA), an initiative launched by the National Information Standards Organization (NISO) in May 2018. MECA aims to develop a framework for transferring manuscripts and review reports between different peer review systems. According to the MECA website, sharing review reports between journals could help save an estimated “15 million hours of researcher time [that] is wasted each year repeating reviews.” In the same vein, the C19 Rapid Review Initiative, a large-scale collaboration among 20 publishers to improve the efficiency of coronavirus-related research processing, is piloting a version of review report sharing by requiring reviewers to consent to having their identities and review reports shared among participating publishers and journals.

Putting it all together

The fact that peer review is imperfect, like any human endeavor, is no secret or surprise. Working to make all aspects of the peer review process more transparent could help safeguard against inevitable human errors. At the same time, it could expand the potential for more widespread assessment of peer review approaches and outcomes, something that scholars argue is needed to improve and increase trust in peer review.

Source: https://blog.scholasticahq.com/


JRTDD has been indexed into DOAJ

Dear readers,

It is my pleasure to announce that the Journal for ReAttach Therapy and Developmental Diversities has been indexed in DOAJ. This is a great achievement for the editorial office, which has worked very hard over the last months.

What is DOAJ (Directory of Open Access Journals)?

DOAJ is a community-curated online directory that indexes and provides access to high quality, open access, peer-reviewed journals. DOAJ is independent. All funding is via donations, 18% of which comes from sponsors and 82% from members and publisher members. All DOAJ services are free of charge including being indexed in DOAJ. All data is freely available.

DOAJ operates an education and outreach program across the globe, focusing on improving the quality of applications submitted.

JRTDD Editor-in-chief


Guidelines for Book Review

The Journal for ReAttach Therapy and Developmental Diversities (JRTDD) will publish book reviews on major books across fields and sub-fields covered by the journal.

  • If you are interested in reviewing a book for the Journal, please send an expression of interest to the Editor-in-chief on journaljrtdd@gmail.com.
  • Authors and publishers should also contact the Editor if they would like their books to be reviewed in the journal.

The JRTDD seeks reviews that assess a book’s strengths and weaknesses and locate it within the current field of scholarship. A review should not simply be a listing of contents, though its overall organization and emphasis are up to the individual reviewer. Reviewers should avoid lists of minor imperfections (e.g. misplaced commas) but should not hesitate to draw attention to serious editorial problems and errors of fact or interpretation. It is also helpful if reviewers indicate for which audiences and libraries the book seems appropriate. The Book Review Editors reserve the right to edit for content and length. In summary, book reviews should be timely and objective and should consider the following:

  • The intended audience for the book and who would find it useful
  • The main ideas and major objectives of the book and how effectively these are accomplished
  • The soundness of methods and information sources used
  • The context or impetus for the book – political controversy, review research or policy, etc
  • Constructive comments about the strength and weaknesses of the book

Book reviews for the Journal for ReAttach Therapy and Developmental Diversities should be double-spaced, using a standard-sized font (Times New Roman). All material, including long quotes, should be double-spaced. Please use minimal style formatting in your document. Reviews of single books should be between 750 and 1,000 words.

The review should begin with a hanging-indent paragraph for each book in the review that includes its title, subtitle, author(s) or editor(s), and publishing information in the format shown below.

All subsequent paragraphs should be indented. There are no footnotes in Journal for ReAttach Therapy and Developmental Diversities book reviews. References to texts not under review should be parenthetical only. Also, please type your name and institution, exactly as you wish them to be published, at the end of the review. For example: Frederik Johnson, University of Amsterdam, The Netherlands.

The following information should be given about the book being reviewed at the start of each review:

  • Author/Editor Name, Book Title, Publisher, Year of Publication, ISBN: 000-0-00-000000-0, Number of Pages, Price

What is Open Peer Review?

Peer review is a way by which manuscripts can be assessed for their quality. Reviewers scrutinize the draft of a journal article, grant application, or other work and provide feedback to the author(s) for improving the text. Reviewers don’t just read the text, but also evaluate whether the research presented is sound, whether the methods used in the research are in keeping with basic scientific protocols, and whether the analysis of the results is valid. In addition, reviewers need to determine whether the subject matter (e.g., a research study) is in keeping with the journal’s particular field of study and is novel enough to warrant publication.

Even with the system in place, there are conflicting views about peer review and its merits. Most researchers believe that the current peer review system is lacking; however, they also agree that peer review is valuable in helping improve their papers and get them published. Still, a 2008 study revealed that approximately one-third of those asked thought the system could be improved.

Reviewer Anonymity

As mentioned, peer reviewers also assess grant applications, and this process has its flaws. In an article in Times Higher Education, the author describes the grant review process as one that stifles innovative research and pares each application down to a numerical score. Funding sources presume that the reviewers are unbiased and knowledgeable, and funding agencies often base their decisions on these scores without ever having read the actual research proposal. Having reviewers reveal their identities would remove any doubt about reviewer qualifications and could hold reviewers more accountable for these scoring practices.

In an article published on F1000 Research, an open-research publishing platform, author Tony Ross-Hellauer describes some of the criticisms of the anonymous peer review as follows:

  • Unreliable and inconsistent: Reviewers rarely agree with one another. Decisions to reject or accept a paper are not consistent. Papers have been known to be published but then rejected when resubmitted to the same journal later.
  • Publication delays and costs: At times, the traditional peer review process can delay publication of an article for a year or more. When “time is money” and research opportunities must be taken advantage of, this delay can be a huge cost to the researcher.
  • No accountability: Because of anonymity, reviewers can be biased, have conflicts of interest, or purposely give a bad review (or even a stellar review) because of some personal agenda.
  • Biases: Although they should remain impartial, reviewers have biases based on sex, nationality, language or other characteristics of the author(s). They can also be biased against the study subject or new methods of research.
  • No incentives: In most countries, reviewers volunteer their time. Some feel that this is part of their job as a scholar; however, others might feel unappreciated for their time and talent. This might have an impact on the reviewer’s incentive to perform.
  • Wasted information: Discussions between editors and reviewers or between reviewers and authors are often valuable information for younger researchers. It can help provide them with guidelines for the publishing process. Unfortunately, this information is never passed on.


Because of this widespread dissatisfaction with the peer review process, a change to “open peer review” or “OPR” has been suggested. The premise is that an open peer review process would avoid many of the issues listed above.

What Is OPR?

OPR was first considered about 30 years ago but gained popularity in the 1990s. Originally defined only as revealing a reviewer’s identity, it has since expanded to include other innovations. Although suggested as a means to streamline the process and ensure honest reviews, the actual definition of OPR has eluded those in the research and publishing fields. A plethora of different meanings prompted a study on the accepted definitions of the term.

In an article published on F1000 Research, author Tony Ross-Hellauer delves into the several different definitions of OPR and created a “corpus of 122 definitions” for the study. Ross-Hellauer reminds us that there is yet no standard definition of OPR. On the contrary, many definitions overlap and even contradict each other as follows:

  • Identities of both author and reviewer are disclosed,
  • Reviewer reports are published alongside articles,
  • Both of these conditions,
  • Not just invited experts are able to comment, and/or
  • Combinations of these and other new methods.


These definitions are open-ended, and those discussing OPR use one, some, or all of them in combination.

The Study on OPR

Ross-Hellauer reviewed the literature (e.g., Web of Science, PubMed, Google Scholar, and BioMed) for articles that mentioned “open review” or OPR and found 122 definitions of the term! The author then reviewed and classified all 122 definitions according to a set of traits that were new to the traditional peer review process. He concluded by identifying seven traits of OPR and offered them as a basis for its definition:

  • Open identities: Authors’ and reviewers’ identities revealed.
  • Open reports: Reviews published with the article.
  • Open participation: Readers able to contribute to review process.
  • Open interaction: Reciprocal discussions between parties.
  • Open pre-review manuscripts: Manuscripts immediately available before formal peer review.
  • Open final-version commenting: Review or commenting on final “version of record” publications.
  • Open platforms: Review facilitated by an entity other than the venue of publication.


It appears from various studies that OPR is a valuable revision of the traditional, anonymous peer review process. Although most agree that peer review has always been valuable, it is not without its faults. There is hope that the OPR system will address many of these criticisms.

Resource: https://www.enago.com/