Why having your journals indexed in Google Scholar matters more than ever and steps to get started

If you ask any researcher which online outlets they use to find relevant journal articles, there’s a good chance that Google Scholar will be at the top of their list. The 2018 “How Readers Discover Content in Scholarly Publications” report found that researchers rated academic search engines as “the most important discovery resource when searching for journal articles,” and Google Scholar is among the most widely used free academic search engines available. The 2015 “101 Innovations in Scholarly Communication” survey similarly found that 92% of the academics surveyed use Google Scholar.

With so many researchers using Google Scholar, it’s a search engine that all journal publishers should prioritize. Google Scholar stands apart as one of the most accessible and sophisticated academic search engines available. Inclusion in Google Scholar can help expand the accessibility, reach, and, consequently, the impact of the articles you publish.

Despite the seemingly magical ability of Google to answer any search query with endless results, it’s important for publishers to know that the search engine can only index content its crawlers are able to find (more on crawlers below!). Google Scholar also has specific inclusion criteria. If you want all of your journal articles to be added to Google Scholar, you must take steps to ensure that they can be found by the search engine and that Google Scholar recognizes your journal website as a legitimate source.

In this blog post, we overview how Google Scholar works, the benefits of Google Scholar indexing, and what you need to know to have your journal articles added to Google Scholar. Let’s get started!

What is Google Scholar exactly and how does it work?

Since you’re reading this blog post, you likely know about Google Scholar as an academic search tool. But you may not be entirely sure of how Google Scholar processes content or how it compares to Google’s general search engine. Before we get into the specific benefits of Google Scholar and its inclusion requirements, let’s first take a look at what Google Scholar is exactly and how it works.

Like Google, Google Scholar is a crawler-based search engine. Crawler-based search engines are able to index machine-readable metadata or full-text files automatically using “web crawlers,” also known as “spiders” or “bots,” which are automated internet programs that systematically “crawl” websites to identify and ingest new content.

Google Scholar has access to all of the crawlable scholarly content published on the web: it can index entire publisher and journal websites, and it can follow the citations in articles it has already indexed to find other related content. Google Scholar includes content across academic disciplines, from all countries, and in all languages. Recent research, including Michael Gusenbauer’s article “Google Scholar to overshadow them all? Comparing the sizes of 12 academic search engines and bibliographic databases,” has found that Google Scholar is the world’s largest academic search engine, containing over 380 million records.

A common misconception about Google Scholar is that it indexes all of the content it has access to regardless of the content type or quality. This is not the case. Rather, as explained in “Academic Search Engine Optimization (ASEO): Optimizing Scholarly Literature for Google Scholar & Co.,” Google Scholar is an “invitation based search engine.” This means that “only articles from trusted sources and articles that are ‘invited’ (cited) by articles already indexed are included in the database.” On its website Google Scholar states, “we work with publishers of scholarly information to index peer-reviewed papers, theses, preprints, abstracts, and technical reports from all disciplines of research and make them searchable on Google and Google Scholar.”

In order for your journals to be considered for inclusion in Google Scholar, the content on your website must first meet two basic criteria:

  1. Consist primarily of journal articles (e.g. original research articles, technical reports)
  2. Make freely available either the full text or the complete author-written abstract for all articles (without requiring human or search engine robot readers to log into your site, install specific software, accept any disclaimers, etc.)

From there your journal website and articles will have to meet certain technical specifications, which we outline below. Before we get into that, let’s first take a look at some of the specific benefits Google Scholar offers journals and how to tell if your articles are being included in the search engine.

Why should I get my journals indexed in Google Scholar?

We’ve talked about the broad research benefits of Google Scholar, but you may be wondering — what are the specific benefits of Google Scholar indexing for the journals I publish? Google Scholar indexing can greatly expand the reach of your journal articles and improve the chances of your articles being read, shared, and cited online. A primary benefit of Google Scholar is that, unlike other databases, its search functionality focuses on individual articles, not entire journals. So having your articles indexed in Google Scholar can help more scholars discover the journals you publish when those articles show up in keyword and key phrase searches.

Getting your journal articles indexed in Google Scholar will:

  • Increase the reach of your individual journal articles because more scholars will be likely to find them
  • Give scholars an easy way to gauge how relevant your articles are to their research based on the article title and search snippet you provide
  • Help resurface old articles from the journals you publish — Google Scholar takes citations into account and shows more frequently cited works earlier in search results

For open access journals, the importance of Google Scholar indexing is even greater. If you want your content to be accessible, making it freely available isn’t enough — you have to be sure that anyone can find your journal articles on the web and that they aren’t only available to scholars with access to subscription-based academic abstracting and indexing databases or prior knowledge of your journals (i.e., scholars who already know to search for your specific journal website). Google Scholar makes it possible for anyone to freely search for and find relevant scholarly content on the web from anywhere in the world.

How can I tell if my journal is being indexed by Google Scholar?

As noted, Google Scholar doesn’t just index all of the content it can access on the web. Rather, it seeks to index content from what it deems to be “trusted” publication websites. If other articles from trusted websites have cited a journal article, Google Scholar will know to index it, but content that is not published on a “trusted” website and that has not been cited by an article already in Google Scholar will not be indexed right away.

In order for Google Scholar to deem a journal website trustworthy, it must follow all of Google Scholar’s technical guidelines. Journal publishers should also contact Google Scholar to request inclusion in the index. If you’re not sure whether your journals are being indexed by Google Scholar, you can quickly check by searching your journal website domain (e.g. www.examplejournal.com) in scholar.google.com.

What steps can I take to get my journals indexed by Google Scholar?

If you find that one or more of the journals you publish are not yet being indexed by Google Scholar you’ll need to take some steps to get them added to the search engine.

Google Scholar has thorough Inclusion Guidelines for Webmasters that detail how to get your articles added to the index.

Some steps you may need to take include:

  • Checking your HTML or PDF file formats to make sure the text is searchable
  • Configuring your website to export bibliographic data in HTML meta tags
  • Publishing all articles on separate webpages (i.e. each article should have its own URL)
  • Making sure that your journal websites are available to both users and crawlers at all times
  • Making sure you have a browse interface that can be crawled by Google’s robots
  • Placing each article and each abstract in a separate HTML or PDF file (Google Scholar will not index multiple articles in the same PDF)
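To make the meta-tag step concrete, here is what Highwire Press-style bibliographic tags, one of the formats Google Scholar’s inclusion guidelines support, look like in an article page’s HTML head. The tag names below are the ones Google Scholar documents; all of the values are placeholders for a hypothetical article:

```html
<!-- Highwire Press-style meta tags in an article page's <head>; values are placeholders -->
<head>
  <meta name="citation_title" content="An Example Research Article">
  <!-- Repeat citation_author once per author, in the order they appear on the paper -->
  <meta name="citation_author" content="Doe, Jane">
  <meta name="citation_author" content="Smith, John">
  <meta name="citation_publication_date" content="2020/06/15">
  <meta name="citation_journal_title" content="Example Journal">
  <meta name="citation_volume" content="12">
  <meta name="citation_issue" content="3">
  <meta name="citation_firstpage" content="101">
  <meta name="citation_lastpage" content="118">
  <!-- Point directly at the article's own PDF so crawlers can fetch the full text -->
  <meta name="citation_pdf_url" content="https://www.examplejournal.com/articles/12/3/example.pdf">
</head>
```

Per the step above, each article needs its own page carrying its own set of tags; the PDF URL should resolve to that article’s file alone.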

Google Scholar’s indexing guidelines can get pretty technical. If your journal or journals are currently hosted on a standalone website that you had custom-built or that you’re hosting via an outside provider like WordPress, you’ll need to either work with available internal IT resources to make any necessary updates or hire a web developer.

If you don’t want to deal with the technical aspects of getting your journal articles indexed in Google Scholar, you may want to consider moving your journal to a website hosted on a journal publishing platform that can take care of Google Scholar indexing for you. For example, Scholastica is already recognized as a trusted site by Google Scholar so all journals that publish via Scholastica journal websites are automatically indexed with no extra work on the part of the editors. Some journal databases, such as JSTOR or Project Muse, are also indexed by Google Scholar. So if you publish via a Google Scholar indexed aggregator or database, or if you regularly upload articles to one, you may also be able to have articles added to Google Scholar through it. You’ll want to check with any journal hosting platform or aggregator to make sure that they support indexing in Google Scholar.

However you decide to go about getting your journal articles indexed by Google Scholar, now’s the time to start! Google Scholar indexing is sure to expand the accessibility and reach of the articles you publish.

Source: https://blog.scholasticahq.com/


5 ways to increase peer review transparency to foster greater trust in the process

Another Peer Review Week is upon us, and this year’s theme, “Trust in Peer Review,” comes at a particularly apt time. The COVID-19 pandemic and other recent global crises have magnified the role of peer review as a mechanism to ensure research quality while, in many ways, putting trust in peer review to the test. Pressures on journal editorial teams and volunteer referees to peer review larger volumes of papers at record speeds have raised concerns in and outside of academia about the potential for higher levels of human error in an inherently imperfect process.

The pandemic aside, since peer review became part of scholarly publishing (a mere ~60 years ago in the 350+ year history of journals!), concerns about flawed peer review processes, overburdened reviewers, and reviewer biases have put peer review on shaky ground. Studies in the early 2010s revealing the ongoing replication crisis in many disciplines marked a particularly apprehensive time in peer review’s history. However, despite some discontent with the current state of peer review, surveys conducted in recent years, including Publons’ 2018 “Global State of Peer Review” report, have revealed that most researchers want to work to improve peer review rather than replace or bypass it. Across academia, stakeholders in scholarly journal publishing have responded by placing greater emphasis on vetting and improving peer review practices, as exhibited by last year’s Peer Review Week, themed “Quality in Peer Review.” As stakeholders work to fortify peer review, the question now is — what can be done to build up and, in many cases, rebuild trust in the process?

With the curtain pulled back on peer review’s bumps and bruises for all to see, arguably one of the best courses of action to address its limitations is to readily acknowledge them and work to make peer review practices more transparent. In this post, we round up five ways to increase peer review transparency that could help foster greater trust in the process.

Putting research questions and methods before findings

One of the primary concerns about the efficacy of peer review expressed in recent years is that the predominant peer review model — where scholars design studies, conduct them, write reports, and then receive feedback — may be contributing to the reproducibility crisis by hindering the publication of null and negative results. During a keynote address for the 2019 International Society of Managing and Technical Editors Conference (ISMTE), Brian Nosek, Executive Director for the Center for Open Science, discussed how in the “publish or perish” research culture dominated by the Journal Impact Factor and other citation-based incentive systems, tidily packaged positive research outcomes are often favored by journals over negative or inconclusive ones. As a result, scholars may be compelled to, intentionally or not, introduce research spin into their work.

To address concerns about the potential for research spin and biases against null and negative results, the Registered Reports publishing format has been gaining ground in recent years. In the Registered Reports format, peer review is split into two parts:

  1. An initial peer review of the study concept and design, used to make an acceptance or rejection determination
  2. Peer review of the finished paper — this review stage is solely to assess the quality of the research contents, not the nature of the findings

Notably, the journal Royal Society Open Science recently initiated an expedited pre-registration process for coronavirus-related submissions to help scholars avoid following false leads. Journals interested in implementing the Registered Reports publishing format can find many helpful resources on The Center for Open Science website.

Employing more open peer review practices

Once studies have been completed and peer review is underway, another trust concern many have expressed is a lack of transparency around the robustness of journal peer review, the rationale behind publication decisions, and the identities of those doing the reviewing. Journals can address high-level uncertainties about the nature of their peer review processes by providing detailed peer review policies on their publication websites. As for making the recommendations and/or identities of peer reviewers more transparent, some journals are now experimenting with more open peer review practices. Definitions and levels of open peer review can vary substantially, but the term generally refers to one or a combination of the following practices:

  • Publishing review reports alongside journal articles to make reviewer recommendations open
  • Making author and reviewer identities open to both parties and readers
  • Soliciting public peer review comments in addition to or in lieu of invited reviews

Proponents of open peer review practices argue that they promote accountability and allow for reviewer recognition. Of course, there are also arguments on the opposite side of the spectrum in favor of blinded peer review to prevent skewed referee reports. For example, some argue that when reviewers know the identities of authors, there is the potential for them to make recommendations based on implicit biases. As the various definitions of open peer review exhibit, there is no one-size-fits-all approach. So this is an area where journals can, and likely should, weigh the pros and cons of different models as they apply to their discipline and particular publication to determine the best course of action for them.

Developing shared peer-review standards and taxonomies

Another fundamental aspect of increasing peer review transparency, and ultimately trust in the process, is developing more universal peer-review standards and nomenclature (to help with hurdles in communicating peer review policies, like the many definitions of “open peer review”). STM recently launched a “Working Group on Peer Review Taxonomy” to address the definitional component of peer review standards, an effort that journal publishers should keep on their radar; the group has released a first draft of its proposed taxonomy. Another recent example of shared publishing definitions that can help increase research transparency and, consequently, trust in peer review is the CRediT (Contributor Roles Taxonomy) initiative, which introduces naming conventions for different types of author roles.

In addition to working to develop shared peer review taxonomies, scholarly publishing stakeholders have been exploring new methods of normalizing and expressing journal publishing standards, including via the TOP Factor. Launched by the Center for Open Science in February 2020, TOP Factor is a new journal assessment system based on the Transparency and Openness Promotion (TOP) Guidelines, which consists of eight publishing standards to improve research transparency and reproducibility. TOP Factor has the potential to foster a uniform shared framework for implementing and demonstrating adherence to journal publishing best practices, which would increase peer review transparency and make it easier to compare publisher processes and norms. The TOP Guidelines also promote FAIR data principles and open data, which could help to foster a more self-correcting research environment.

Facilitating the sharing of review reports across journals

Finally, a mounting trust concern to be addressed is the increasing peer-review burden faced by many scholars. As the rate of articles published across disciplines continues to increase, many worry that reviewers will struggle to keep up, potentially leading to more research errors slipping through the cracks. Adding another plot twist — in reality, a lot of the review work placed on scholars is redundant because they are being asked to vet papers that have already undergone peer review elsewhere.

One possible solution to help alleviate the peer review pressures scholars face is the Manuscript Exchange Common Approach (MECA), an initiative launched by the National Information Standards Organization (NISO) in May 2018. MECA aims to develop a framework for transferring manuscripts and review reports between different peer review systems. According to the MECA website, sharing review reports between journals could help save an estimated “15 million hours of researcher time [that] is wasted each year repeating reviews.” In the same vein, the C19 Rapid Review Initiative, a large-scale collaboration among 20 publishers to improve the efficiency of coronavirus-related research processing, is piloting a version of review report sharing by requiring reviewers to consent to have their identities and review reports shared among participating publishers and journals.

Putting it all together

The fact that peer review is imperfect, like any human endeavor, is no secret or surprise. Working to make all aspects of the peer review process more transparent could help safeguard against inevitable human errors. At the same time, it could expand the potential for more widespread assessment of peer review approaches and outcomes, something that scholars argue is needed to improve and increase trust in peer review.

Source: https://blog.scholasticahq.com/


4 ways to get higher quality peer review comments

Does the quality of the peer review comments your journal receives vary more than it should?

Well-thought-out reviewer comments aside, many editorial teams find that they too often receive comments that are:

  • Meandering and difficult to interpret
  • Sparse or lacking the level of detail needed to be constructive
  • Hyper-critical in incidental areas while missing the bigger picture

Such unfocused or insufficient reviewer comments can create snags in peer review and cause frustration for all parties involved. Many editorial teams struggle with wanting to ensure that they’re providing authors with robust feedback but feeling like reviewer comment quality is outside of their control. While you can’t guarantee that your journal will receive top-notch reviewer comments all of the time, there are some steps your editorial team can take to improve reviewer comment quality.

Use a peer review feedback form

One of the best steps editorial teams can take to improve the quality of the reviewer comments their journal receives is to require all reviewers to complete a standardized feedback form. When reviewers are left to fashion comments from a set of instructions, no matter how thorough and well-formatted those instructions may be, the likelihood of some reviewers misinterpreting or skimming over expectations is high. With a required, standardized feedback form, however, journals can ensure that all reviewers address the same key manuscript areas while deterring reviewers from giving tangential feedback. Reviewer feedback forms work best when they are automated using peer review software, so reviewers literally can’t submit comments without answering all of the necessary questions.

Find the right balance of feedback form questions and make them specific

The results of your journal’s reviewer feedback form will only be as good as its design. Once your journal has a feedback form set up, track the quality of responses submitted and make adjustments to the questions as needed to improve feedback outcomes.

The best feedback forms have a balanced number of questions that are easy to interpret. Aim to provide enough questions to adequately guide reviewers but not so many that you begin to overwhelm them. Remember, reviewer fatigue is real! Make questions specific so reviewers understand the goal of each question—this includes writing questions clearly and formatting questions to reflect the level of feedback needed. For example, journals can use open-ended questions for substantive feedback and Likert scale questions for high-level assessments and recommendations.

Some common feedback form flaws to avoid are:

  • Requiring nonessential questions: As noted, it’s important to have a balanced number of feedback form questions. Avoid filling up your form with questions you don’t need.
  • Combining questions: Make sure that each question in your feedback form is only about one thing. When questions touch on multiple areas it’s more likely for reviewers to submit unclear or partial responses.
  • Not asking for a publication recommendation: While the decision to accept a manuscript, reject it, or ask for revisions is ultimately up to your editors, asking reviewers for a direct recommendation can help ensure you’re interpreting their comments correctly.

It’s a good idea to also include an open-ended response field for comments to the editor at the end of your feedback form. This will ensure that reviewers are able to comment on all aspects of the manuscript that they think are necessary, even if your form doesn’t address them all. Over time, this field can help you to keep improving your feedback form questions by revealing any important assessment areas you’ve missed.

Limit revise and resubmit rounds

In addition to taking steps to improve initial reviewer comments, journals should set clear parameters for revise and resubmit rounds to ensure subsequent comments remain helpful and on track. As explained by former managing editor of Aztlán Journal of Chicano Studies, Wendy Laura Belcher, reviewer feedback can become less focused over multiple rounds of revisions and review. Belcher said that a common problem that editors should look out for is reviewers picking apart new areas of a submission in each round of review. This can turn into a frustrating cycle for authors and reviewers who just want to fulfill their respective expectations. Journals can avoid these situations by setting clear parameters for which aspects of the manuscript reviewers should be commenting on as well as a limit to the number of rounds of revision a manuscript can go through.

Help reviewers improve

Finally, your journal can help reviewers grow and improve the reviewer comments you receive by giving reviewers insight into the quality of their feedback. Journals can either provide reviewers with direct feedback on their comments, noting what was most useful and whether any comments were unnecessary or confusing, or they can send reviewers copies of author decision letters to provide insight into how their comments were interpreted and relayed to the author. Additionally, journals working with more early-career reviewers may want to provide some training materials such as a guide to writing constructive reviewer comments.

Source: Scholastica


3 Steps to Ensure Your Journal Receives Punctual Peer Reviews

Journal editors spend much of their time working to build out a network of possible peer reviewers for new submissions. It can be difficult to find scholars within a journal’s subject area, especially for niche publications, who are able and willing to provide regular peer reviews. As a result, most editors are constantly on the hunt for new reviewers. After searching for and securing reviewers for a manuscript, the last situation that an editor wants to be in is having one or more of those reviewers go silent.

Unresponsive reviewers can cause significant delays in a journal’s time to publication, creating stress for editors trying to get out their next issue on time and frustrating authors who are hoping to get a decision as soon as possible. What can editors do to avoid sending review assignments and hearing crickets? It can be difficult to predict whether a reviewer might become unresponsive. However, there are ways for editors to try to avoid such situations. Below we outline 3 steps you can take.

1. Check your journal’s past reviewer data before sending a review request

As you build out a reviewer database for your journal, one of the best things you can do to ensure timely reviews is to keep track of all your journal’s past reviewer activity. This can most easily be achieved via peer review software. Many systems, like Scholastica, will automatically track your journal’s reviewer activity with no added work on your part. However, if you’re not yet using a peer review system you can start tracking some reviewer stats in a spreadsheet.

Among the primary reviewer stats your journal should track are:

  • Average days for completing a review assignment
  • Pending review requests from your journal
  • Currently late reviews
  • Number of completed reviews

From the above stats you can start to glean insights into which reviewers you should reach out to and which you may want to wait on or even remove from your list. If you find that a reviewer already has a late or pending review, you’ll quickly know not to reach out to them until those assignments are addressed. Conversely, if you find that a reviewer has completed one or more reviews in a timely manner and has not declined a review invitation recently, that reviewer is likely a good candidate to contact.

Keeping a record of reviewer activity is especially important for journals with many editors, a large reviewer pool, or both. If you have multiple editors pulling from the same reviewer list without any log of reviewer activity, you’re likely to encounter more attrition in review requests because there will be a higher likelihood of editors reaching out to the same go-to reviewers too frequently and potentially turning them off from working with your journal. Even for journals with one managing editor selecting reviewers, it’s unlikely that that editor will be able to recall each reviewer’s history with the journal offhand. Having a place for the managing editor to find reviewer data will help them avoid spending hours searching through email chains to figure out when a reviewer was last contacted and how they responded.

In order to ensure consistent data, editors should aim to incorporate peer reviewer tracking into their workflows as seamlessly as possible. The more manual steps you have to take to track reviewer activity, the more likely your editors will be to forget steps, leading to incomplete or inaccurate data. With the right peer review software, you can track reviewer activity without adding any extra steps for your team. For example, editors using Scholastica enable automatic reviewer activity tracking as soon as they invite reviewers to their journal via our system.

2. Have a set peer review timeline

Once editors have identified reviewers to reach out to, one of the most taxing parts of the peer review process can be waiting for them to acknowledge and respond to the review request. Review requests can sometimes get buried in scholars’ inboxes, leading to days or even weeks of delay before they send a reply.

In order to avoid extensive wait times for reviewers to reply to invitations, one of the best things editorial teams can do is develop an established timeline for review requests. The timeline should account for one or more review reminders sent at designated times and then a final cutoff point for the reviewer to either respond to the invitation or be assumed unavailable.

Dianne Dixon, Managing Editor of the International Journal of Radiation Biology, piloted this approach to review requests and has seen great success. Her journal’s timeline includes sending an initial review request, sending a reminder four days later, sending a final reminder four days after that, and then finally removing the reviewer from the list after letting them know that she realizes they are likely unable to accept. In this closing email, Dixon asks reviewers to please let her know if they find they are able to review the manuscript. She said using this series of emails with a cutoff point for review responses has decreased delays in her journal’s peer review process.
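An escalating timeline like the one described above boils down to a simple date calculation. The sketch below uses the four-day interval from that example; the interval, and exactly when the reviewer is assumed unavailable after the final reminder, are adjustable policy choices rather than fixed rules:

```python
from datetime import date, timedelta

def request_schedule(sent_on, interval_days=4):
    """Compute reminder dates for a review request sent on `sent_on`.

    Mirrors the escalating pattern described above: a first reminder,
    a final reminder, and then a cutoff when the reviewer is assumed
    unavailable and removed from the list.
    """
    first_reminder = sent_on + timedelta(days=interval_days)
    final_reminder = first_reminder + timedelta(days=interval_days)
    cutoff = final_reminder + timedelta(days=interval_days)
    return {
        "first_reminder": first_reminder,
        "final_reminder": final_reminder,
        "cutoff": cutoff,
    }

schedule = request_schedule(date(2023, 9, 1))
# A request sent Sept. 1 gets a reminder on Sept. 5, a final reminder on
# Sept. 9, and the reviewer is assumed unavailable after Sept. 13.
```

Having the dates computed up front, rather than decided ad hoc, is what makes the cutoff credible: every reviewer gets the same number of nudges on the same cadence.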

3. Use automated reviewer reminders

After reviewers have accepted an assignment and agreed to a review deadline, it’s important for editors to periodically check in with them to ensure the review doesn’t fall off their radar. One of the best ways to do this is to send reminder emails at regular intervals.

Editors can try to block out time in their schedules to send review reminder emails, but with so many tasks to keep track of, this can often become a bit of a chore. This is another area where peer review software can step in. Many software systems enable editors to set up automatic weekly or bi-weekly reviewer assignment reminders, which editors can schedule to start sending as the assignment deadline approaches. It’s also a good idea to set up automatic late-review reminder emails; that way, late reviewers will be contacted as soon as they miss a deadline, possibly before the assigned editor even realizes.

Despite automation sometimes connoting a sense of detachment, it’s important for editors to consider how automated emails can actually help make journal communication more personal. Automated review reminders help journals stay in constant contact with reviewers and free up editors’ time for sending more thorough responses to specific reviewer questions among other benefits.

Overall emphasize the importance of reviewer communication

There is always the chance that a reviewer will accept an assignment with every intention of completing it on time but then become preoccupied with other obligations and fall behind on their deadline. Such cases are unpredictable for both the journal and reviewers.

Often when reviewers suddenly become unresponsive, the situation can be solved by encouraging reviewers to communicate if their circumstances have changed. Reviewers may be hesitant to go back on their promise, so it’s important for editors to remind them that in cases where they simply can’t complete an assignment on time the best course of action is to make that known.

The Committee on Publication Ethics’ (COPE) “Ethical Guidelines for Peer Reviewers” stresses the importance of reviewers acknowledging if they are no longer able to complete an assignment. It states that reviewers should “always inform the journal promptly if your circumstances change and you cannot fulfill your original agreement or if you require an extension.” Journals can point reviewers to these guidelines or simply remind them in review requests that the journal encourages reviewer updates, even if it means reviewers having to decline an invitation they previously accepted.

Source: Scholastica


New features: Better reviewer communication, public article analytics and more!

It’s been an exciting first half of the year for Scholastica. We now have over 700 journal users and we’re continuing to roll out new features to keep improving our software in order to best serve journal editors, authors, and reviewers. Recently, we introduced some updates to both peer review and open access publishing, including:

  • Improvements to how editors and reviewers communicate with each other
  • Easier file downloading for editors
  • Faster journal website load times and public analytics for HTML articles

Read on for the full details!

Journals can set automatic review reminder email frequency

We know that efficient communication is key throughout peer review. The easier it is for editors to check in on reviewers’ progress without inundating them with emails, and the easier it is for reviewers to quickly communicate their recommendations to editors, the better. To that end, we’ve introduced two new features to improve editor and reviewer communication.

First, we’ve given journals greater control over automated reviewer reminder emails. Now, editors can decide how frequently they want reviewers to receive automatic reminders at each stage of the peer review process — before the reviewer has responded to an invitation, after response but before the review deadline, and once the deadline has passed and the review is late.

The admin editors of journals can now set email frequencies for the following review reminder categories by going to My Journals > Settings > Configuration Options:

  • Reminders to accept outstanding invitations
  • Reminders to submit accepted reviews
  • Reminders to submit late reviews

These options will enable editors to more closely control the cadence of their reviewer outreach before and after assignments are due. For example, if your journal only wants to nudge reviewers once their reviews are late, you can elect not to send reminders to submit accepted reviews and instead send only reminders for late assignments.

Reviewers can set file permissions for feedback form attachments

In addition to giving editors more control over reviewer reminders, we’ve also made it easier for reviewers to quickly designate whether files they are attaching to their review feedback form are intended just for the journal’s editors or for the editors and the manuscript’s authors. Reviewers now have the option to upload any accompanying files to either an editors only section or an editors and authors section. With this new feature, the intended audience of each reviewer attachment should be clear, helping to avoid back and forth between editors and reviewers as well as the potential of editors forgetting to share attachments intended for the author.

Editors can download all manuscript files at once

It’s also easier for editors to access the manuscript files that they need. We know that downloading manuscripts with multiple attachments can be cumbersome, so we’ve made it possible to download a manuscript and all of its accompanying files with one click, in addition to the ability to download individual files. Now, when editors go to a manuscript’s work area they will see a “download all files” link. Click the link and get everything you need!

Source: https://blog.scholasticahq.com
