Guidelines for Book Review

The Journal for ReAttach Therapy and Developmental Diversities (JRTDD) will publish book reviews on major books across fields and sub-fields covered by the journal.

  • If you are interested in reviewing a book for the Journal, please send an expression of interest to the Editor-in-chief at journaljrtdd@gmail.com.
  • Authors and publishers should also contact the Editor if they would like their books to be reviewed in the journal.

The JRTDD seeks reviews that assess a book’s strengths and weaknesses and locate it within the current field of scholarship. A review should not simply be a listing of contents, though its overall organization and emphasis are up to the individual reviewer. Reviewers should avoid lists of minor imperfections (e.g. misplaced commas) but should not hesitate to draw attention to serious editorial problems and errors of fact or interpretation. It is also helpful if reviewers indicate for which audiences and libraries the book seems appropriate. The Book Review Editors reserve the right to edit for content and length. In summary, book reviews should be timely and objective and should consider the following:

  • The intended audience for the book and who would find it useful
  • The main ideas and major objectives of the book and how effectively these are accomplished
  • The soundness of methods and information sources used
  • The context or impetus for the book – political controversy, review research or policy, etc.
  • Constructive comments about the strengths and weaknesses of the book

Book reviews for the Journal for ReAttach Therapy and Developmental Diversities should be double-spaced, using a standard-sized Times New Roman font. All material, including long quotes, should be double-spaced. Please use minimal style formatting in your document. Reviews of single books should be between 750 and 1,000 words.

The review should begin with a hanging indent paragraph for each book in the review that includes its title, subtitle, author/s (or editor/s), and publishing information in the format given below.

All subsequent paragraphs should be indented. There are no footnotes in the Journal for ReAttach Therapy and Developmental Diversities book reviews. References to texts not under review should be parenthetical only. Also, please type your name and institution, exactly as you wish it to be published, at the end of the review. For example: Frederik Johnson, University of Amsterdam, The Netherlands.

The following information should be given about the book being reviewed at the start of each review:

  • Author/Editor Name, Book Title, Publisher, Year of Publication, ISBN: 000-0-00-000000-0, Number of Pages, Price

What is Open Peer Review?

Peer review is a way by which manuscripts can be assessed for their quality. Reviewers scrutinize the draft of a journal article, grant application, or other work and provide feedback to the author(s) for improving the text. Reviewers don’t just read the text, but also evaluate whether the research presented is sound, whether the methods used in the research are in keeping with basic scientific protocols, and whether the analysis of the results is valid. In addition, reviewers need to determine whether the subject matter (e.g., a research study) is in keeping with the journal’s particular field of study and is a sufficiently novel scientific contribution to warrant publication.

Even with the system in place, there are conflicting views about peer review and its merits. Most researchers believe that the current peer review system is lacking, yet they also agree that peer reviews are valuable in improving their papers and helping get them published. A 2008 study found that approximately one-third of those asked thought the system could be improved.

Reviewer Anonymity

As mentioned, peer reviewers also assess grant applications, and this process has its flaws. In an article in Times Higher Education, the author describes the grant review process as one that stifles innovative research and reduces each application to a numerical score. Funding sources presume that the reviewers are unbiased and knowledgeable, and the funding agencies often base their decisions on these scores without ever having read the actual research proposal. The suggestion that reviewers reveal their identities aims to remove any doubt about reviewer qualifications and to hold reviewers more accountable for these types of scoring protocols.

In an article published on F1000 Research, an open-research publishing platform, author Tony Ross-Hellauer describes some of the criticisms of the anonymous peer review as follows:

  • Unreliable and inconsistent: Reviewers rarely agree with one another, and decisions to reject or accept a paper are not consistent. Papers that had already been published have been rejected when resubmitted to the same journal later.
  • Publication delays and costs: At times, the traditional peer review process can delay publication of an article for a year or more. When “time is money” and research opportunities must be taken advantage of, this delay can be a huge cost to the researcher.
  • No accountability: Because of anonymity, reviewers can be biased, have conflicts of interest, or purposely give a bad review (or even a stellar review) because of some personal agenda.
  • Biases: Although they should remain impartial, reviewers have biases based on sex, nationality, language or other characteristics of the author(s). They can also be biased against the study subject or new methods of research.
  • No incentives: In most countries, reviewers volunteer their time. Some feel that this is part of their job as a scholar; however, others might feel unappreciated for their time and talent. This might have an impact on the reviewer’s incentive to perform.
  • Wasted information: Discussions between editors and reviewers or between reviewers and authors are often valuable information for younger researchers. It can help provide them with guidelines for the publishing process. Unfortunately, this information is never passed on.

 

Because of these obvious dissatisfactions with the peer review process, a change to “open peer review” or “OPR” has been suggested. The premise is that an open peer review process would avoid many of the issues listed above.

What Is OPR?

OPR was first considered about 30 years ago but became more popular in the 1990s. Originally defined only as revealing a reviewer’s identity, it has since expanded to include other innovations. Although suggested as a means to help streamline the process and ensure honest reviews, the actual definition of OPR has eluded those in the research and publishing fields. A plethora of different meanings prompted a study on the accepted definitions of the term.

In an article published on F1000 Research, author Tony Ross-Hellauer delves into the many different definitions of OPR, creating a “corpus of 122 definitions” for the study. Ross-Hellauer reminds us that there is as yet no standard definition of OPR. On the contrary, many definitions overlap and even contradict each other, for example:

  • Identities of both author and reviewer are disclosed,
  • Reviewer reports are published alongside articles,
  • Both of these conditions,
  • Not just invited experts are able to comment, and/or
  • Combinations of these and other new methods.

 

These definitions are very open-ended, and those discussing OPR use one, some, or all of these in combination.

The Study on OPR

Ross-Hellauer reviewed the literature (e.g., Web of Science, PubMed, Google Scholar, and BioMed) for articles that mentioned “open review” or OPR and found 122 definitions of the term! The author then reviewed and classified all 122 definitions according to a set of traits that were new to the traditional peer review process. He concluded by defining seven traits of OPR, offered as a basis for a standard definition:

  • Open identities: Authors’ and reviewers’ identities revealed.
  • Open reports: Reviews published with the article.
  • Open participation: Readers able to contribute to the review process.
  • Open interaction: Reciprocal discussions between parties.
  • Open pre-review manuscripts: Manuscripts immediately available before formal peer review.
  • Open final-version commenting: Review or commenting on final “version of record” publications.
  • Open platforms: Review facilitated by an entity other than the venue of publication.

 

It appears from various studies that OPR is a valuable revision of the old peer review process and its anonymity. Although most agree that peer review has always been valuable, it is not without its faults, and there is hope that the OPR system will address many of these criticisms.

Resource: https://www.enago.com/


4 ways to get higher quality peer review comments

Does the quality of the peer review comments your journal receives vary more than it should?

Well-thought-out reviewer comments aside, many editorial teams find that they too often receive comments that are:

  • Meandering and difficult to interpret
  • Sparse or lacking the level of detail needed to be constructive
  • Hyper-critical in incidental areas while missing the bigger picture

Such unfocused or insufficient reviewer comments can create snags in peer review and cause frustration for all parties involved. Many editorial teams struggle with wanting to ensure that they’re providing authors with robust feedback but feeling like reviewer comment quality is outside of their control. While you can’t guarantee that your journal will receive top-notch reviewer comments all of the time, there are some steps your editorial team can take to improve reviewer comment quality.

Use a peer review feedback form

One of the best steps editorial teams can take to improve the quality of reviewer comments their journal receives is to require all reviewers to complete a standardized feedback form. When reviewers are left to fashion comments from a set of instructions, no matter how thorough and well-formatted those instructions may be, the likelihood of some reviewers misinterpreting or skimming over expectations is high. By using a required, standard feedback form, however, journals can ensure that all reviewers address the same key manuscript areas while deterring them from giving tangential feedback. Reviewer feedback forms work best when they are automated using peer review software, so reviewers literally can’t submit comments without answering all of the necessary questions.

Find the right balance of feedback form questions and make them specific

The results of your journal’s reviewer feedback form will only be as good as its design. Once your journal has a feedback form set up, track the quality of responses submitted and make adjustments to the questions as needed to improve feedback outcomes.

The best feedback forms have a balanced number of questions that are easy to interpret. Aim to provide enough questions to adequately guide reviewers but not so many that you begin to overwhelm them. Remember, reviewer fatigue is real! Make questions specific so reviewers understand the goal of each question—this includes writing questions clearly and formatting questions to reflect the level of feedback needed. For example, journals can use open-ended questions for substantive feedback and Likert Scale questions for high-level assessments and recommendations.

Some common feedback form flaws to avoid are:

  • Requiring nonessential questions: As noted, it’s important to have a balanced number of feedback form questions. Avoid filling up your form with questions you don’t need.
  • Combining questions: Make sure that each question in your feedback form is about only one thing. When questions touch on multiple areas, reviewers are more likely to submit unclear or partial responses.
  • Not asking for a publication recommendation: While the decision to accept a manuscript, reject it, or ask for revisions is ultimately up to your editors, asking reviewers for a direct recommendation can help ensure you’re interpreting their comments correctly.

It’s a good idea to also include an open-ended response field for comments to the editor at the end of your feedback form. This will ensure that reviewers are able to comment on all aspects of the manuscript that they think are necessary, even if your form doesn’t address them all. Over time, this field can help you to keep improving your feedback form questions by revealing any important assessment areas you’ve missed.
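To make this concrete, here is a minimal sketch of such a feedback form, expressed in Python. The question ids, prompts, and structure are hypothetical illustrations rather than any real journal’s form or software; the sketch simply shows one way to mix Likert-scale and open-ended items, require a publication recommendation, and flag incomplete submissions.

```python
# Hypothetical sketch of a reviewer feedback form; field names and prompts are illustrative.
feedback_form = [
    {"id": "significance", "type": "likert", "scale": (1, 5), "required": True,
     "prompt": "How significant is the manuscript's contribution to the field?"},
    {"id": "methods", "type": "likert", "scale": (1, 5), "required": True,
     "prompt": "How sound are the methods and information sources used?"},
    {"id": "major_comments", "type": "open", "required": True,
     "prompt": "Substantive comments on the manuscript's main ideas and arguments."},
    {"id": "recommendation", "type": "choice", "required": True,
     "options": ["accept", "minor revisions", "major revisions", "reject"],
     "prompt": "Your publication recommendation."},
    {"id": "editor_comments", "type": "open", "required": False,
     "prompt": "Any other comments, confidential to the editor."},
]

def missing_required(responses: dict) -> list:
    """Return the ids of required questions that were left unanswered."""
    return [q["id"] for q in feedback_form
            if q["required"] and not responses.get(q["id"])]

# A review missing the publication recommendation would be flagged before submission.
print(missing_required({"significance": 4, "methods": 3,
                        "major_comments": "Clear aims, but the sample is small."}))
# -> ['recommendation']
```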

Limit revise and resubmit rounds

In addition to taking steps to improve initial reviewer comments, journals should set clear parameters for revise and resubmit rounds to ensure subsequent comments remain helpful and on track. As explained by former managing editor of Aztlán Journal of Chicano Studies, Wendy Laura Belcher, reviewer feedback can become less focused over multiple rounds of revisions and review. Belcher said that a common problem that editors should look out for is reviewers picking apart new areas of a submission in each round of review. This can turn into a frustrating cycle for authors and reviewers who just want to fulfill their respective expectations. Journals can avoid these situations by setting clear parameters for which aspects of the manuscript reviewers should be commenting on as well as a limit to the number of rounds of revision a manuscript can go through.

Help reviewers improve

Finally, your journal can help reviewers grow and improve the reviewer comments you receive by giving reviewers insight into the quality of their feedback. Journals can either provide reviewers with direct feedback on their comments, noting what was most useful and whether any comments were unnecessary or confusing, or they can send reviewers copies of author decision letters to provide insight into how their comments were interpreted and relayed to the author. Additionally, journals working with more early-career reviewers may want to provide some training materials such as a guide to writing constructive reviewer comments.

Source: Scholastica


3 Steps to Ensure Your Journal Receives Punctual Peer Reviews

Journal editors spend much of their time working to build out a network of possible peer reviewers for new submissions. It can be difficult to find scholars within a journal’s subject area, especially for niche publications, who are able and willing to provide regular peer reviews. As a result, most editors are constantly on the hunt for new reviewers. After searching for and securing reviewers for a manuscript, the last situation that an editor wants to be in is having one or more of those reviewers go silent.

Unresponsive reviewers can cause significant delays in a journal’s time to publication, creating stress for editors trying to get out their next issue on time and frustrating authors who are hoping to get a decision as soon as possible. What can editors do to avoid sending review assignments and hearing crickets? It can be difficult to predict whether a reviewer might become unresponsive. However, there are ways for editors to try to avoid such situations. Below we outline 3 steps you can take.

1. Check your journal’s past reviewer data before sending a review request

As you build out a reviewer database for your journal, one of the best things you can do to ensure timely reviews is to keep track of all your journal’s past reviewer activity. This can most easily be achieved via peer review software. Many systems, like Scholastica, will automatically track your journal’s reviewer activity with no added work on your part. However, if you’re not yet using a peer review system you can start tracking some reviewer stats in a spreadsheet.

Among the primary reviewer stats your journal should track are:

  • Average days for completing a review assignment
  • Pending review requests from your journal
  • Currently late reviews
  • Number of completed reviews

From the above stats you can start to glean insights into which reviewers you should reach out to and which you may want to wait on or even remove from your list. If you find that a reviewer already has a late or pending review, you’ll quickly know not to reach out to them until those assignments are addressed. Conversely, if you find that a reviewer has completed one or more reviews in a timely manner and has not declined a review invitation recently, that reviewer is likely a good candidate to contact.
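As an illustration of what this bookkeeping can look like, here is a minimal sketch in Python. The records, field names, and thresholds are hypothetical; a spreadsheet or a peer review system’s built-in reports would serve the same purpose.

```python
# Hypothetical reviewer records; in practice these stats would come from a
# spreadsheet or peer review software rather than hand-written dictionaries.
reviewers = [
    {"name": "Reviewer A", "days_per_review": [21, 18, 30],
     "pending_requests": 0, "late_reviews": 0},
    {"name": "Reviewer B", "days_per_review": [45],
     "pending_requests": 1, "late_reviews": 1},
]

def average_days(record: dict) -> float:
    """Average days taken per completed review (0 if none completed)."""
    days = record["days_per_review"]
    return sum(days) / len(days) if days else 0.0

def good_candidates(records, max_avg_days: float = 35.0) -> list:
    """Reviewers with no pending or late assignments and a reasonable turnaround."""
    return [r["name"] for r in records
            if r["pending_requests"] == 0
            and r["late_reviews"] == 0
            and r["days_per_review"]
            and average_days(r) <= max_avg_days]

print(good_candidates(reviewers))  # -> ['Reviewer A']
```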

Keeping a record of reviewer activity is especially important for journals with many editors, a large reviewer pool, or both. If you have multiple editors pulling from the same reviewer list without any log of reviewer activity, you’re likely to encounter more attrition in review requests because there will be a higher likelihood of editors reaching out to the same go-to reviewers too frequently and potentially turning them off from working with your journal. Even for journals with one managing editor selecting reviewers, it’s unlikely that that editor will be able to recall each reviewer’s history with the journal offhand. Having a place for the managing editor to find reviewer data will help them avoid spending hours searching through email chains to figure out when a reviewer was last contacted and how they responded.

In order to ensure consistent data, editors should aim to incorporate peer reviewer tracking into their workflows as seamlessly as possible. The more manual steps you have to take to track reviewer activity, the more likely your editors will be to forget steps, leading to incomplete or inaccurate data. With the right peer review software, you can track reviewer activity without adding any extra steps for your team. For example, editors using Scholastica enable automatic reviewer activity tracking as soon as they invite reviewers to their journal via our system.

2. Have a set peer review timeline

Once editors have identified reviewers to reach out to, one of the most taxing parts of the peer review process can be waiting for them to acknowledge and respond to the review request. Review requests can sometimes get buried in scholars’ inboxes, leading to days or even weeks of delay before they send a reply.

In order to avoid extensive wait times for reviewers to reply to invitations, one of the best things editorial teams can do is develop an established timeline for review requests. The timeline should account for one or more review reminders sent at designated times and then a final cutoff point for the reviewer to either respond to the invitation or be assumed unavailable.

Dianne Dixon, Managing Editor of the International Journal of Radiation Biology, piloted this approach to review requests and has seen great success. Her journal’s timeline includes sending an initial review request, sending a reminder four days later, sending a final reminder four days after that, and then finally removing the reviewer from the list after letting them know that she realizes they are likely unable to accept. In this closing email, Dixon asks reviewers to please let her know if they find they are able to review the manuscript. She said using this series of emails with a cutoff point for review responses has decreased delays in her journal’s peer review process.
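Expressed as dates, that cadence looks roughly like the sketch below. The starting date and the exact cutoff after the final reminder are illustrative assumptions; only the four-day intervals come from the description above.

```python
# Sketch of the review-request timeline described above: initial request,
# a reminder 4 days later, a final reminder 4 days after that, and then the
# reviewer is assumed unavailable. The cutoff interval is an assumption.
from datetime import date, timedelta

def request_schedule(initial_request: date) -> dict:
    return {
        "initial_request": initial_request,
        "first_reminder": initial_request + timedelta(days=4),
        "final_reminder": initial_request + timedelta(days=8),
        "assume_unavailable": initial_request + timedelta(days=12),  # assumed cutoff
    }

for step, when in request_schedule(date(2020, 7, 6)).items():
    print(f"{step}: {when.isoformat()}")
```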

3. Use automated reviewer reminders

After reviewers have accepted an assignment and agreed to a review deadline, it’s important for editors to periodically check in with them to ensure the review doesn’t fall off their radar. One of the best ways to do this is to send reminder emails at regular intervals.

Editors can try to block out time in their schedules to send review reminder emails, but with so many tasks to keep track of this can often become a bit of a chore. This is another area where peer review software can step in. Many software systems enable editors to set up automatic weekly or bi-weekly reviewer assignment reminders, which editors can schedule to start sending as the assignment deadline approaches. It’s also a good idea to set up automatic late-review reminder emails; that way, journals know that late reviewers will be contacted as soon as they miss an assignment, possibly before the assigned editor even realizes it.

Despite automation sometimes connoting a sense of detachment, it’s important for editors to consider how automated emails can actually help make journal communication more personal. Automated review reminders help journals stay in constant contact with reviewers and free up editors’ time for sending more thorough responses to specific reviewer questions among other benefits.

Overall, emphasize the importance of reviewer communication

There is always the chance that a reviewer will accept an assignment with every intention of completing it on time but then become preoccupied with other obligations and fall behind on their deadline. Such cases are unpredictable for both the journal and reviewers.

Often when reviewers suddenly become unresponsive, the situation can be solved by encouraging reviewers to communicate if their circumstances have changed. Reviewers may be hesitant to go back on their promise, so it’s important for editors to remind them that in cases where they simply can’t complete an assignment on time the best course of action is to make that known.

The Committee on Publication Ethics’ (COPE) “Ethical Guidelines for Peer Reviewers” stresses the importance of reviewers acknowledging if they are no longer able to complete an assignment. It states that reviewers should “always inform the journal promptly if your circumstances change and you cannot fulfill your original agreement or if you require an extension.” Journals can point reviewers to these guidelines or simply remind them in review requests that the journal encourages reviewer updates, even if it means reviewers having to decline an invitation they previously accepted.

Source: Scholastica


Share, cite, mention, link JRTDD articles

Dear readers and potential authors,

As you already know, we released the newest issue (Volume 3, Issue 1) on July 5th, and you can find it here. I want to draw your attention to the importance of social media in scientific publishing. Several scientific articles show that papers which are shared, mentioned, or linked on social media such as Facebook, Twitter, LinkedIn, Mendeley, ResearchGate, Academia.edu and others are cited more than those which are not. I hope all of you have at least one profile on these social media platforms, and I would like to ask you to do the same with your papers published previously in our Journal for ReAttach Therapy and Developmental Diversities. In the right-hand menu of our website you can find social media buttons, so this is very easy to do: go to an article of your choice and share or link it. It will take you less than one minute.

With this, you will increase the visibility of JRTDD papers and the likelihood of their being cited by other authors. The journal will also increase its visibility and international impact in the fields of ReAttach therapy, developmental diversities, and rehabilitation sciences.

I also hope that you will consider submitting a paper to JRTDD in the near future. With warm regards,

JRTDD Editor-in-chief


Experiences of Family Caregivers of Individuals with ID and Dementia

Christina N. MARSACK-TOPOLEWSKI1,
Anna M. BRADY2

1Eastern Michigan University College of Health and Human Services,
School of Social Work, Michigan, USA
2Erskine College, Special Education Department, South Carolina, USA
E-mail: ctopole1@emich.edu
Received: 04-May-2020
Revised: 28-May-2020
Accepted: 02-June-2020
Online first: 03-June-2020

Abstract

Introduction: Dementia involves a number of impairments in cognitive functioning that affect everyday operational tasks and functions. Individuals with intellectual disability (ID) may experience dementia earlier and at a greater rate than the general population. Dementia can pose complex challenges for individuals with ID and their caregivers.

Aim: A qualitative phenomenological study was used to examine the lived experiences of caregivers of individuals diagnosed with both ID and dementia.

Method: Individual interviews were conducted among six participants, who were all family caregivers of individuals diagnosed with both ID and dementia.

Results: Based on the results from the content analysis of interview responses, four themes emerged: (a) difficulty getting a dementia diagnosis, (b) barriers to obtaining services, (c) caregiving realities and challenges, and (d) rewards of caregiving.

Implications for Practice: To support caregivers, practitioners should be adequately trained in this dual diagnosis so they can assess support needs and help caregivers obtain adequate services.

Conclusion: As individuals with ID continue to live longer and age, many will experience dementia. Caregivers of individuals with ID and dementia are often an overlooked, vulnerable population. Practitioners should be aware of their needs in order to provide adequate support to this caregiving population and individuals with ID and dementia.

Key words: caregiving, dementia, intellectual disability, developmental disabilities

Citation: Marsack-Topolewski, C. N., Brady, A. M. Experiences of Family Caregivers of Individuals with ID and Dementia. Journal for ReAttach Therapy and Developmental Diversities, 2020 Jul 05; 3(1):54-64. https://doi.org/10.26407/2020jrtdd.1.29

Full Text Article


Effect of auditory training intervention on auditory perception problem of children with perceptual disorders in Nigeria

Patricia KWALZOOM LONGPOE
Department of Special Education & Rehabilitation Sciences
University of Jos, Jos. Nigeria
E-mail: atinuola70@gmail.com
Received: 20-March-2020
Revised: 17-April-2020
Accepted: 22-April-2020
Online first: 23-April-2020

Abstract

Introduction: Perceptual disorders are a broad group of disturbances or dysfunctions of the central nervous system that interfere with the conscious mental recognition of sensory stimuli. Such conditions are caused by lesions of specific sites in the cerebral cortex that may result from any illness or trauma affecting the brain at any age or stage of development.

Purpose: The purpose of the study was to find and establish the effect of auditory training intervention on the auditory perception problems of children with perceptual disorders in Alheri Special School, Yangoji, Kwali Abuja, Nigeria.

Methods: This study adopted a quasi-experimental design. Specifically, a case study report approach was applied, with two (2) children identified with perceptual disorders as participants. Two sets of instruments were adapted and validated.

Results: The results of the study revealed that the auditory perception of children A and B at pre-test was significantly low, and an increase in the levels of auditory perception was recorded for both children at post-test. The findings also showed the extent to which auditory training improves auditory discrimination, awareness, figure-ground, memory and auditory blending in children with perceptual disorders.

Conclusion: The study concluded that children with perceptual disorders who have auditory perceptual problems improved in their auditory perception, and that there is a need for more auditory training therapy for children with perceptual disorders. The study recommended that teachers and professionals develop a positive attitude towards auditory training therapy for children with perceptual disorders.

Key words: Perceptual Disorders, Auditory Perception, Auditory Training

Citation: Kwalzoom Longpoe P. Effect of auditory training intervention on auditory perception problem of children with perceptual disorders in Nigeria. Journal for ReAttach Therapy and Developmental Diversities, 2020 Jul 05; 3(1):42-53. https://doi.org/10.26407/2020jrtdd.1.27

Full Text Article


Dissociative Phenomenology and General Health in Normal Population

Sushma RATHEE1,
Pradeep KUMAR2

1Department of Psychology, Mahrishi Dayanand University,
Rohtak, Haryana, India
2Consultant Psychiatric Social Work, Pt. B.D.S., PGIMS, Rohtak, India
E-mail: sushmaratheecp@gmail.com
Received: 01-June-2020
Revised: 23-June-2020
Accepted: 02-July-2020
Online first: 03-July-2020

Abstract

Background: Dissociative symptoms are most commonly found in females and adolescents, often from lower socio-economic backgrounds and rural areas, and they are always preceded by psychosocial stressors. Dissociative disorders, previously known as “hysteria”, have been described since antiquity, and Hippocrates even hypothesised a “wandering uterus” to be the cause of dissociation in females. With advances in science, there has been a shift from these religious and spiritual concepts to a scientific basis for dissociation.

Aim: To assess dissociative phenomenology and subjective health in the normal population.

Methods: A group of 100 participants (50 females and 50 males) was selected from the community using a snowball sampling technique.

Tools: Socio-demographic data sheet, General Health Questionnaire-12 and Dissociative Experience Scale-II were used.

Results: The study found that females differed from males in the reporting of subjective health ratings (χ²=5.76, p=0.01), and similar results were shown in terms of dissociative phenomenology (χ²=67.76, p=0.001).

Discussion: It has been found that only 4% from the female group and 2% from the male group rated their health under the “normal” category. 52% of females and 64% of males were categorised under “mild ill health” and 24% to 26% were in “moderate ill health”, whereas 20% of female participants and 8% of male participants rated their health as “severely ill”. In another domain of the study, dissociative phenomenology, 32% of female participants reported severe dissociative symptoms and 38% of male participants also showed similar results.

Conclusion: Dissociative disorder significantly affects the population, but it is hard to diagnose due to factors such as cultural and socio-economic factors. The study clearly shows that dissociative symptoms are also found in the general population.

Key words: Dissociation, Phenomenology, General Health, Disorder, Healthy population

Citation: Rathee, S., Kumar, P. Dissociative Phenomenology and General health in Normal Population. Journal for ReAttach Therapy and Developmental Diversities, 2020 Jul 05; 3(1):34-41. https://doi.org/10.26407/2020jrtdd.1.32

Full Text Article


JRTDD into Mendeley

Dear readers,

I want to inform you that JRTDD articles have been deposited in Mendeley.

What is Mendeley?

Mendeley is a free reference manager that can help you collect references, organize your citations, and create bibliographies.

The strength of Mendeley, however, is what it adds to that. Mendeley is also an academic social network that enables you to share your research with others. Researchers can collaborate online in public or private groups, and search for papers in the Mendeley group database of over 30 million papers. Mendeley can help you connect with other scholars and the latest research in your subject area. Because Mendeley is now owned by Elsevier, the leading provider of science and health information, it integrates with ScienceDirect.

Mendeley is a research management tool.

With Mendeley, you can:

  • Collect references from the Web and UCI databases
  • Automatically generate citations and bibliographies
  • From within your citation library, read, annotate and highlight PDFs
  • Collaborate with other researchers online
  • Import papers from other research software
  • Find relevant papers based on what you’re reading
  • Access your papers from anywhere online
  • Read papers on the go with your iPhone or iPad
  • Build a professional presence with your Mendeley profile

Mendeley works with Windows, Mac and Linux.

Source: https://guides.lib.uci.edu/

JRTDD Editor-in-chief


JRTDD indexed into QOAM (Quality Open Access Market)

Dear readers,

We have another indexing of JRTDD into QOAM (Quality Open Access Market).

What is QOAM?

Quality of Service

It is important to realise that ‘quality’ in the context of QOAM relates to the quality of a journal’s service to authors, rather than to a hypothesized quality of a journal’s scientific and scholarly content as based on citation metrics.

In QOAM, academic authors score the experience they have had with the journal’s peer review and editorial board from 1 to 5 via a concise journal score card. The QoS indicator of a journal is then defined as the product of the average score of the journal and the ‘robustness’ of this score.

The robustness relates the number N of score cards to the number A of articles (read: DOIs) of the journal, both measured over the same period of time. As the number of articles of a journal varies widely, some logarithmic scaling is used to bring the result within scope. Journals with fewer than 10 articles are left aside. The time period is a moving window over the current year and the previous two years.

The actual formula for the robustness is: 1 + log (N/log A), with A ≥ 10.
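As a rough numerical illustration of these definitions (assuming the logarithms are base 10, which the QOAM description does not specify), the robustness and the resulting QoS indicator could be computed like this:

```python
# Sketch of the QOAM robustness and QoS indicator described above.
# Assumes base-10 logarithms; the QOAM description does not state the base.
from math import log10

def robustness(n_scorecards: int, n_articles: int) -> float:
    """1 + log(N / log A), defined only for journals with A >= 10 articles."""
    if n_articles < 10:
        raise ValueError("Journals with fewer than 10 articles are left aside.")
    return 1 + log10(n_scorecards / log10(n_articles))

def qos_indicator(average_score: float, n_scorecards: int, n_articles: int) -> float:
    """QoS = average score card rating (1 to 5) times the robustness of that score."""
    return average_score * robustness(n_scorecards, n_articles)

# Example: 20 score cards and 200 articles over the window, average score 4.0.
print(round(qos_indicator(4.0, 20, 200), 2))  # ~7.76
```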

Price information

In QOAM, the publication fee of a journal is found behind the tab ‘Price information’ on the detail page of a journal under the respective headings ‘List price’ and ‘My discount’. The first one is gathered from the journal’s web site; information about institutional discounts comes from licence brokers, like SURFmarket, publishers or libraries.

Privacy policy

QOAM is a free service, based on academic crowd sourcing. QOAM uses no cookies and can be visited anonymously. Conversely, author reviews in QOAM are named.
In order to publish a score card in QOAM one has to log in via one’s institutional email address. In practice this means that QOAM collects the names and institutional email addresses of the reviewers. No other information is collected. The names are used to sign the score cards and are publicly visible. An author’s institutional email address, however, is only shown to other authors of score cards. No other uses of these data are foreseen.
Underlying this policy are the views that (1) anonymous score cards are prone to misuse and should be avoided in QOAM and (2) authors of score cards should be able to contact each other for dialogue.
Finally, QOAM uses the HTTPS protocol for secure exchange of data. QOAM data are stored in the Netherlands and governed by Dutch and, where applicable, European law.

Source: https://www.qoam.eu/
