We have created a link to OpenCitations on the JRTDD website.
OpenCitations is a scholarly infrastructure organization dedicated to open scholarship and the publication of open bibliographic and citation data by the use of Semantic Web (Linked Data) technologies, and engaged in advocacy for semantic publishing and open citations. It provides the OpenCitations Data Model and the SPAR (Semantic Publishing and Referencing) Ontologies for encoding scholarly bibliographic and citation data in RDF, and open software of generic applicability for searching, browsing and providing APIs over RDF triplestores. It has developed the OpenCitations Corpus (OCC) of open downloadable bibliographic and citation data recorded in RDF, and a system and resolution service for Open Citation Identifiers (OCIs), and it is currently developing a number of Open Citation Indexes using the data openly available in third-party bibliographic databases.
OpenCitations is currently working to expand and improve the supporting infrastructure of the OpenCitations Corpus (OCC), our open repository of scholarly citation data made available under a Creative Commons public domain dedication, which provides in RDF accurate citation information (bibliographic references) harvested from the scholarly literature. These data are described using the SPAR Ontologies according to the OpenCitations Data Model, and are made freely available so that others may freely build upon, enhance and reuse them for any purpose, without restriction under copyright or database law.
I want to announce that you can now find #JRTDD articles as XML files, which will increase the visibility and likely improve the indexing of our journal.
What is XML?
XML (Extensible Markup Language) is a file format used to create common information formats and to share both the format and the data on the World Wide Web, intranets, and elsewhere using standard ASCII text.
XML is similar to HTML. Both XML and HTML contain markup symbols to describe the contents of a page or file. HTML, however, describes the content of a Web page (mainly text and graphic images) only in terms of how it is to be displayed and interacted with. For example, the letter “p” placed within markup tags starts a new paragraph.
XML describes the content in terms of what data is being described. For example, the word “phonenum” placed within markup tags could indicate that the data that followed was a phone number. An XML file can be processed purely as data by a program or it can be stored with similar data on another computer or it can be displayed, like an HTML file. For example, depending on how the application in the receiving computer wanted to handle the phone number, it could be stored, displayed, or dialed.
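To illustrate the "phonenum" example above, here is a minimal sketch of how a program might process such a file purely as data (the element names and values are hypothetical, invented for illustration), using Python's standard library:

```python
import xml.etree.ElementTree as ET

# A hypothetical XML snippet: the tags describe what the data *is*
# (a name, a phone number), not how it should be displayed.
xml_data = """
<contact>
    <name>Jane Doe</name>
    <phonenum>555-0123</phonenum>
</contact>
"""

# A program can read the file as structured data rather than as a page.
root = ET.fromstring(xml_data)
phone = root.find("phonenum").text
print(phone)  # 555-0123
```

The receiving application is then free to decide what to do with the value — store it, display it, or dial it — exactly because the markup identifies the data's meaning rather than its presentation.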
You can find articles from Vol. 1, Issue 1 in XML here. I would like to express my deep gratitude to our web administrator @Gjorgji Pop Gjorgjiev for giving us this opportunity.
Theme of 2018 International Open Access Week To Be “Designing Equitable Foundations for Open Knowledge”
The 2018 Open Access Week Advisory Committee is pleased to announce that the theme for the 2018 International Open Access Week, to be held October 22-28, will be “Designing Equitable Foundations for Open Knowledge.”
This year’s theme reflects a scholarly system in transition. While governments, funders, universities, publishers, and scholars are increasingly adopting open policies and practices, how these are actually implemented is still in flux. As open becomes the default, all stakeholders must be intentional about designing these new, open systems to ensure that they are inclusive, equitable, and truly serve the needs of a diverse global community. This year’s Open Access Week invites all interested stakeholders to participate in advancing this important work.
Setting the default to open is an essential step toward making our system for producing and distributing knowledge more inclusive, but it also comes with new challenges to be addressed. How do we ensure sustainability models used for open access are not exclusionary? What are inequities that open systems can recreate or reinforce? Whose voices are prioritized? Who is excluded? How does what counts as scholarship perpetuate bias? What are areas where openness might not be appropriate?
These are not questions with easy answers. Rather, they are prompts for ongoing conversations that can help ensure that the foundation for a more equitable system of open research and scholarship is created thoughtfully and collaboratively. This year’s theme highlights the importance of asking the tough questions, staying critical, and actively engaging in an ongoing conversation to learn from diverse perspectives about how to make scholarship more equitable and inclusive as it becomes more open.
Established by SPARC and partners in the student community in 2008, International Open Access Week is an opportunity to take action in making openness the default for research—to raise the visibility of scholarship, accelerate research, and turn breakthroughs into better lives. This year’s Open Access Week will be held from October 22nd through the 28th; however, those celebrating the week are encouraged to schedule local events whenever is most suitable during the year and to utilize themes that are most effective locally.
The global, distributed nature of Open Access Week will play a particularly important role in this year’s theme. Strategies and structures for opening knowledge must be co-designed in and with the communities they serve—especially those that are often marginalized or excluded from these discussions altogether.
International Open Access Week is an important opportunity to catalyze new conversations, create connections across and between communities that can facilitate this co-design, and advance progress to build more equitable foundations for opening knowledge—discussion and action that must continue throughout the year, year in and year out. Diversity, equity, and inclusion must be prioritized year-round and integrated into the fabric of the open community, from how our infrastructure is built to how we organize community events.
For more information about International Open Access Week, please visit www.openaccessweek.org. You can follow the conversation on Twitter at #OAWeek.
Translations of this announcement are available in Chinese, Hindi, Portuguese, and Spanish. If you are interested in contributing a translation of this year’s theme or the full announcement in another language, you can find instructions for doing so here.
Graphics for this year’s Open Access Week theme are available at http://www.openaccessweek.org/page/graphics
SPARC®, the Scholarly Publishing and Academic Resources Coalition, is a global coalition committed to making Open the default for research and education. SPARC empowers people to solve big problems and make new discoveries through the adoption of policies and practices that advance Open Access, Open Data, and Open Education. Learn more at sparcopen.org.
About International Open Access Week
International Open Access Week is a global, community-driven week of action to open up access to research. The event is celebrated by individuals, institutions and organizations across the world, and its organization is led by a global advisory committee. The official hashtag of Open Access Week is #OAweek.
It’s been an exciting first half of the year for Scholastica. We now have over 700 journal users and we’re continuing to roll out new features to keep improving our software in order to best serve journal editors, authors, and reviewers. Recently, we introduced some updates to both peer review and open access publishing, including:
- Improvements to how editors and reviewers communicate with each other
- Easier file downloading for editors
- Faster journal website load times and public analytics for HTML articles
Read on for the full details!
Journals can set automatic review reminder email frequency
We know that efficient communication is key throughout peer review. The easier it is for editors to check in on reviewers’ progress without inundating them with emails and the easier it is for reviewers to quickly communicate their recommendations to editors the better. To that end, we’ve introduced two new features to improve editor and reviewer communication.
First, we’ve given journals greater control over automated reviewer reminder emails. Now, editors can decide how frequently they want reviewers to receive automatic reminders at each stage of the peer review process — before the reviewer has responded to an invitation, after response but before the review deadline, and once the deadline has passed and the review is late.
Journal admin editors can now set email frequencies for the following review reminder categories by going to My Journals > Settings > Configuration Options:
- Reminders to accept outstanding invitations
- Reminders to submit accepted reviews
- Reminders to submit late reviews
These options will enable editors to more closely control the cadence of their reviewer outreach before and after assignments are due. For example, if your journal does not want to send reviewers reminder emails to complete their reviews unless they are late, you can elect not to send any reminders to submit accepted reviews and instead send reminders only for late assignments.
In addition to giving editors more control over reviewer reminders, we’ve also made it easier for reviewers to quickly designate whether files they are attaching to their review feedback form are intended just for the journal’s editors or for the editors and the manuscript’s authors. Reviewers now have the option to upload any accompanying files to either an editors only section or an editors and authors section. With this new feature, the intended audience of each reviewer attachment should be clear, helping to avoid back and forth between editors and reviewers as well as the potential of editors forgetting to share attachments intended for the author.
It’s also easier for editors to access the manuscript files that they need. We know that downloading manuscripts with multiple attachments can be cumbersome, so we’ve made it possible to download a manuscript and all of its accompanying files with one click, in addition to the ability to download individual files. Now, when editors go to a manuscript’s work area they will see a “download all files” link. Click the link and get everything you need!
Declaration of Rights and Principles to Transform Scholarly Communication
- No copyright transfers.
- No restrictions on preprints.
- No waivers of OA Policy.
- No delays to sharing.
- No limitations on author reuse.
- No impediments to rights reversion.
- No curtailment of copyright exceptions.
- No barriers to data availability.
- No constraints on content mining.
- No closed metadata.
- No free labor.
- No long-term subscriptions.
- No permanent paywalls.
- No double payments.
- No hidden profits.
- No deals without OA offsets.
- No new paywalls for our work.
- No non-disclosure agreements.
It is my great honor to inform you that the Journal for ReAttach Therapy and Developmental Diversities is included in the EPrints digital repository.
What is EPrints?
EPrints is free, open-source repository software developed at the University of Southampton, widely used by universities and research organisations to build open access repositories that make scholarly outputs freely available online.
By: Amy Rees, Customer Support Manager at Altmetric
What and why: does article performance data matter?
With an uptick in the number of journals being published, journals can now use article performance data to provide added value for researchers, encouraging them to publish with their organisation and rewarding them for having done so. This additional value can manifest in different ways, from raw numbers such as downloads to more nuanced data such as discussions of an article in policy documents.
Collating different types of article performance data allows authors to see rich data associated with their publications. Authors can be rewarded for their community outreach and engagement, seeing payoff for the fruits of their labour.
The article performance data also provides a wider picture of how the research is being received and presented online. This allows authors to consider the questions: How are mainstream sources presenting their research? How are laypersons reacting to the research? How are governmental and non-governmental organisations using the research in the “real world”?
One facet of article performance data is the collection and presentation of altmetrics. Altmetrics, or alternative metrics, describe non-traditional attention to scholarly outputs. Altmetrics as an idea covers a wide range of types of attention: news stories, shares and mentions on social media, references from government policy documents or patents, and much more.
Designed to be complementary to traditional bibliometrics (citations between articles, for example), altmetrics can provide a much more immediate, richer picture of who is engaging with a piece of research, and how it was received.
Recognising author engagement efforts
Providing altmetrics to authors, an approach which has become increasingly commonplace amongst academic publishers, can help not only encourage them to disseminate the results from their research, but also to see the positive effects of doing so. Outreach and engagement by an author is a key factor in increasing the type of attention paid to a publication or the quantity of engagement. As authors take the time to blog, engage with readers on Twitter, do interviews with news programs, and address comments on public peer review forums, they are creating a conversation about their work that is not without considerable effort.
Beyond viewing the impact in the context of journal performance, this type of dissemination also allows for audiences outside of academia to develop their understanding of key issues that impact society. The effort authors put into making their research available and easy to understand is not without benefit to the academic community, whose funders often rely on public donations or whose institutions may seek to raise their profile in a specific field.
Undertaking this kind of broader engagement, and tracking its outcomes, is also increasingly used by individual researchers looking to demonstrate the influence of their work to potential funders, hiring committees, or as part of national research performance reviews.
Staying on top of the story
News coverage, now more than ever and whether true or false, dominates the public understanding of research. Popular science is discussed in major newspapers and dissected in opinion pieces. One journal article can be discussed in multiple news outlets within a short period of time, even with conflicting stories or perspectives. New research published in Science, “The spread of true and false news online”, indicates that false news stories are shared at a much higher rate than those based in truth. This means that “getting ahead of the story” is critical for authors and the communications teams that support them, to ensure their research is being properly positioned.
If an author doesn’t have access to this news data they might not see a misinterpretation of their research and miss the opportunity to respond or clarify. Further, they might also miss the opportunity to engage with an interested community.
The aggregation of news stories, one type of altmetrics, lets the author keep track of how their research is being positioned, so they can work with the publisher’s media/marketing team to correct any issues or highlight particular feedback.
Likewise, public peer review platforms such as Publons (the source used by Altmetric) allow researchers to see the peer reviews of their publication in an open format. This open data gives users a chance to respond to relevant criticism within their own field. Concerns about results, data collection, and other aspects of research can be addressed via public discussions. This encourages inter-group conversations about research and allows more direct feedback about the publication.
Expand data available to authors
Collating altmetrics data can be challenging and time consuming for authors. While a simple search online could highlight some of the news stories about a publication, it masks the effort necessary to find a complete picture. Some stories may be available, but a user is constrained by the search engine they are using and what it considers “important results”.
Taking the time to truly understand the attention and engagement associated with a publication can lead authors to arduous searching of multiple platforms and sources. By providing altmetrics, journals are pulling together a snapshot of the online attention available and bringing it into a single place, saving authors time and hassle.
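The idea of pulling mentions from many sources into a single snapshot can be sketched in a few lines. This is purely illustrative: the source names, mention lists, and the `collate_snapshot` helper are all hypothetical, not part of any real altmetrics API.

```python
def collate_snapshot(per_source_mentions):
    """Merge per-source mention lists into one summary of counts."""
    snapshot = {source: len(mentions)
                for source, mentions in per_source_mentions.items()}
    snapshot["total"] = sum(snapshot.values())
    return snapshot

# Illustrative data only: each source maps to the mentions found there.
mentions = {
    "news": ["Outlet A story", "Outlet B story"],
    "twitter": ["tweet 1", "tweet 2", "tweet 3"],
    "policy": ["policy document reference"],
}

print(collate_snapshot(mentions))
# {'news': 2, 'twitter': 3, 'policy': 1, 'total': 6}
```

The value for authors lies less in the arithmetic than in the gathering: each source in the dictionary represents a platform the author would otherwise have to search by hand.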
Altmetrics are also a great way to highlight sources that might not be available to authors otherwise. While some sources, such as Twitter and news, could be available to users, other sources are harder to find or simply unavailable to authors.
Finding mentions of a publication in policy documents is often a particular challenge for authors. Not all policy sources or organisations make their publications available in an easy-to-read format, such as PDF or Word documents. They can be buried in website archives in older formats or simply hard to locate.
Further, there is a practical issue with extracting policy references: where does an author even start? Most governments publish thousands of different policy documents per year, and it can be hard to know where to begin.
As with policy documents, finding references to journal articles in syllabi is nearly impossible for an individual academic. While authors may be aware of where their articles are used in their own institution, or perhaps within part of their sector, many other institutions may be using their research for teaching. This use can be invisible to researchers, who may not even realise the reach and breadth of their research.
Highlight readership and academic engagement
While engagement with laypersons provides valuable insight into dissemination, authors are likely also interested in which other authors and academics are reading and considering their publications. Readership data, such as that provided by Mendeley, shows who has saved a paper in their academic library to read or use in a future publication. It also provides geographic and discipline data for the readers who have saved the paper. Authors can then see where researchers are saving their publications, and in which disciplines. Additionally, Mendeley readership has been correlated, in some fields, with long-term citations.
Services such as F1000 (Faculty of 1000) allow users to see which academics have recommended their paper. This data lets users see that academics have not just saved the paper but have read it and deemed it of value. Though saving a paper does denote at least interest and possible future engagement, these recommendations show direct engagement with the paper and a public endorsement of its content.
Article performance data should be viewed as interlocking and complementary data, with altmetrics working together with more traditional sources such as downloads, views, and citations. While traditional citations take longer to accrue they represent an important part of the story for understanding the performance of an article.
As with any other article performance data, a high volume of citations does not necessarily mean agreement or quality. For example, the since-retracted paper by Andrew Wakefield et al. linking autism and the MMR vaccine has more than a thousand citations.
Similarly, download counts and views provide another type of article performance data, giving authors a sense of the immediate response to a paper. While a high number of downloads and views doesn’t necessarily denote agreement, it does display engagement with and attention to the publication.
Providing altmetrics to authors is more than saying “You have X number of news stories and Y number of Facebook posts”, though that can be valuable attention in itself. Altmetrics data allows journals to provide a more complete picture of the attention that has been paid to an author’s publication. From laypersons to science communicators to other academics and everyone in between, article performance data is a key source of valuable information for journals to provide to authors.
Article performance data is not only about addressing potential issues and positioning the research in the media; it is also about allowing authors to see the whole story. Providing a variety of different data allows authors to see the areas, both disciplinary and geographic, that have shown interest in their publication and to build connections with others who might be interested. In adding article performance data to the author package, a journal is not only giving an author data; it is showing the value of a publication beyond its appearance in that journal.