Perhaps one of the most impactful outcomes of having software thoroughly reviewed by a community of peers, whether or not it results in a journal publication, is the improvement of the software itself. One prominent exemplar of a community-led software review system is the software onboarding process (https://ropensci.org/blog/2017/09/01/nf-softwarereview/) created by the rOpenSci project (https://ropensci.org/), a non-profit initiative that promotes reproducible research through software development, advocacy, and community outreach.
rOpenSci follows an open peer review model, and the entire process is designed to be non-adversarial and constructive. The system blends elements of traditional academic peer review (relying on external peers) with practices from software code review, such as automated checks that ensure a basic standard of code quality and completeness. The review process is not designed to serve as a gatekeeper (once a submission is deemed in scope, it is almost never rejected) but rather as a quality filter, with the explicit goal of making software robust and of elevating and standardizing software development practices in the research community.
Since April 2015, the rOpenSci community has reviewed 121 software packages and engaged 149 reviewers, and these reviews have fast-tracked 42 publications into the Journal of Open Source Software and 5 publications into Methods in Ecology and Evolution (a Wiley journal).
Advantages of this approach
- The open peer review, which operates through GitHub issues on a public repository, has been critical to ensuring that the process is welcoming and constructive for everyone involved. Surveys of authors and reviewers suggest that participants find the process enjoyable (https://ropensci.org/blog/2018/04/17/author-survey/). The code of conduct, together with community norms, makes it difficult to level unsubstantiated criticism. Because these threads are public, experts who are not involved in a particular review can also weigh in.
- The rOpenSci editorial team provides detailed reviewing guidelines. Some are higher-level and language agnostic, and many of these have inspired the checklist used by the Journal of Open Source Software; they include checking for an open source license and ensuring the presence of a testing framework. Other guidelines are at a much lower level, including recommendations that are language specific (rOpenSci primarily accepts software written in the R statistical language), user centric (who is the target audience), or tied to the nature of the application. Perhaps the most interesting aspect of this process is that it continually evolves as software testing frameworks, community best practices, and tools change over time; the editorial team updates the guidelines based on community feedback.
- By relying heavily on code review tools (many of them customized for rOpenSci's needs), numerous routine checks have been automated (see the sketch after this list). This considerably reduces the burden on reviewers, allowing them to focus on aspects of software quality that cannot easily be automated, such as human-friendly documentation, software design, and integration with researcher workflows.
- Although rOpenSci encourages authors to archive their software in permanent repositories such as Zenodo, it is ultimately not a publisher. It has, however, partnered with journals and made reviews transferable: authors seeking a publication can elect to have the review transferred to the Journal of Open Source Software or Methods in Ecology and Evolution, where the software is not sent out for additional review and the paper is fast-tracked for publication. This presents a model for specialized communities to take charge of an activity such as software review and partner with journals for publication.
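To make concrete the kinds of routine checks that lend themselves to automation, the sketch below runs three commonly used R packages (rcmdcheck, covr, and goodpractice) against a package submission. This is an illustration of the general approach rather than rOpenSci's actual customized tooling, and the package path shown is hypothetical.

```r
# Illustrative sketch: automated pre-review checks for an R package submission.
# These are general-purpose community tools, not rOpenSci's customized system.

library(rcmdcheck)    # run R CMD check programmatically
library(covr)         # measure unit test coverage
library(goodpractice) # flag common package-quality issues

pkg_path <- "path/to/submitted/package"  # hypothetical submission directory

# 1. Completeness: the package should pass R CMD check without errors.
check_results <- rcmdcheck::rcmdcheck(pkg_path, quiet = TRUE)
stopifnot(length(check_results$errors) == 0)

# 2. Testing framework: tests should exist and exercise most of the code.
coverage <- covr::package_coverage(pkg_path)
message("Test coverage: ", round(covr::percent_coverage(coverage), 1), "%")

# 3. Broader heuristics: code style, complexity, and common anti-patterns.
print(goodpractice::gp(pkg_path))
```

In a review system built along these lines, such checks would typically run automatically when a submission is opened, so that human reviewers only spend time on packages that already meet a baseline of quality and completeness.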
Challenges of this approach
- One of the biggest challenges to setting up a process such as this is the limited supply of skilled reviewers. Reviewers need not only domain expertise but also a working knowledge of software development (much like Pi-shaped researchers, https://jakevdp.github.io/blog/2014/08/22/hacking-academia/). The time required to review a software package is not that different from the time required to review a paper (https://ropensci.org/blog/2018/05/03/onboarding-is-work/), although there is not yet enough data to show how this changes with the size of the codebase.
- rOpenSci only reviews software that it intends to add to its own collection, so many types of research software are out of scope. This constraint is necessary for the process to scale, but it leaves few options for software in other languages or domains to receive a similarly detailed review.