Important
This text is our team's initial draft for organizing internal code reviews. Our main focus is reviewing code for "one-off" analyses associated with a manuscript, as opposed to continuously reviewing a shared code library. In our case, this text is part of a wider set of principles for how we (want to) work, especially with code. Only the parts about code review and style are included below.
The text is published here to encourage experimentation and discussion with other teams. Feel free to copy and adapt it to your own needs. Note that we expect to change this text once we gain more practical experience.
Similar to peer reviews of our manuscripts, we do (internal) peer reviews of our code.
Note
In our lab, code reviews are credited as follows:
- You become co-author of the reviewed code. This should be stated wherever authorship is mentioned, e.g., within the code files or the manuscript contributions section.
- You become eligible for co-authorship of the project's manuscript(s). Co-authorship on a manuscript generally requires additional contribution, like discussing results, reviewing the manuscript, etc.
The primary goal of our code reviews is to check code correctness, relative to its intended purpose. Knowing the code's purpose, reviewers may find semantic or logical mistakes that lead to incorrect results, despite the code "running fine" from the computer's perspective. By investing in reviews, we minimize such costly mistakes and thus gain efficiency and reliability in the mid/long term.
As secondary goals, reviews may help us share knowledge and make our code more usable, efficient, and adaptable.
For each round of code review, you take one of two roles:
- Author: The person who wrote the code to be reviewed.
- Reviewer: The person* who reviews the code and suggests corrections.
(*Tools cannot review correctness, see tip below.)
Generally, pick a reviewer who is familiar with the project's topic, methods, and programming language. For projects with multiple members, pick another project member as code reviewer. For single-person projects, you may find an "independent" code reviewer among other lab members.
Tip
Tools like Large Language Models (LLMs) may assist with writing and reviewing code, but they cannot reliably assess correctness, as they lack understanding of the code's purpose.
- When using an LLM as a service, always consider data privacy laws, copyright, and professional ethics. If in doubt, only use services that comply with EU data privacy regulations.
- Never use LLM-generated code you don't understand, especially in critical contexts like your own scientific work.
This section offers some guidelines to make code reviews manageable and effective.
Consider doing multiple rounds of review for different project phases and/or code sections. The author and reviewer shall agree on a protocol that specifies the scope and time for each round.
Tip
It is best practice to document a review protocol within the project, e.g., as a Git branch and/or issue.
🏗️ Please contribute approaches that work for you into this section.
Below are some sensible defaults; modify them as needed.
- Scope:
- Agree on a goal; the default is to check code correctness, while other aspects of "good" coding style may be ignored.
- Agree on a list of code files/sections for review. For code that requires input or generates output, agree what inputs or outputs to use.
- Time: Agree on a suitable duration for the review round and set deadlines for yourself.
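As an illustration, a review-round protocol documented in a Git issue might look like the sketch below. All file names, dates, and people are hypothetical placeholders; adapt them to your project.

```markdown
## Code review — round 1 (analysis pipeline)

**Goal:** check correctness only; style issues are out of scope.
**Files in scope:** `analysis/preprocess.py`, `analysis/stats.py`
**Inputs/outputs:** run on `data/example_session.csv`; compare against
`results/stats_author.csv` and `figures/fig2_author.png`.
**Deadline:** review comments by YYYY-MM-DD.
**Author:** A. Author · **Reviewer:** R. Reviewer
```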
As code author, try to make the review efficient through good preparation.
- List the code sections you want to submit for a review round. Especially select important code sections and those that you are unsure about. Comment on why you selected these sections. Try to limit the scope of each round to what can be reasonably reviewed within the agreed time frame.
- Ensure that all submitted code has a high-level description of its purpose. This is usually best documented within the code itself. Without an explicit purpose, it is impossible to assess correctness.
- Provide example input data and the expected output for the code. For example, for code that generates a figure, include the underlying data and the figure that was generated on your machine.
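For instance, the high-level purpose can live in a module docstring at the top of the submitted script, naming the expected input and output. The file names and task below are hypothetical; this is only a sketch of the documentation style, not a prescribed format.

```python
"""Compute mean reaction times per condition.

Purpose: check whether reaction times differ between the 'congruent'
and 'incongruent' conditions of a (hypothetical) Stroop task.
Input:  example_rt.csv (columns: condition, rt_ms)
Output: dictionary mapping each condition to its mean RT in ms
"""
import csv
from collections import defaultdict


def mean_rt_per_condition(path):
    """Return {condition: mean reaction time in ms} from a CSV file."""
    sums, counts = defaultdict(float), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            sums[row["condition"]] += float(row["rt_ms"])
            counts[row["condition"]] += 1
    return {cond: sums[cond] / counts[cond] for cond in sums}
```

A reviewer who knows the stated purpose can now judge, for example, whether averaging raw reaction times (rather than, say, median or log-transformed RTs) is appropriate for the analysis.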
As a reviewer, try to focus only on the agreed review goals.
- Focus on mistakes of the code's logic based on its purpose. Prioritize severe errors that compromise the validity of results, like mistakes that undermine data collection (e.g., randomization errors) or the interpretation of results (e.g., statistical errors).
- If the code's purpose is unclear to you, avoid guessing; instead, ask the author for clarification.
- Avoid commenting on stylistic issues, unless agreed otherwise or they impair your (efficient) understanding. If you have trouble understanding the code, ask the author to improve the code style as a precondition for reviewing for correctness.
- You are generally expected to run the code on your machine, not just read through it. Compare the output on your machine to the output provided by the author.
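One simple way to compare outputs is to hash the files produced on both machines; identical hashes mean byte-for-byte identical files. Note that rendered figures often differ bitwise across machines even when the results agree, so comparing the underlying data files is usually more informative. The function names below are illustrative, not a prescribed tool.

```python
import hashlib
from pathlib import Path


def file_sha256(path):
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def same_output(author_file, reviewer_file):
    """True if the two output files are byte-for-byte identical."""
    return file_sha256(author_file) == file_sha256(reviewer_file)
```

If the hashes differ, inspect the files directly (e.g., with a diff tool for text outputs) to see whether the difference is a real discrepancy or mere environment noise such as floating-point rounding.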
Tip
Always review the code, never the author.
Performing or receiving a code review may feel annoying or stressful at first. Naturally, coding skills differ between team members. In short, we try to be excellent to each other and respect each other's time and effort.
- The author should submit code that reasonably adheres to the universal principles of "good" code.
- The author shall consider the reviewer's suggestions and credit their work:
- Reviewers become co-authors of the reviewed code, to acknowledge their non-trivial contribution. This should be stated wherever authorship is mentioned, e.g., within the reviewed code files, the project README, and the contributions section of associated manuscripts. Depending on the extent of the reviewer's contribution, you may clarify their role as "reviewer" or similar.
- Reviewers also become eligible for co-authorship of associated manuscripts, so they should be given the opportunity to contribute to the project's manuscript(s) beyond the code review.
- The reviewer shall limit their review within the agreed scope and phrase all suggestions respectfully.
- For each (perceived) mistake, the reviewer shall explain their reasoning and preferably sketch how to implement an improvement.
- We aim to write "good" code, as outlined by the universal properties of well-written code (Roth et al., 2025):
- Correct: The code is free from errors, ensuring accurate and expected outcomes.
- Usable: The code is easy to understand and use.
- Reliable: The code's output is consistent across different environments.
- Efficient: The code executes quickly and uses computational resources wisely.
- Adaptable: The code can easily be applied to problems similar to the one it was originally written for.
- We commit to switching regularly from "prototyping mode" (quick-and-dirty, results-driven, exploratory coding) to "development mode" (improving the code itself) throughout the research process. We acknowledge that switching between modes helps us write "good" code for ourselves and makes the code shared alongside our manuscripts easier for other researchers to understand and reproduce.
- Currently, we do not adopt a common style guide, but we encourage exploring possibilities and making suggestions for future style conventions.
- Roth, J., Duan, Y., Mahner, F. P., Kaniuth, P., Wallis, T. S. A., & Hebart, M. N. (2025). Ten principles for reliable, efficient, and adaptable coding in psychology and cognitive neuroscience. Communications Psychology, 3(1), 62. https://doi.org/10.1038/s44271-025-00236-3