Distributed Peer-Review

Distributed peer review is a method in which applicants evaluate each other’s proposals. It promotes equity, transparency, and inclusion, reduces the burden on traditional reviewers, and values the expertise of participants by making them active contributors to the assessment process.
Level 4
Challenge - Inclusivity
Challenge - Process Culture
Challenge - Collaboration
CoARA Commitment 2
CoARA Commitment 6
CoARA Commitment 10
User - Funders
User - Institutes
User - Meta-Researchers
Contributor

Experiments in Assessment WG

Last updated

March 10, 2026

Objectives and potential outcome

This experiment addresses key challenges in research assessment: it promotes inclusion and equity through broader participation, mitigates bias by diversifying the reviewer pool, and fosters collaboration by engaging applicants directly in the peer review process. In doing so, applicants gain insight into evaluation criteria and build mutual understanding within the research community.

Goals (what are we trying to change)

  • Improve inclusivity and transparency: Make the review process more accessible and participatory, reducing reliance on closed senior reviewer networks.

  • Address reviewer scarcity: Involving applicants as reviewers overcomes recruitment challenges and distributes workload more evenly.

  • Enhance fairness in interdisciplinary contexts: In multi-domain calls, the diversity of the review pool increases, enabling broader, multidimensional evaluations.

  • Foster better understanding of the review process: Applicants who review become more familiar with the criteria, improving their proposal writing and self-assessment skills over time.

  • Reduce reviewer fatigue: Sharing review responsibilities among all participants prevents overburdening a small group of traditional reviewers.

  • Ensure domain expertise: In focused calls, the applicant pool is knowledgeable, providing relevant expertise for peer review.

  • Accelerate the review process: Since all participants have a stake and understand the criteria, the process can be completed more rapidly.

How does an experiment with distributed peer review work? During an application round, applicants review each other’s proposals, giving feedback and recommendations for acceptance or rejection. Quality control through peer review is thus carried out entirely by those who have submitted a proposal.
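As an illustrative sketch only (not a prescribed implementation — the function name and parameters are hypothetical), the core assignment step described above could look like this: each applicant’s proposal is given to a fixed number of other applicants, no one reviews their own proposal, and the workload is balanced.

```python
import random

def assign_reviews(applicants, reviews_per_proposal=3, seed=0):
    """Assign each applicant's proposal to other applicants for review.

    Returns a dict mapping each reviewer to the list of proposals
    (identified here by their authors) they must review. No applicant
    reviews their own proposal, and the load is spread evenly.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible assignment
    assignments = {a: [] for a in applicants}
    for proposal_author in applicants:
        # Everyone except the author is eligible to review this proposal.
        eligible = [a for a in applicants if a != proposal_author]
        # Prefer reviewers with the lightest current load; break ties randomly.
        eligible.sort(key=lambda a: (len(assignments[a]), rng.random()))
        for reviewer in eligible[:reviews_per_proposal]:
            assignments[reviewer].append(proposal_author)
    return assignments
```

In a real call, the eligibility filter would also exclude declared conflicts of interest, and assignments would typically be checked against expertise keywords before being finalized.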

Research domains

This experiment can be applied across all research domains, particularly in contexts where applicant expertise aligns with the call topic. It involves a distributed peer review model in which applicants evaluate each other’s proposals, ensuring that quality control remains within a knowledgeable and invested community of peers.

Context and considerations

This experiment can fit any target group where assessment involves several similar applications and where applicants have at least minimal expertise in each other’s domains. This includes:

  • Funders: looking for innovative, scalable evaluation models to reduce review fatigue and improve equity.
  • Institutions: especially those conducting internal or competitive funding calls, or managing doctoral/postdoctoral fellowships.
  • Thematic consortia or networks: where members share disciplinary language and goals, easing implementation.

Potential variables that may have an influence on the distributed peer review process and may thus be valuable elements to experiment on include the identity of reviewers (e.g., anonymity vs. known identities), the size and diversity of the reviewer pool, and the language used in proposals and reviews, which may affect accessibility, clarity, and fairness across different linguistic or cultural backgrounds.

Suggestions on how to implement

Pilot in Small-Scale Calls:
Start with small funding schemes or thematic calls (e.g. internal grants, early-career researcher awards), where the administrative complexity and risks are lower. This allows for iterative refinement.

Focus on Single-Domain or Thematic Areas:
Use single-discipline or well-defined thematic areas to ensure a more homogeneous applicant pool with shared evaluation norms and language, reducing ambiguity and potential bias.

Run Shadow Experiments:
Before full implementation, run parallel (shadow) experiments alongside traditional reviews. Compare results to evaluate feasibility, reliability, and stakeholder satisfaction.

Integrate Peer Review Training:
Provide training modules to help applicants understand the evaluation criteria and write constructive feedback. This is particularly useful for Early Career Researchers, improving both their evaluative skills and grant writing.

Implementing distributed peer review requires careful planning and resourcing:

  • HR and Coordination Costs:
    • Project Manager: to coordinate implementation, timeline, logistics.
    • Administrative Support: for applicant communication, managing review assignments.
  • Legal and Ethical Considerations:
    • Legal Advisers: to ensure GDPR compliance, particularly around anonymization and data handling.
    • Develop terms of participation and conflict of interest policies.
  • Technical Infrastructure:
    • Platform or Tool: for proposal submission, anonymized review distribution, and scoring.
    • Ensure User Experience (UX) is intuitive and accessible.
  • Training and Support:
    • Develop or license training materials for applicants as reviewers.
    • Consider helpdesk support during the review period.
  • Communication and Engagement:
    • Clearly explain the purpose, process, and benefits to participants.
    • Highlight how reviews are used and how quality is monitored to foster trust.

Challenges and mitigations

1. Potential reviewer bias
Challenge: Reviewers may consciously or unconsciously favor or penalize proposals.
Mitigation: Implement a double-blind review process where both reviewer and applicant identities are anonymized. Provide bias-awareness training as part of the review onboarding.

2. Conflicts of interest in small domains
Challenge: In niche fields, applicants may know each other, creating risks of favoritism or retaliation.
Mitigation: Require conflict of interest declarations, and use automated tools to help detect potential overlaps. Enable applicants to flag conflicts during the assignment process.
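The automated overlap screening mentioned above could start as simply as comparing applicant records for shared affiliations or co-authors. The sketch below assumes a hypothetical record schema (plain dicts with `institution` and `coauthors` fields); production systems would typically draw on co-authorship databases instead.

```python
def flag_conflicts(reviewer, applicant):
    """Return a list of potential conflict-of-interest reasons between
    a reviewer and an applicant (illustrative record schema)."""
    reasons = []
    if reviewer["institution"] == applicant["institution"]:
        reasons.append("same institution")
    # Recent shared co-authors are a common proxy for close collaboration.
    shared = set(reviewer["coauthors"]) & set(applicant["coauthors"])
    if shared:
        reasons.append(f"shared co-authors: {sorted(shared)}")
    return reasons
```

Flagged pairs would then be excluded from the assignment step or escalated for manual checking, alongside self-declared conflicts.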

3. Transparency of competition
Challenge: Ensuring that the review process remains fair and trustworthy.
Mitigation: Provide clear evaluation criteria, publish aggregated review statistics (e.g., number of reviews per proposal, average scores), and offer post-review debriefings when possible.

4. Fostering motivation for high-quality reviews
Challenge: Applicants may see the task as a burden or strategic opportunity, leading to superficial reviews.
Mitigation: Introduce incentives, such as recognition, certificates, or bonus points for quality reviews. Consider meta-reviews or calibration scores to assess and reward reviewer effort and reliability.

5. Quality control of the peer review
Challenge: Ensuring consistency and fairness across multiple reviewers.
Mitigation: Introduce a review moderation system or involve meta-reviewers who assess the quality of peer reviews. Use rubrics and training materials to standardize expectations.

Evaluating success

How to assess effects/outcomes – Data to collect

  • Results of Shadow Experiments
    • Compare outcomes of distributed peer review with traditional review within the same call.
    • Analyze agreement rates between the two methods (e.g., overlap in selected proposals).
  • Review Quality Metrics
    • Use meta-evaluation forms to assess reviews for accuracy, thoroughness, and constructiveness.
    • Involve expert reviewers or independent panels for spot quality checks.
  • User Feedback and Surveys
    • Collect post-call surveys on:
      • Experience as both reviewers and applicants
      • Perceptions of fairness and transparency
      • Perceived workload
      • Understanding of evaluation criteria
  • Evaluation Timelines
    • Track overall process speed: average time from submission to notification.
    • Compare with timelines of similar calls evaluated by traditional methods.
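The agreement-rate analysis listed under “Results of Shadow Experiments” can be computed as the overlap between the sets of proposals each method would fund — a minimal sketch; real analyses might also correlate scores or rankings.

```python
def agreement_rate(selected_a, selected_b):
    """Overlap between two funding decisions, measured as the
    Jaccard index of the selected-proposal sets (1.0 = identical)."""
    a, b = set(selected_a), set(selected_b)
    if not (a | b):
        return 1.0  # trivially agree when neither method selects anything
    return len(a & b) / len(a | b)
```

For example, if the distributed process selects proposals {p1, p2, p3} and the traditional panel selects {p2, p3, p4}, the agreement rate is 0.5.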

Relevant resources and literature

Stafford, T., Pinfield, S., Butters, A., & Benson Marshall, M. (2025). RoRI Insights: Applicants as reviewers – a guide to distributed peer review. Research on Research Institute. Report. https://doi.org/10.6084/m9.figshare.29270534.v1

Pearson, H. (2025). How to speed up peer review: make applicants mark one another. Nature, 643(8071), 313–314. https://doi.org/10.1038/d41586-025-02090-z

Case examples and literature

The Volkswagen Foundation has been experimenting with distributed peer review for some time. More information can be found at: https://www.volkswagenstiftung.de/en/news/interview/volkswagen-foundation-experiments-new-peer-review-method

The Humboldt Foundation has been experimenting with a linked concept, namely Review Circle. In Review Circles, instead of two separate written reviews per application, a group of six to ten reviewers compare and discuss several applications on a protected digital platform. More information can be found at: https://www.humboldt-foundation.de/en/explore/figures-and-statistics/evaluation/evaluation-of-the-2022-peer-circle-experiment

Iterative assessments, in which proposals may be accepted, rejected, or returned with feedback for the investigators to respond to before being reconsidered for funding, are also being explored through the Indigenous funding space at the Canadian Institutes of Health Research (CIHR). A similar process, called UCR (Under Continuing Review), was formerly used in the Randomized Controlled Trials Committees of CIHR’s old Open Operating Grants Program: if the committee had simple questions that could make or break a proposal, it could rate the application provisionally, conditional on a satisfactory response. If the application then fell within the funding cut-off, the applicant would receive the question(s) and have five business days to respond. A satisfactory response led to funding; otherwise the application was deemed unfundable and had to be resubmitted in a future competition.

The University of Antwerp is working on comparative judgement (e.g. D-PAC), a concept that could be relevant when experimenting with distributed peer review.

More experiments in which applicants for funding review each other’s proposals can be found at the Research on Research Institute (RoRI): https://researchonresearch.org/volkswagen-distributed-peer-review/

Comments/lived examples

Astronomy organizations (e.g. the European Southern Observatory) use distributed peer review for awarding telescope observation time: https://doi.eso.org/10.18727/0722-6691/5147