Pick your own reviewers

Applicants recommend peer reviewers for their projects, who are then included in the evaluation process. This approach reduces the burden on evaluation bodies and potentially allows individuals with deep subject knowledge to participate. It can also bring in users of research and non-traditional peer reviewers, helping to diversify the reviewer pool.
Level 0
Challenge - Process Culture
CoARA Commitment 1
CoARA Commitment 2
User - Funders
User - Institutes
User - Academies
User - Research Groups
User - Scientific editors and publishers
Contributor

Experiments in Assessment WG

Last updated

March 10, 2026

Objectives and potential outcome
  • Empower researchers and research institutes to take ownership of their own assessment
  • Foster transparency in the evaluation process
  • Add potential benefits to the review process (e.g. building collaborative links between evaluators and applicants)
  • Diversify the reviewer pool along equity and subject-matter grounds
  • Broaden who is considered an expert reviewer beyond traditional academic profiles

Research domains

All domains; not specific to any one domain

Context and considerations

Implementation suggestions:

  • Decide assessment process logistics before starting (virtual vs. in person, timings, etc.)
  • Depending on the scope of the assessment, determine timing and intermediate steps (e.g. feedback to applicants)
  • Determine whether certain stakeholders have veto rights over who is chosen as an evaluator
  • Note that this approach does not necessarily save administrative resources; effort shifts to other parts of the process than searching for evaluators
  • Provide clear guidance and limitations in support of a diversified and inclusive peer reviewer pool
  • Ask applicants to suggest more reviewers than needed, to allow for rejections
  • Determine how many of the suggested peer reviewers to add to the actual pool of evaluators
  • Alternatively, ask applicants to name reviewers who should not be contacted
  • Consider people who are not researchers but are users of research, or who have other relevant profiles from other sectors of society
  • Ensure the community of assessors reflects the community of applicants
  • Be open and transparent: make it clear to the community/applicants who is reviewing (depending on the scope of the evaluation)

Experiments can consider a variety of elements, including:

  • Who picks the reviewers
  • Level of assessment (individual, project, research group, institution)
  • Criteria for picking the reviewers (e.g. geographic, gender, etc.)
  • People who are not researchers but are users of research, or who have other relevant profiles
  • The definition of “peer”, which depends on the community of applicants (e.g. early career researchers should be evaluating early career fellowships)

Challenges and mitigations

Possible challenges:

  • Maximizing diversity in reviewer choice, rather than always picking the same people
  • Potential conflicts of interest, which must be checked carefully
  • Reviewers are sometimes confused by this process change and need onboarding
  • Biases due to known or unknown relationships (personal, professional, reputational)

Possible mitigations:

  • Create clear guidelines for how to suggest reviewers (this could increase diversity and reduce bias)
  • Communicate the experiment clearly to all stakeholders
  • Still double-check for conflicts of interest; keep the existing process and require reviewers to declare them
  • Provide clear instructions for the evaluators (e.g. bias awareness training)
  • For legal coverage, require applicants to declare that there is no conflict of interest
  • Consider who suggests each peer reviewer, their relationship to the applicant, and what they stand to gain
  • Make it clear to the community/applicants who is reviewing (depending on the scope of the evaluation)
  • If adjusting the weight given to different reviewers, be clear about the weighting in advance

Evaluating success

Diverse elements can be considered to evaluate the success of the experiment, including:

  • Diversity of the evaluator pool (geographic, gender, new vs. established partners, etc.)
  • Self-assessment feedback from applicants, including off-target effects (does this create a real benefit for the applicants?)
  • More non-research experts involved in peer review
  • Diversity of the awardees
  • More field-specific knowledge brought into peer review
  • Reduced time investment by funders/evaluation administrators to find peer reviewers
  • Science communication outcomes: more people will hear about the research and have a stake in it
  • A purpose-built indicator of assessment quality, based on the objective of the assessment

Relevant resources and literature

This section includes resources, literature, and reports relevant to this specific experimental idea.

Self-selected reviewers are a common practice in journal peer review. Although research assessment differs depending on what is being assessed (an individual, project, career, research output, scientific paper, etc.), the issues raised in early discussions of self-selected peer reviewers for paper submissions can offer useful insights for other types of assessment. Short articles from Retraction Watch on this topic are available here and here.

Templates from funders and institutions

Case examples and literature

Other resources

Comments/lived examples