CVPR 2026 Reviewer Training Material
This training document complements the CVPR 2026 Reviewer Guidelines and is adapted from the CVPR 2024 Reviewer Slides prepared by the CVPR 2024 Program Chairs and Senior Advisor David Forsyth.
Our Principles
We want to make the best decisions we can to serve the community. Our goal is to be fair, thoughtful, and consistent. We also want our decisions to be transparent—even if an author isn't happy with the outcome, they should be able to see clearly why that decision was made. We aim to keep policies clear and understood by everyone, and by doing so, we hope to minimize confusion, frustration, and appeals.
Your Role as a Reviewer
Your job is to make well-reasoned recommendations that help Area Chairs (ACs) decide which papers should be accepted to CVPR. You provide recommendations, not decisions—the final acceptance decisions are made by ACs, Senior Area Chairs, and Program Chairs based on your input and the full context of submissions.
Your review serves two purposes: it helps decision-makers understand the paper's merits and weaknesses, and it provides constructive feedback to help authors improve their work.
Use your skill, judgment, and experience to guide your recommendations. Make sure authors understand the basis of your evaluation. Write reviews that clearly explain your reasoning and respond to rebuttals fairly and on time.
Treat everyone involved with fairness, compassion, and consistency. Support your opinions with evidence, and avoid making up your own policies. Maintain professional objectivity—focus on the work itself, not your assumptions about the authors.
Always act ethically and expect the same from others. If you notice any improprieties, report them through the proper channels. Avoid conflicts of interest at all times.
See also: Reviewing Process
What Should Be Accepted?
Recommend acceptance for papers that will interest CVPR attendees, are technically sound, and contribute meaningfully (methodologically, empirically, or conceptually). Balance novelty with significance and potential impact.
Key principles:
- Minor fixable issues should not drive rejection decisions. Focus on fundamental soundness and contribution.
- Scope judgments should be cautious; CVPR values breadth. If truly out of scope, explain clearly and suggest better venues.
- Consider reproducibility, data contribution, ethical data use, societal impact, and limitations discussions positively.
- A paper doesn't need to be perfect to be accepted—it needs to advance the field and meet quality standards.
See: What to Look Out For
Responsible and Timely Reviewing
CVPR 2026 strictly enforces the Responsible Reviewing Policy and Reviewing Deadline Policy.
Irresponsible reviews include:
- Short or generic text not tied to the paper's specifics
- Factual errors suggesting superficial reading
- Content generated or substantially assisted by LLMs
Consequences: Irresponsible or late reviews may trigger desk rejection of all papers on which the reviewer is an author.
See: Reviewing Timeline
Ethics and Confidentiality
Protect anonymity, confidentiality, and the intellectual content of submissions. Do not share, reuse, or build on ideas from submissions. Destroy materials after the process.
Key requirements:
- Respect double-blind review and avoid author-identity searches
- Declare and avoid conflicts of interest promptly
- Handle human subjects and personal data with heightened scrutiny
- Never discuss submissions outside the review process
See: Ethics for Reviewing Papers
Large Language Models (LLMs)
LLMs must not be used to generate, translate, paraphrase, or otherwise compose review content. Sharing any part of any submission (the paper, quotes from the paper, captions, figures, etc.) with LLMs is strictly prohibited.
Permitted uses:
- Non-confidential background research on publicly available concepts or methods (not the submission itself)
- Grammar checks of short phrases you have already written (under ~50 words, containing no submission-specific content)
Prohibited uses:
- Inputting any text, figures, or content from the submission into an LLM
- Using LLMs to summarize, paraphrase, or analyze the paper
- Using LLMs to draft any portion of your review
- Using LLMs to translate the paper or your review
Enforcement: Reviewers who violate this policy may be barred from submitting to CVPR for two years. Report suspected prompt injection (hidden instructions embedded in papers, e.g., invisible text directing an LLM to produce a positive review) to AC/PCs immediately.
See: FAQs for Reviewing Papers
Review Workflow
Step 1: Preparation
- Inspect your stack promptly for conflicts and suitability
- Review your stack holistically and plan your time wisely. Allocate effort based on each paper's complexity and scope.
- Inform your AC immediately if reassignment is needed
- Review relevant policies and guidelines
- If you suspect a policy violation, alert the Chairs but review as if no violation occurred (they will investigate separately)
Step 2: Careful Reading and Brief Summary
- Read the paper thoroughly, taking notes as you go
- Draft a short, accurate summary in your own words (2-4 sentences)
- This summary helps reveal gaps in your understanding—if you can't summarize it clearly, read again
- Assess correctness, completeness, clarity of contribution, and positioning versus prior work
- Identify the paper's core claims and evaluate the evidence supporting them
Step 3: Writing the Review
Your review should include:
- Concise summary (2-4 sentences covering what the paper does and claims)
- Strengths (specific, evidence-based points)
- Weaknesses (specific, evidence-based concerns)
- Constructive suggestions (how authors could address weaknesses)
- Justified recommendation (clearly connecting your assessment to your rating)
Important guidelines:
- Back "done before" claims with specific references and explain the relationship to prior work
- Do not over-weight SOTA tables alone—consider experimental design, insights, and broader contributions
- Keep tone professional; address the work, not the people: write "the paper" rather than "you" or "the authors"
- Be specific and concrete; avoid vague statements like "the paper is interesting" or "the results are good"
Check your review for standard errors:
- Ignorance and inaccuracy (unverified technical claims)
- Pure opinion (subjective preferences without rationale)
- Novelty fallacy (equating novelty with quality or vice versa)
- Blank assertions (unsupported claims about prior art or importance)
- Policy entrepreneurism (inventing requirements not in policy)
- Intellectual laziness (over-reliance on single metrics)
See: How to Write Good Reviews
Step 4: Rebuttal and Discussion
- Read the rebuttal carefully and engage substantively
- Update your ratings and review if the rebuttal addresses your concerns
- Explain what changed your mind and why
- Do not demand substantial new experiments in rebuttal; small, reasonable checks are acceptable
- Focus on clarifications and addressing misunderstandings rather than requesting new work
Step 5: Finalization
- Participate in AC-reviewer discussions
- Ensure your reasoning is clear to the AC
- Submit your final rating and well-justified final recommendation by the deadline
- Your final justification should be specific, evidence-based, and clearly connected to your rating
Writing Effective Final Justifications
Your final justification should clearly explain your recommendation and how you weighed the paper's strengths and weaknesses.
Unacceptable Final Justifications
Generic or vague statements:
- "The paper is good overall."
- "The work is okay but not outstanding."
- "The paper seems solid, so I recommend accept."
- "I don't feel excited about it."
Contradictory or unsupported logic:
- "The paper is technically sound, but I recommend reject."
- "The method is novel and performs well, but I'm not convinced."
- "The paper has potential, but I still give a low score."
No explanation for change after rebuttal:
- "The rebuttal addresses my concerns, so I increase my score." (What concerns? How?)
- "The rebuttal doesn't change my opinion, so I keep my score." (Why not? What was unconvincing?)
Good Final Justification Examples
Strong Accept: "The paper makes a significant contribution to few-shot learning by introducing a novel meta-learning framework that substantially outperforms existing methods across multiple benchmarks. The theoretical analysis provides valuable insights into why the approach works (see Figure x), and the extensive ablations (see Tables x and y) demonstrate robustness. The rebuttal successfully addressed concerns about computational cost by providing detailed timing comparisons. The work will be of broad interest to the CVPR community and likely to inspire follow-up research. I maintain my rating of Strong Accept."
Weak Accept: "The paper presents a technically sound approach with clear motivation and well-designed experiments (see Section x). While it does not outperform all existing methods (see Table x), it provides valuable insights into the trade-offs between efficiency and accuracy in semantic segmentation. The rebuttal clarified the novelty compared to [Smith et al., 2025] by highlighting the different architectural choices and their impact on inference speed. The contribution is solid though incremental. I maintain my rating of Weak Accept."
Borderline: "The paper tackles an important problem and proposes a reasonable solution, but the evaluation is limited to two datasets and does not include comparisons with recent work [Jones et al., 2024]. The rebuttal provided results on a third dataset, which improves confidence in generalization, but the comparison gap remains. The method is technically sound but the contribution feels incremental given existing work in this area. The paper is on the borderline—it would strengthen acceptance if published but wouldn't be a major loss if rejected. I maintain Borderline."
Weak Reject: "While the paper addresses a relevant problem, the proposed method lacks sufficient novelty compared to [Chen et al., 2025], which uses a very similar architecture with comparable performance. The main difference appears to be the dataset rather than the method itself. The rebuttal argued that their training procedure differs, but this seems like an implementation detail rather than a conceptual contribution. The results are solid but not compelling enough to overcome the limited novelty. I maintain Weak Reject."
Reject: "The paper has fundamental technical issues that were not adequately addressed in the rebuttal. Specifically, the loss function in Equation 3 does not properly account for class imbalance, which likely explains the poor performance on minority classes shown in Table 2. The rebuttal claimed this was addressed by weighting, but no weighted results were provided. Additionally, the paper omits comparisons with standard baselines [Zhang et al., 2024; Liu et al., 2025] that are directly relevant. Without these comparisons and a fix to the technical issue, the contribution cannot be properly assessed. I maintain Reject."
Working with Your Area Chair
Your Responsibilities to the AC
- Provide a clear, well-reasoned recommendation
- Explain the evidence and reasoning behind your assessment
- Read rebuttals carefully and update your review if warranted
- Participate actively in discussions during the decision phase
- Respond to AC questions promptly
- Flag any concerns about policy violations or ethical issues immediately
What You Can Expect from the AC
- Help resolving conflicts of interest or assignment issues
- Guidance on policy questions or unusual situations
- Coordination with Senior Area Chairs (SACs) and Program Chairs (PCs) on serious issues
- Support if you're struggling with a paper outside your expertise
Communication Guidelines
- Contact your AC early if you need reassignment or have concerns
- Be responsive during the discussion phase—ACs need your input to make decisions
- Remember: the AC knows who you are. A sloppy or irresponsible review reflects poorly on you and may affect future reviewing opportunities
Common Reviewing Pitfalls
Almost all reviewing errors stem from a combination of laziness ("why should I check? I'm busy!") and self-importance ("why should I bother explaining myself? I'm an expert"). Ground your judgments in evidence, citations, and clear reasoning. Avoid sarcasm or dismissive tone.
| Error Type | Bad Practice | Better Practice |
| --- | --- | --- |
| Ignorance / Inaccuracy | "This theorem is false" without justification. | "I believe there may be an issue with Theorem 1. Specifically, the assumption in line 3 that X is positive seems to require additional constraints, as counterexample Y suggests. Could the authors clarify?" |
| | "The dataset is too small to be valid" without checking its scope. | "The dataset contains 5K images. While this is smaller than benchmarks like ImageNet, it appears sufficient for the fine-grained classification task being studied. However, ablations on dataset size would strengthen claims about generalization." |
| Pure Opinion | "CNNs are not interesting." | "While CNNs are well-established, this paper's contribution lies in [specific innovation]. The relevance depends on whether the community values [specific aspect]." |
| | "This problem isn't exciting anymore." | "This problem has been extensively studied [cite examples]. The paper would be strengthened by clarifying what challenges remain unsolved and why this approach addresses them." |
| Novelty Fallacy | "It must be accepted because it's novel." | "The approach is novel in combining X and Y. However, the empirical gains are modest (2% improvement), and the paper would benefit from analysis of when and why this combination helps." |
| | "It's just a small tweak, so it shouldn't be accepted." | "While the modification to [existing method] is incremental, the paper provides valuable insights into [specific aspect] and achieves strong results with lower computational cost." |
| Blank Assertions | "This has been done before." (no citations) | "The approach shares similarities with [Smith et al., 2024] and [Jones et al., 2025]. The key differences appear to be [X and Y]. The relationship to this prior work should be clarified." |
| | "Everyone knows this doesn't work." | "Similar approaches have shown limited success on [benchmark/task], see [citations]. It would strengthen the paper to discuss why this formulation might overcome previous limitations." |
| Policy Entrepreneurism | "You must beat SOTA on all benchmarks." | "The method shows competitive performance on [benchmark A] but underperforms on [benchmark B]. Understanding this performance gap would strengthen the contribution." |
| | "You must release code to be accepted." | "Code release would benefit reproducibility, though it is not required. If code cannot be released, more implementation details would help." |
| Intellectual Laziness | "Beats SOTA ⇒ accept." / "Fails SOTA ⇒ reject." | "The method achieves SOTA on [benchmark], but the gains come primarily from using more training data. The architectural contributions should be evaluated more carefully through controlled experiments." |
| | "The result difference is small, so it's not worth publishing." | "The quantitative gains are modest (1-2%), but the paper provides insights into [specific aspect] and proposes a more efficient approach that reduces inference time by 40%." |
Final Review Checklist
Before submitting your review, verify that strong criticisms are backed by specific evidence:
- "Incorrect mathematics": point to the specific equation or step and explain the error
- "Omitted material" (experiments, context, citations): name exactly what is missing and why it matters
- "Done before": cite the specific prior work and explain the relationship
When to Escalate
Immediately alert your AC (who will coordinate with SACs and PCs) if you suspect:
- Plagiarism or self-plagiarism
- Dual submission to multiple venues
- Unethical data use or privacy violations
- Fabricated results or falsified experiments
- Violation of withdrawn-dataset norms
- Prompt injection or LLM abuse
- Violations of human subjects research protocols
Do not investigate independently. Report your concerns and let the appropriate committees handle the investigation.
FAQ for Reviewers
Q: How much time should I spend on each review?
A: It depends on the paper. As a rough guide, plan for 4-7 hours per paper: 1-2 hours for reading, 2-3 hours for writing the review, and 1-2 hours for rebuttal response and discussion. Complex papers can take considerably longer to understand.
Q: What if I'm assigned a paper well outside my expertise?
A: Contact your AC immediately. You can still review it if you're comfortable, but clearly state your expertise limitations in your review. The AC may assign an additional reviewer with more specific expertise.
Q: Can I search for the authors' identities to assess their credibility?
A: No. This violates double-blind review. Assess the work on its own merits.
Q: What if the paper violates anonymity (e.g., includes identifying information)?
A: Report it to your AC, but continue to review the work itself. The AC and PCs will determine if the violation warrants action.
Q: Should I check if the paper is on arXiv?
A: You may come across it naturally, but do not actively search for it. If you find it, do not let its venue (arXiv, workshop) or reception bias your review.
Q: How should I handle a rebuttal that promises to "fix everything" but doesn't provide evidence?
A: Evaluate based on what the rebuttal actually demonstrates, not what it promises for the camera-ready version. Promises without evidence should not change your assessment.
Q: What if I disagree with other reviewers?
A: That's normal and valuable. Clearly explain your perspective and reasoning. Engage constructively in the discussion. The AC will synthesize different viewpoints.
Q: Can I use an LLM to help me understand a complex mathematical concept in the paper?
A: You can research the general concept (e.g., "How does implicit differentiation work?") without sharing any content from the submission. You cannot input equations, text, or any other content from the paper into an LLM.
Quick Reference Card
Key Deadlines
- Paper assignment: Dec 15, 2025
- Review submission: Jan 8, 2026
- Rebuttal period: Jan 22 - Jan 29, 2026
- Discussion period: Jan 30 - Feb 5, 2026
- Final recommendations: Feb 5, 2026
See: Reviewing Timeline
Dos and Don'ts
DO:
- ✓ Ground judgments in evidence and citations
- ✓ Be specific and constructive
- ✓ Engage substantively with rebuttals
- ✓ Flag conflicts of interest immediately
- ✓ Respect confidentiality and anonymity
- ✓ Meet all deadlines
DON'T:
- ✗ Use LLMs to generate or analyze review content
- ✗ Share submission content with anyone
- ✗ Search for author identities
- ✗ Invent policy requirements
- ✗ Make unsupported assertions
- ✗ Submit generic or superficial reviews
Contact for Issues
- Assignment problems or conflicts: Contact your Area Chair
- Policy questions: Contact your Area Chair
- Ethical concerns: Contact your Area Chair (who will escalate to PCs)
- Technical problems with the OpenReview system, e.g., merging multiple OpenReview accounts: Contact info@openreview.net or use the online form
Remember: Quality reviews serve both the community and the authors. Your careful, thoughtful work helps ensure CVPR publishes excellent research and helps authors improve their work—whether or not it's accepted.