Note: These reviewer guidelines were taken from those distributed for CVPR 2000. David Kriegman and David Forsyth, the CVPR 2000 Program Co-Chairs, are the source of this good description of what reviewers should do.

CVPR-2000: Program Committee and referee instructions

Thank you for agreeing to serve on the CVPR-2000 Program committee. You've committed to serve your community in a role that involves a great deal of work and doesn't attract much honour; furthermore, it's almost certain that your decisions will be disputed or disparaged. There are rewards; your decisions will make our conference technically excellent and will shape the development of our field. We'll recognise your service in an appropriate way.

Apart from cheering you up, this document is intended to start forming a consensus on a body of reviewing standards that are transparent to authors and practical for reviewers. As a result, it sounds a bit prissy.

Reviewing goals

Overall Goal: we feel that reviewers should review papers with the intention that no paper with merit be excluded from the conference (as opposed to ensuring that no paper without merit is included).

Posters vs. Papers: it is our expectation that most papers will be accepted as posters. Accepted papers will default to a poster presentation unless reviewers identify special features that make a paper appropriate for oral presentation.

Corrections: please don't feel you need to correct papers, unless you have the (very rare) experience of encountering a paper that is so important it must be accepted, but must be corrected. The authors might appreciate help, but your ability to offer it may be limited by the need to review all papers in time.

Tone: reviewers have strong feelings about areas; the benefit of being a program committee member is that you can express these feelings, and shape the development of an area. It is good practice to read your review from the perspective of the authors --- is it hurtful? or patronising? or abusive? or wrong?

PC member duties

We expect from each member of the PC:

Paper categories and reviewing standards

It is uncommon for a conference paper to change the field, and unreasonable to expect every conference paper to do so. Reasonable expectations of a paper are:

Notice that clarity is absent from this list, although it is a desirable property. It is unfair to reject a paper purely because it is unclear or difficult, and it could be dangerous to reject a paper because you don't understand it. Together with these general expectations, we see different kinds of papers meeting different kinds of standards.

Theory papers: should offer novel theoretical insights into one or more vision problems. Generally, very few papers in vision are theory papers, because most vision problems are problems of technique rather than theory. For example, a theory paper might offer a completely new view of the overall process of object recognition, and explain why that view is better than current thinking. It is unfair to criticize a theory paper for a lack of experimental results. This means that the burden on the authors to show that their theory offers substantial insights is high --- does it allow us to think about a problem in a new way that might be helpful? does it clarify why some problems are hard? An unattractive feature of theory papers is self-referential problem solving, where the paper merely resolves mathematical questions that arise if one adopts a particular framework.

Technique papers: show how to use, adapt or enhance existing techniques to solve vision problems. A substantial number of vision papers are about technique. For example, one might use linear algebra to do colour constancy. Technique papers that use a technique for the first time should show why the technique is appropriate and useful. More commonly, a technique already used in vision is adapted or improved. In this case, the paper should identify the improvements, explain their virtues, and show some experimental examples that support the case. It is uncommon to encounter technique papers with substantial experimental verification --- often, the technique is already known to work, or the author can make the case that the broader range of problems the technique can solve offsets its poorer experimental properties. Very often readers value technique papers not for what the technique does in the paper, but for what they can do with it.

Application papers: identify a problem that can be solved using vision, show that it is worth solving, and solve it. Typically, an application paper may use strategies that are hard to justify from broad principles, but that work. The paper should demonstrate that there is sufficient experimental evidence to believe that the application really works, and that the application is worthwhile, in the sense that it has users. The important difference between application and experimental papers is that application papers solve other people's problems, while experimental papers address what are primarily vision issues. If there is sufficient evidence that a system works, it is unfair to criticize the authors for using dubious or unprincipled techniques in building it. Generally a paper that identifies an application but does not solve it is unattractive, but there may be acceptable papers of this form.

Experimental papers: show a body of experimental evidence either to support or to discourage the use of a technique or a system. An experimental paper should indicate why the experiment is worth performing or what difficulties will be resolved by knowing the result of the experiment. It should show evidence of a substantial experiment, analysed carefully; ideally, the experimental design will be discussed as well. Careful analysis includes showing an overall statistical description, as well as explaining and illustrating special cases or non-obvious features of the experiment. There are numerous difficulties in performing experiments in vision (e.g., the scale of the experiment, difficulties in performing controls, assessing the quality of results, obtaining appropriate experimental materials, etc.), and reviewers will recognise that some experiments are difficult to do, or to analyse precisely. Running a system on several images is seldom a meaningful experiment.