Tips for being a Good Visualization Paper Reviewer

This past year I was papers co-chair for the IEEE VAST (Visual Analytics) Conference, and it gave me the opportunity to read lots of paper reviews again. I had been papers co-chair for VAST once before, in 2009, and twice for the IEEE InfoVis Conference shortly before that. Additionally, I’ve been a (simple) reviewer for hundreds of papers since starting as a professor in 1989, and my students, colleagues, and I have written many papers that have received their own sets of reviews. Add it all up, and I’ve likely read over a thousand reviews in my career.

So what makes a good review, especially for visualization and HCI-related research papers? Reading so many VAST reviews this spring got me thinking about that topic, and I started jotting down some ideas about particular issues I observed. Eventually, I had a couple of pages of notes that have served as the motivation for this article.

I have to say that I was inspired to think about this by Niklas Elmqvist’s great article on the same topic. I found myself noting some further points about the specific contents of reviews, however, so view these as additional considerations on top of the advice Niklas passed along.

I decided to keep the list simple and make it just a bullet list of different points. The starting ones are more specific recommendations, while the latter few turn a little more philosophical about visualization research as a whole. OK, here they are:

• Suggest removals to accompany additions – If you find yourself writing a review and suggesting that the authors add further related work, expand the discussion of topic X, insert additional figures, or simply add any other new material, AND if the submitted paper is already at the maximum length, then you also need to suggest which material the authors should remove. Most venues have page limits. If you're suggesting another page of new content be added, then which existing material should be removed to make room for it? Help the authors with suggestions about that. It's entirely possible for authors to follow reviewers' directions and add content but, in order to create the space, cut other important or valuable sections. Deciding which material to take out is often one of the most difficult aspects of revising a paper. Point out to the authors the sections of their paper that are redundant, unhelpful, or simply not that interesting. That is review feedback they will actually appreciate receiving.

• Be specific – Don't write reviews loaded with generalities. Be specific! Don't simply say that "This paper is difficult to read/follow" or "Important references are missing." If the paper is difficult to follow, explain which section(s) caused particular difficulties. What didn't you understand? I know that sometimes it can be difficult to identify particular problematic sections, but do your best, even if that means pointing to many sections of the paper. Similarly, don't just note that key references are missing – tell the authors which ones. You don't need to provide the full citation if the title, author, and venue make it clear which papers you mean, but do provide enough explanation for the authors to determine which article(s) you believe have been overlooked. Further, provide a quick explanation of why a paper is relevant if that may not be clear. Finally, there's a particular favorite (not!) of mine: "This paper lacks novelty." No, don't just leave it at that. If a paper lacks novelty, then presumably earlier papers and research projects exist that do similar things. What are they? How are they similar (if it isn't obvious)? Explain it to the authors. The unsupported "lacking novelty" comment seems to be a particular way for reviewers to hide out, and it is the hallmark of a lazy review.

• Don’t reject a paper for missing related work – I’ve seen reviewers kill papers because the authors failed to include certain pieces of related work. However, this is one of the easiest things to fix in a paper upon revision. Politely point out to the authors what they missed (see the previous item), but don’t sink a paper because of that. Now, in some cases a paper’s research contribution revolves around the claim of introducing a new idea or technique, and thus if the authors were unaware of similar prior work, that can be a major problem. However, I haven’t found that to be the case too often in practice. We all build in some way on the work of earlier researchers. Good reviewers help authors to properly situate their contribution in the existing body of research, but don’t overly punish them for not being aware of some earlier work.

• Fight the urge to think it's all like your work – When reviewers have done prior research in the area of a new paper, it often seems easy for them to think that everything is just like their work. As a Papers Chair, I've read quite a few reviews where a reviewer mentions their own prior work in the area as being highly relevant, but I honestly couldn't see the connection. This is a type of bias we all have as human beings. Simply be aware of it and take care to be fair and honest with those whose work you're critiquing.

• Don’t get locked into paper types – Paper submissions to the IEEE VIS Conference must designate which of five paper types they are: Model, Domain study, Technique, System, and Evaluation. Tamara Munzner’s “Process and Pitfalls” paper describing the five paper types and explaining the key components of each can be valuable assistance to authors. There’s no rule that a paper must only be one type, however. Recently, I’ve seen reviewers pigeon-hole a paper by its type and list out a set of requirements for papers of that type. Sometimes this does a disservice to paper, I feel. It is possible to have innovative, effective papers that are hybrids of multiple types. I’ve observed very nice technique/system and technique/domain study papers over the years. The key point here is to be open to different styles of papers. It’s not necessary that a paper be only one type (even if the PCS paper submission system forces the author(s) into making only one selection).

• Spend more review time on the middle – For papers that you give a score around a 3 (on a 1-5 scoring system), spend a little more time and explain your thoughts even further than normal. This will help the Papers Chairs and/or Program Committee when they consider the large pile of papers with similar middling scores at the end. By all means, if you really liked a paper and gave it a high score, do explain why, but it's not quite so crucial to elaborate on every nuance. Similarly, if a paper clearly has many problems and won't be accepted, extra review time isn't quite so crucial. For papers "on the fence", however, carefully and clearly explaining their strengths and limitations can be very beneficial to the people above you who make the final decisions on acceptance.

• Defend papers you like – If you've reviewed a paper and you feel it makes a worthwhile contribution, give it a good score and defend your point of view in subsequent discussions with the other reviewers. Particularly if yours is a minority opinion, you can feel pressure to fall in line with the others so as not to be seen as too "easy". Stand up for your point of view. There simply aren't that many papers receiving strong, positive reviews. When you find one, go to bat for it and explain all the good things you saw in it.

• Don’t require a user study – OK, here’s one that’s going to ruffle a few feathers. There is virtually nothing that I dislike more in a review than reading, “The paper has no user study, thus I’m not able to evaluate its quality/utility.” Simply put, that’s hogwash. You have been asked to be a reviewer for this prestigious conference or journal, so presumably that means you have good knowledge about this research area. You’ve read about the project in the paper, so judge its quality and utility. Would a user study, which is often small, over-simplified, and assessing relatively unimportant aspects of a system really convince you of its quality? If so, then I think you need higher standards. Now, a paper should convince the reader that its work is an innovative contribution and/or does provide utility. But there are many ways to do that, and I feel others are often better than (simple) user studies. My 2014 BELIV Workshop article argues that visualization papers can do a better job explaining utility and value to readers through mechanisms such as example scenarios of use and case studies. Unfortunately, user studies on visualization systems often require participants to perform simple tasks that don’t adequately show a system’s value and that easily could be performed without visualization. Of course, there are certain types of papers that absolutely do require user studies. For example, if authors introduce a new visualization technique for a particular type of data and they claim that this technique is better than an existing one, then this claim absolutely should be tested. Relatively few papers of that style are submitted, however.

• Be open-minded to new work in your area – I've noticed that reviewers who have done prior research and published articles in an area sometimes act like gatekeepers to that area. Similar to the "Fight the urge to think it's all like your work" item above, this issue concerns reviewers whose past work on a topic seems to close their minds to new approaches and ideas. (It's even led me, as a Papers Chair, to give less credence to an "expert" review because I felt the individual was not considering a paper objectively.) I suspect there may be a bit of human nature at work here again. New approaches to a problem might seem to diminish the attention paid to past work. I've observed well-established researchers who seem to act as if they are "defending the turf" of a topic area: nothing is good enough for them; no new approach is worthwhile. Well, don't be one of those close-minded people. Welcome new ideas to a topic. Those ideas actually might not be that similar to yours if you think more objectively. In the past, I have sometimes thought that we might have a much more interesting program at our conferences if all papers received reviews from non-experts on a topic. People without preexisting biases usually seem better at identifying the interesting and exciting projects. Of course, the views of experts are still important because they bring the detailed knowledge sometimes needed to point out subtle nuances and errors.


Hopefully, these points identify a number of practical ways for researchers to become better reviewers. I've observed each of these problems occur over and over at the conferences I've chaired. My hope is that this column will lead us all to reflect on our practices and habits when reviewing submitted papers. I personally advocate that reviews be shared and published. At a minimum, faculty advisors can share their reviews with their students to help them learn about the process. It can become another important component of how we train future researchers. Furthermore, I'd be in favor of publishing all the reviews of accepted papers to promote more transparency about the review process and to facilitate a greater discourse about the research involved in each article.

I look forward to hearing your views about these items. Do any strike a particular chord with you? Do you disagree with some? Please feel free to leave a comment and continue the discussion on this important topic.

Author: John Stasko

A professor and data visualization researcher in the School of Interactive Computing at Georgia Tech.

6 thoughts on “Tips for being a Good Visualization Paper Reviewer”

  1. John,

    My first comment would be (besides the fact that I am grateful that someone of your stature rings such an alarm bell): Isn't it odd that this article has gotten no reactions so far? That makes me think, and not in positive ways.

    BTW, I don’t know if you know, but your blog and Elmkvist’s on the same topic have been indicated as valuable guidelines for the EuroVis’17 full paper reviews. I am personally thought skeptical on how much impact they will have. After all, what (I think) we all, as a research community, fail to realize, is that we are in a zero-sum game: We submit to conference X, which has a kind of fixed number of accepted papers. And then we review for conference X. The economic incentive of rejecting all the papers one gets to review for X is simply too big. This is called conflict of interests in other fields, and duly treated — since 100 years or so. So, why are we persisting in the (in my words) naivety that peer review, as we do it now, makes sense?

    Regards,
    Alex Telea


    1. Thanks, Alex. I noticed the lack of replies too, though I think that could mean many things. A bit of a conversation did ensue on the post about this article on my Facebook page.

      I agree with you about the zero-sum game point. It is easy for an "If all the other papers sink a little, then mine will naturally rise" mindset to take hold, whether consciously or subconsciously. I would hope most people do not fall into that trap, but again, it's likely just human nature for such a mindset to emerge. With National Science Foundation proposals, if you have submitted to a program in a particular year, then you cannot serve as a reviewer for it, and I think that makes a lot of sense. Of course, the issue with our conferences is that such a policy would remove the majority of qualified reviewers, since they also submit papers to the conferences.


  2. Dear John, thanks for a very interesting topic. I wish people in previous years had also had a chance to read this blog; that way, some of our papers might have gotten better comments!
    I try to follow your advice not only for vis but also for other venues.
    I somewhat agree with Alex! I have realized that some people know what others are working on now and what they will probably submit to the vis conferences. Hence, sometimes even double-blind review doesn't make sense!

    Ronak Etemadpour


  3. Hey, Prof. Stasko. Thanks a lot for your excellent article! I think your tips on reviewing a paper are really helpful, especially for young researchers in the fields of visualization and HCI. I am quite glad to see that the chairs of VAST17 have forwarded your article, along with Prof. Elmqvist's well-known article on this topic, to all reviewers. One more suggestion: perhaps a panel discussion could be held at this year's IEEE VIS conference, which would at least benefit the whole community to some degree.

