Impressions from EuroVis ’17

I recently returned from EuroVis ’17 in Barcelona, Spain. The conference was held at the Universitat Politècnica de Catalunya (UPC), which is close to Camp Nou, the home stadium of Barcelona’s famous soccer team, in the suburbs outside the city center. It is a pleasant and relatively quiet area of the city compared to the bustling La Rambla, Gothic Quarter, and beachfront. It was my first time in Barcelona, and having heard so many great things about the city, I was eager to visit.

EuroVis is similar in scope to IEEE VIS, but the three main research areas of information visualization, scientific visualization, and visual analytics are woven together into one program, as opposed to the three separate conferences you see at VIS. The conference is much smaller than VIS — this year just over 300 people attended. Typically, at any time of the meeting, there are about three sessions occurring in parallel. Beyond regular papers, EuroVis hosts the STAR (State-of-the-Art Reports) presentations as well. Think of them as in-depth surveys of specific subareas of visualization. These reports now appear as papers in the journal Computer Graphics Forum, as do the full research papers in EuroVis.

The conference received 170 paper submissions this year, and 46 (27%) were accepted for presentation. Of the traditional five visualization paper types, “Algorithm” led with 74 submissions, followed by “Design study” (52), “Evaluation” (20), “Theory” (13), and “System” (11). The Algorithm and Design Study areas also had the highest acceptance percentages, at 32% and 27%, respectively.

In addition to full papers, EuroVis takes short paper (four pages of content plus a page of references) submissions, typically for work that is newer and still developing. This year the conference received 64 short paper submissions and accepted 30. Each of these papers is published as an archived conference paper and receives a 15-minute talk slot at the conference, so researchers should definitely consider this track in the future. The conference also accepted 35 posters for presentation during the week.

If I had to think of one word to describe the conference this year, it would be “Hot”. No, by that I don’t mean that the papers were dynamic and sizzling, although there were many good presentations. I’m simply referring to the temperature! Every day the high temperature was close to 90°F, and there wasn’t a cloud in the sky the whole week. Typically, the most valuable commodity at our conferences is good wireless service. Instead, this year it was air conditioning and shade. But hey, I’ll take that anytime over clouds and rain. Just think of it as good practice for VIS this fall in Phoenix.

The conference began with a timely and fascinating keynote talk by Fernanda Viégas and Martin Wattenberg of Google. They discussed many ways that machine learning and visualization are connecting and benefiting each other. Martin and Fernanda showed a number of examples, both from their own work and others’, of how visualization can help people better understand what is going on (beyond the black box, so to speak) in machine learning. Their talk was complemented by Helwig Hauser’s closing capstone, which examined how visualization is moving onto larger and larger data sets. Up front, he pondered what problems our community has “solved” in the last 25 years. While it may be difficult to think of many, he also rightly asked whether a problem is ever really “solved”. Developing “sufficient” solutions to a bevy of problems may simply be good enough and may be an indicator of good progress. He provided many examples where visualization has done just that.

I saw many nice presentations at the conference and tried to come up with a theme or two that emerged, but I had a tough time doing so. Perhaps one broad theme I observed was the number of papers dealing with the HCI aspects of visualization. Topics ranging from evaluation to interaction to storytelling all seemed to have a strong presence this year. There was another nice set of papers concerning text and document visualization as well.

EuroVis traditionally hosts a nice conference dinner on Thursday evening. This year it was at a restaurant on Montjuic, a mountain (actually more of a hill) on the southwest side of the city. The restaurant’s deck afforded a beautiful view down onto the city. The conference organizers also graciously sponsored a guided tour of the famous Sagrada Familia basilica in downtown Barcelona on Wednesday evening. The church is simply stunning both inside and out, and it has become an iconic landmark for the city.

One of my favorite aspects of EuroVis is that the conference provides lunch for attendees right at the conference site. Not having to trudge offsite to a restaurant simply gives more time to sit and talk with fellow attendees, old friends, and new acquaintances. The smaller size of EuroVis compared to VIS also makes it easier to find colleagues. All these things combine to make lunchtime a bit more relaxed. I think my lunch conversations were my favorite aspect of the conference this year. It was great hearing what so many friends are working on currently.

In a lucky coincidence, my home university, Georgia Tech, participates in a cooperative study-abroad program with UPC, which hosted EuroVis. Our faculty spend the summer there and teach our courses to our own students, who also travel there for the term. My fellow Interactive Computing faculty member and good friend Mark Guzdial was literally teaching classes in the same buildings in which EuroVis was occurring. He was even able to drop in and hear my presentation at the conference. IC PhD student Barbara Ericson is teaching the undergraduate infovis class there this summer too. She asked me about giving a guest lecture while there, but I figured that I’d take a break from the teaching. :^)

If you haven’t submitted a paper to or attended EuroVis yet, I strongly encourage you to do so. I hadn’t attended until about five years ago, but now I try to make it back as often as I can. The paper quality is excellent, and it’s usually hosted in a beautiful European city. Next year’s conference is in Brno, the second largest city in the Czech Republic. (With VIS ’18 in Berlin, apparently they didn’t take my suggestion that EuroVis should be in New Orleans, LA.) Just be on the lookout for dragons that look like alligators.

Tips for Being a Good Visualization Paper Reviewer

This past year I was papers co-chair for the IEEE VAST (Visual Analytics) Conference, and it gave me the opportunity to read lots of paper reviews again. I had been papers co-chair for VAST once before, in 2009, and twice for the IEEE InfoVis Conference shortly before that. Additionally, I’ve been a (simple) reviewer for hundreds of papers since starting as a professor in 1989, and my students, colleagues, and I have written many papers that have received their own sets of reviews. Add it all up, and I’ve likely read over a thousand reviews in my career.

So what makes a good review, especially for visualization and HCI-related research papers? Reading so many VAST reviews this spring got me thinking about that topic, and I started jotting down some ideas about particular issues I observed. Eventually, I had a couple of pages of notes that have served as the motivation for this article.

I have to say that I was inspired to think about this by Niklas Elmqvist’s great article on the same topic. I found myself noting some further points about the specific contents of reviews, however, so view these as additional considerations on top of the advice Niklas passed along.

I decided to keep the list simple and make it just a bullet list of different points. The earlier ones are more specific recommendations, while the latter few turn a little more philosophical about visualization research as a whole. OK, here they are:

• Suggest removals to accompany additions – If you find yourself writing a review and suggesting that the authors add further related work, expand the discussion of topic X, insert additional figures, or simply add any other new material, AND the submitted paper is already at the maximum length, then you also need to suggest which material the authors should remove. Most venues have page limits. If you’re suggesting another page of new content be added, then which material should be removed to make room for it? Help the authors with suggestions about that. It’s entirely possible that authors could follow reviewers’ directions to add content but, in order to create the space, remove other important or valuable sections. Deciding which material to take out is often one of the most difficult aspects of revising a paper. Point out to the authors sections of their paper that are redundant, not helpful, or simply not that interesting. That is review feedback they will actually appreciate receiving.

• Be specific – Don’t write reviews loaded with generalities. Be specific! Don’t simply say that “This paper is difficult to read/follow” or “Important references are missing.” If the paper is difficult to follow, explain which section(s) caused particular difficulties. What didn’t you understand? I know that sometimes it can be difficult to identify particular problematic sections, but do your best, even if it is many sections of the paper. Similarly, don’t just note that key references are missing – tell the authors which ones. You don’t need to provide the full citation if the title, author, and venue make it clear which papers you mean, but do provide enough explanation so that authors can determine which article(s) you believe have been overlooked. Further, provide a quick explanation about why a paper is relevant if that may not be clear. Finally, a particular favorite (not!) of mine, “This paper lacks novelty.” No, don’t just leave it at that. If a paper lacks novelty, then presumably earlier papers and research projects exist that do similar things. What are they? How are they similar (if it isn’t obvious)? Explain it to the authors. The unsupported “lacking novelty” comment seems to be a particular way for reviewers to hide out and is an element of a lazy review.

• Don’t reject a paper for missing related work – I’ve seen reviewers kill papers because the authors failed to include certain pieces of related work. However, this is one of the easiest things to fix in a paper upon revision. Politely point out to the authors what they missed (see the previous item), but don’t sink a paper because of that. Now, in some cases a paper’s research contribution revolves around the claim of introducing a new idea or technique, and thus if the authors were unaware of similar prior work, that can be a major problem. However, I haven’t found that to be the case too often in practice. We all build in some way on the work of earlier researchers. Good reviewers help authors to properly situate their contribution in the existing body of research, but don’t overly punish them for not being aware of some earlier work.

• Fight the urge to think it’s all like your work – When reviewers have done prior research in the area of a new paper, it often seems easy for them to think that everything is just like their work. As a Papers Chair, I’ve read quite a few reviews where a reviewer mentions their own prior work in the area as being highly relevant, but I honestly couldn’t see the connection. This is a type of bias we all have as human beings. Simply be aware of it and be careful to be fair and honest to those whose work you’re critiquing.

• Don’t get locked into paper types – Paper submissions to the IEEE VIS Conference must designate which of five paper types they are: Model, Design study, Technique, System, and Evaluation. Tamara Munzner’s “Process and Pitfalls” paper describing the five paper types and explaining the key components of each can be valuable assistance to authors. There’s no rule that a paper must be only one type, however. Recently, I’ve seen reviewers pigeon-hole a paper by its type and list out a set of requirements for papers of that type. Sometimes this does a disservice to the paper, I feel. It is possible to have innovative, effective papers that are hybrids of multiple types. I’ve observed very nice technique/system and technique/design study papers over the years. The key point here is to be open to different styles of papers. It’s not necessary that a paper be only one type (even if the PCS paper submission system forces the author(s) into making only one selection).

• Spend more review time on the middle – For papers that you give a score around a 3 (on a 1-5 scoring system), spend a little more time and explain your thoughts even further than normal. This will help the Papers Chairs and/or Program Committee when considering the large pile of papers having similar middling scores at the end. By all means, if you really liked a paper and gave it a high score, do explain why, but it’s not quite so crucial to elaborate on every nuance. Similarly, if a paper clearly has many problems and won’t be accepted, extra review time isn’t quite so crucial. For papers that may be “on the fence”, however, carefully and clearly explaining the strengths and limitations of those papers can be very beneficial to the people above you making the final decisions on acceptance.

• Defend papers you like – If you’ve reviewed a paper and you feel it makes a worthwhile contribution, give it a good score and defend your point of view in subsequent discussions with the other reviewers. Particularly if your view is a minority opinion, you may feel pressure to go along with the others and not be seen as too “easy”. Stand up for your point of view. There simply aren’t that many papers that receive strong, positive reviews. When you find one, go to bat for it and explain all the good things you saw in it.

• Don’t require a user study – OK, here’s one that’s going to ruffle a few feathers. There is virtually nothing that I dislike more in a review than reading, “The paper has no user study, thus I’m not able to evaluate its quality/utility.” Simply put, that’s hogwash. You have been asked to be a reviewer for this prestigious conference or journal, so presumably that means you have good knowledge of this research area. You’ve read about the project in the paper, so judge its quality and utility. Would a user study, which is often small, over-simplified, and focused on relatively unimportant aspects of a system, really convince you of its quality? If so, then I think you need higher standards. Now, a paper should convince the reader that its work is an innovative contribution and/or does provide utility. But there are many ways to do that, and I feel others are often better than (simple) user studies. My 2014 BELIV Workshop article argues that visualization papers can do a better job explaining utility and value to readers through mechanisms such as example scenarios of use and case studies. Unfortunately, user studies on visualization systems often require participants to perform simple tasks that don’t adequately show a system’s value and that easily could be performed without visualization. Of course, there are certain types of papers that absolutely do require user studies. For example, if authors introduce a new visualization technique for a particular type of data and claim that it is better than an existing one, then that claim absolutely should be tested. Relatively few papers of that style are submitted, however.

• Be open-minded to new work in your area – I’ve noticed reviewers who have done prior research and published articles in an area who then act like gatekeepers to that area. Similar to the “Fight the urge to think it’s all like your work” item above, this issue concerns reviewers whose past work on a topic seems to close their minds to new approaches and ideas. (It’s even led me, as a Papers Chair, to give less credence to an “expert” review because I felt that the individual was not considering a paper objectively.) I suspect there may be a bit of human nature at work here again. New approaches to a problem might seem to diminish the attention on past work. I’ve observed well-established researchers who seem to act as if they are “defending the turf” of a topic area – Nothing is good enough for them; no new approach is worthwhile. Well, don’t be one of those close-minded people. Welcome new ideas to a topic. Those ideas actually might not be that similar to yours if you think more objectively. In the past, I have sometimes thought that we might have a much more interesting program at our conferences if all papers received reviews from non-experts on a topic. People without preexisting biases usually seem to better identify the interesting and exciting projects. Of course, the views of experts are still important because they bring the detailed knowledge sometimes needed to point out subtle nuances and errors.

Hopefully, these points identify a number of practical ways for researchers to become better reviewers. I’ve observed each of these problems occur over and over in the conferences that I’ve chaired. My hope is that this column will lead us all to reflect on our practices and habits when reviewing submitted papers. I personally advocate that reviews be shared and published. At a minimum, faculty advisors can share their reviews with their students to help them learn about the process. It can become another important component of how we train future researchers. Furthermore, I’d be in favor of publishing all the reviews of accepted papers to promote more transparency about the review process and to facilitate a greater discourse about the research involved in each article.

I look forward to hearing your views about these items. Do any strike a particular chord with you?  Disagree about some?  Please feel free to leave a comment and continue the discussion on this important topic.

Impressions from VIS ’16

The VIS 2016 Conference in Baltimore was held just over a month ago. During the conference, I jotted down a few thoughts and impressions that have served as the basis for this post. The goal here is not a review of particular papers, but is instead some high-level observations about trends and topics of conversation among attendees while there.

One big theme of the conference for me this year was visualization education and pedagogy. I think this thread piggybacked on the excellent Education Panel at VIS ’15 in Chicago. A key idea to emerge from that panel was the use of active learning methods and interactive exercises in visualization courses. At the panel, Marti Hearst from Berkeley and Eytan Adar from Michigan talked about their use of such methods in their respective classes. This fall I’ve tried to incorporate some of these kinds of activities into my graduate CS 7450 Information Visualization course. While my course still primarily follows a lecture/Q&A style (with plenty of videos and demos thrown in), I’ve sought to have at least one interactive exercise per class. In general, these exercises have followed one of two styles. First, I have the class generate analytic tasks or questions for the topic being covered that day. This is particularly effective in the section of the course where we examine visualizations of different types of data (time series, network, hierarchy, text, etc.). A second technique I’ve used is to give a small design challenge and have students pair up and create visualization design ideas for about 10 minutes. Volunteers then show their designs, and we discuss the pluses and minuses of each.

Getting back to this year’s conference, the education focus began with a workshop on pedagogical issues in data visualization. It was exciting to see so many attendees in this workshop, and most seemed to be teaching visualization courses at their respective schools. This focus continued in the main conferences at the meeting: InfoVis had a session with education as a primary theme, and VAST had a session where most of the papers were about visual analytics systems for analyzing and understanding data generated from MOOC classes. The majority of these papers were from the Hong Kong University of Science and Technology.

A second theme of the meeting this year that I found interesting was simply “color.” From Theresa-Marie Rhyne’s tutorial to Brown University’s InfoVis paper about the Colorgorical system to the InfoVis Best Poster about Colour Palettes, color seemed to be a topic on everyone’s mind this year. Of course, that’s not surprising at a visualization conference, but it just seemed to have increased emphasis this year. I think that’s a great thing. It helps all of us visualization researchers to have visual perception experts teach us more about all color-related issues.

Another big topic of conversation at the meeting was the panel “On the Death of Scientific Visualization.” It’s been pretty obvious, both via the number of submissions and attendance in the meeting rooms, that for a few years now interest in infovis and visual analytics has been expanding while interest in scivis has been contracting. I don’t conclude from this that scivis is going away, however. The continued development of better techniques for scientific visualization is extremely important. I simply view this changing interest as being a function of the potential audience in these different subareas. The audience for scientific visualization is just that – scientists, for the most part. This is a relatively small set of people, but extremely important ones! The audience for infovis tools is much bigger, and in many cases, is the general public at large.

I think a huge turning point in these conferences was the InfoVis ’07 Conference in Sacramento. One session of the conference was titled “InfoVis for the Masses.” That was a theme echoing throughout the community that year as Hans Rosling’s GapMinder system and TED video had everyone talking, IBM’s ManyEyes system was extremely popular, and the NY Times had begun to excel at data-driven storytelling on their website. From that point forward, infovis grew tremendously in interest and popularity. So what I see with scivis currently is not at all the “death” of that field. I simply believe that InfoVis and VAST have grown tremendously and they each have a broader reach.

On Monday of conference week I attended the BELIV Workshop, which focuses on evaluation-related issues in visualization. I’ve been fortunate to have attended every one of the BELIV workshops going back to the very first one in 2006 in Venice, Italy (not a bad spot for a meeting). I’ve long thought that the evaluation challenge – how do we determine why one visualization is more effective than another – is one of the very top open problems in visualization research. Unfortunately, many traditional HCI-based evaluation methods simply don’t get the job done when it comes to comparing visualizations’ utility, appeal, and effectiveness. (This idea was at the heart of my value-driven evaluation paper from the BELIV ’14 workshop.)

Reflecting back, I have to admit that I’ve been a little disappointed in the paper contributions at BELIV over the past couple of meetings. It just doesn’t seem like interesting, new, useful ideas are emerging on this topic. I think that’s partly understandable, as this is a very difficult problem to address – That’s what makes it such an important, challenging open problem for our community. But hopefully we’ll see some innovative evaluation methods and new approaches develop over the next few years. This is a great problem for young researchers to take on.

My final thought about the conference this year emerged as I sat through one of the last paper sessions and struggled to understand the research being presented, much as I had for many of the earlier sessions. While part of this might be explained by the quality of the talks themselves (Jean-luc Doumont’s captivating capstone talk emphasized that issue, as did Robert Kosara’s blog on common speaking mistakes), I don’t think that was the primary reason. I simply see it as a natural maturing of the field. Many of the individual subareas of visualization research (geovis, text vis, vis for ML, network vis, biomedical vis, time series data vis, etc.) have matured significantly now and have their own rich bodies of existing papers. To make a new contribution in these areas, one needs to do some very advanced research. Hence, it shouldn’t be too surprising that someone not well-versed in all the subarea literature has difficulty following the papers in that session of the conference.

I see this as a natural maturation of our field – Something that occurs in other domains as well and is simply difficult to avoid. It’s kind of too bad in a way, though, because I think it makes the conference papers as a whole a little less accessible to newcomers who don’t have a deep visualization background, or even to us old-timers who haven’t kept up with a specific subarea. But it shows that as a community we are growing, making progress, and solving problems – all good things.

Well, those are some summary thoughts from VIS this year. I’m looking forward to next year’s conference in Phoenix, a city that I have never visited before. Ross Maciejewski tells me that the conference will take place in a nice area downtown and it will definitely be warm!

Next column: Being a good visualization paper reviewer

Starting a Blog

For quite a while now, I’ve been thinking about starting a blog. Some of my colleagues at Georgia Tech and other schools write them and I’ve enjoyed reading their thoughts and hearing their opinions in those forums. I’ve had ideas for topics from time to time, but was just always too busy to make it happen. Well, I finally decided to give up on that excuse.

I likely won’t write columns all that frequently, but I hope to put a new one together every couple months or so. I’ll write mostly about computer science and data visualization research, the focus of my work.  More and more, I’ve found myself with ideas that aren’t appropriate for an academic paper, but I think could be of interest to the community at large. This seems like a good home for them.

Coming next: Impressions from VIS ’16.