Guide to user performance evaluation at InfoVis 2015

The goal of this guide is to highlight vis papers that demonstrate evidence of a user performance benefit. I used two criteria:

  1. The paper includes an experiment measuring user performance (e.g., accuracy or speed).
  2. The results were analyzed statistically to determine whether any differences were reliable.

I did not discriminate beyond those two criteria. However, I am using a gold star (shown as “Explanatory” in the list below) to highlight one property that only a few papers have: a generalizable explanation for why the results occurred. You can read more about explanatory hypotheses here.


A Psychophysical Investigation of Size as a Physical Variable – Yvonne Jansen, Kasper Hornbaek.
video pdf

Explanatory Beyond Memorability: Visualization Recognition and Recall – Michelle A. Borkin, Zoya Bylinskii, Nam Wook Kim, Constance May Bainbridge, Chelsea S. Yeh, Daniel Borkin, Hanspeter Pfister, Aude Oliva.
video pdf

Explanatory Beyond Weber’s Law: A Second Look at Ranking Visualizations of Correlation – Matthew Kay, Jeffrey Heer.
video pdf

Comparing Color and Leader Line Highlighting Strategies in Coordinated View Geovisualizations – Amy L. Griffin, Anthony C. Robinson.
video pdf

Guidelines for Effective Usage of Text Highlighting Techniques – Hendrik Strobelt, Daniela Oelke, Bum Chul Kwon, Tobias Schreck, Hanspeter Pfister.
video pdf

HOLA: Human-like Orthogonal Network Layout – Steve Kieffer, Tim Dwyer, Kim Marriott, Michael Wybrow.
video pdf

Improving Bayesian Reasoning: The Effects of Phrasing, Visualization, and Spatial Ability – Alvitta Ottley, Evan M. Peck, Lane T. Harrison, Daniel Afergan, Caroline Ziemkiewicz, Holly A. Taylor, Paul K. J. Han, Remco Chang.
video pdf

Off the Radar: Comparative Evaluation of Radial Visualization Solutions for Composite Indicators – Yael Albo, Joel Lanir, Peter Bak, Sheizaf Rafaeli.
video (pdf not publicly available)

Orientation-Enhanced Parallel Coordinate Plots – Renata Raidou, Martin Eisemann, Marcel Breeuwer, Elmar Eisemann, Anna Vilanova.
video pdf

Perception-based Evaluation of Projection Methods for Multidimensional Data Visualization – Ronak Etemadpour, Robson Motta, Jose Gustavo de Souza Paiva, Rosane Minghim, Maria Cristina Ferreira de Oliveira, Lars Linsen.
video (pdf not publicly available)

Representing Uncertainty in Graph Edges: An Evaluation of Paired Visual Variables – Hua Guo, Jeff Huang, David H. Laidlaw.
video pdf

SchemeLens: A Content-Aware Vector-Based Fisheye Technique for Navigating Large Systems Diagrams – Aurelie Cohe, Bastien Liutkus, Gilles Bailly, James Eagan, Eric Lecolinet.
video pdf

Explanatory Spatial Reasoning and Data Displays – Susan VanderPlas, Heike Hofmann.
video (pdf not publicly available)

Suggested Interactivity: Seeking Perceived Affordances for Information Visualization – Jeremy Boy, Louis Eveillard, Francoise Detienne, Jean-Daniel Fekete.
video pdf

TimeNotes: A Study on Effective Chart Visualization and Interaction Techniques for Time-Series Data – James Walker, Rita Borgo, Mark Jones.
video pdf

VectorLens: Angular Selection of Curves within 2D Dense Visualizations – Maxine Dumas, Michael McGuffin, Patrick Chasse.
video pdf

Visual Encodings of Temporal Uncertainty: A Comparative User Study – Theresia Gschwandtner, Markus Bogl, Paolo Federico, Silvia Miksch.
video (pdf not publicly available)

Voyager: Exploratory Analysis via Faceted Browsing of Visualization Recommendations – Kanit Wongsuphasawat, Dominik Moritz, Anushka Anand, Jock Mackinlay, Bill Howe, Jeffrey Heer.
video pdf

UnTangle Map: Visual Analysis of Probabilistic Multi-Label Data – Nan Cao, Yu-Ru Lin, David Gotz.
video (pdf not publicly available)

No Clear Change

This year, 37% of InfoVis conference papers measured user performance. Last year, I thought there was a big change, but the year-to-year variation is indistinguishable from noise.

Here’s the Agresti-Coull binomial 84% CI for each year, so the proportions can be compared.

[Chart: proportion of papers measuring user performance by year, with 84% CIs]
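
For reference, here is a minimal sketch of how such an interval can be computed. This is my own Python re-implementation, not the code behind the chart, and the 14-of-38 count is only an illustrative placeholder (the raw counts aren’t listed here). The 84% level is chosen because two proportions whose ~84% intervals don’t overlap differ at roughly the α = .05 level.

```python
# Minimal sketch of an Agresti-Coull binomial interval (my own re-implementation,
# not the code used for the chart above).
from scipy.stats import norm

def agresti_coull_ci(successes, trials, confidence=0.84):
    """Return the Agresti-Coull interval for a binomial proportion."""
    z = norm.ppf(1 - (1 - confidence) / 2)      # ~1.41 for an 84% interval
    n_adj = trials + z ** 2                     # adjusted sample size
    p_adj = (successes + z ** 2 / 2) / n_adj    # adjusted proportion
    half_width = z * (p_adj * (1 - p_adj) / n_adj) ** 0.5
    return max(0.0, p_adj - half_width), min(1.0, p_adj + half_width)

# Illustrative only: 14 of 38 papers is about 37%; these are not the real counts.
print(agresti_coull_ci(14, 38))
```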

The Journal Articles

It’s great that the distinction between conference papers and journal papers is fading away, at least in terms of the conference program. I’m only maintaining it here for historical comparison.

In the chart on the right, I collapsed the past three years of data and recomputed the means and CIs. For TVCG, I only include papers presented under the InfoVis track. TVCG has a higher proportion of papers with a performance evaluation, and the difference can’t simply be explained by random noise. I don’t know whether the difference is caused by where papers are submitted, the different review process, different reviewers, or rolling submissions being more conducive to running a study. But more of the journal papers test their claims.
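
As a rough illustration of that kind of check (not the actual analysis, and with made-up counts, since the raw numbers aren’t listed in this post), a chi-square test on the pooled three-year counts is one way to ask whether the conference/TVCG gap could plausibly be noise:

```python
# Rough sketch only: the counts below are hypothetical placeholders, not the real data.
from scipy.stats import chi2_contingency

# rows: venue (conference papers, TVCG InfoVis-track papers)
# cols: [has a user performance evaluation, does not], pooled over three years
table = [[45, 75],
         [30, 25]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # a small p would argue against "just noise"
```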

Honorable Mentions

In the last couple of years, not a single best paper or honorable mention attempted to experimentally validate its claims. This year broke that trend, with awards going to papers that measured (or reanalyzed) user performance.

Little Generalization

The proportion of papers with an explanatory hypothesis that can inform generalizability is still very low. I try to be generous with this assessment, but very few papers attempt to explain why, or whether, the results apply outside the specific conditions of the study. Also, there are still a lot of guesses presented as hypotheses.

Also see the guides for 2014 and 2013.

Obviously, please let me know if you find a mistake or think I missed something. Also, please hassle any authors who didn’t make their pdf publicly available.
