Guide to user performance evaluation at InfoVis 2017

Previous years: 2013, 2014, 2015, 2016

The goal of this guide is to highlight vis papers that demonstrate evidence of a user performance benefit. I used two criteria:

  1. The paper includes an experiment measuring user performance (e.g., accuracy or speed).
  2. The paper analyzes statistical differences to determine whether the results are reliable.

I did not discriminate beyond those two criteria. However, I am using a gold star (★) to highlight one property that only a few papers have: a generalizable explanation for why the results occurred. You can read more about explanatory hypotheses here.


Active Reading of Visualizations – Jagoda Walny, Samuel Huron, Charles Perin, Tiffany Wun, Richard Pusch, and Sheelagh Carpendale
pdf

Assessing the Graphical Perception of Time and Speed on 2D + Time Trajectories – Charles Perin, Tiffany Wun, Richard Pusch, and Sheelagh Carpendale
pdf

★ Blinded with Science or Informed by Charts? A Replication Study – Pierre Dragicevic and Yvonne Jansen
pdf

Conceptual and Methodological Issues in Evaluating Multidimensional Visualizations for Decision Support – Evanthia Dimara, Anastasia Bezerianos, and Pierre Dragicevic
pdf

★ Data Through Others’ Eyes: The Impact of Visualizing Others’ Expectations on Visualization Interpretation – Yea-Seul Kim, Katharina Reinecke, and Jessica Hullman
pdf

EdWordle: Consistency-preserving Word Cloud Editing – Yunhai Wang, Xiaowei Chu, Chen Bao, Lifeng Zhu, Oliver Deussen, Baoquan Chen, and Michael Sedlmair
pdf

The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality? – Benjamin Bach, Ronell Sicat, Maxime Cordeil, Johanna Beyer, and Hanspeter Pfister
pdf

Imagining Replications: Graphical Prediction & Discrete Visualizations Improve Recall & Estimation of Effect Uncertainty – Jessica Hullman, Matthew Kay, Yea-Seul Kim, and Samana Shrestha
pdf

Modeling Color Difference for Visualization Design – Danielle Albers Szafir
pdf

Open vs Closed Shapes: New Perceptual Categories? – David Burlinson, Kalpathi Subramanian, and Paula Goolkasian
no public pdf available

★ Taking Word Clouds Apart: An Empirical Investigation of the Design Space for Keyword Summaries – Cristian Felix, Enrico Bertini, and Steven Franconeri
pdf

Visualizing Nonlinear Narratives with Story Curves – Nam Wook Kim, Benjamin Bach, Hyejin Im, Sasha Schriber, Markus Gross, and Hanspeter Pfister
pdf

Evaluating Cartogram Effectiveness – Sabrina Nusrat, Muhammad Jawaherul Alam, and Stephen Kobourov
pdf

Evaluating Interactive Graphical Encodings for Data Visualization – Bahador Saket, Arjun Srinivasan, Eric D. Ragan, and Alex Endert
pdf

★ Perceptual Biases in Font Size as a Data Encoding – Eric Carlson Alexander, Chih-Ching Chang, Mariana Shimabukuro, Steve Franconeri, Christopher Collins, and Michael Gleicher
pdf

A flat trend

This year, 31% of InfoVis conference papers measured user performance. Although the proportion of papers with experiments has risen, the proportion measuring performance has changed little.

Here’s the Agresti-Coull binomial 84% CI for each year. Non-overlapping 84% CIs roughly correspond to a significant difference at α = .05, so each pair of proportions can be compared directly.

[Chart: proportion of papers measuring user performance by year, with 84% CIs]
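If you want to compute these intervals yourself, here’s a minimal sketch in Python. The counts are hypothetical placeholders (14 of 45 papers ≈ 31%), since the exact tallies aren’t listed here.

```python
from math import sqrt
from statistics import NormalDist

def agresti_coull_ci(successes, n, level=0.84):
    """Agresti-Coull confidence interval for a binomial proportion."""
    z = NormalDist().inv_cdf((1 + level) / 2)  # z ≈ 1.405 for an 84% CI
    n_adj = n + z ** 2                          # adjusted sample size
    p_adj = (successes + z ** 2 / 2) / n_adj    # adjusted proportion
    margin = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - margin, p_adj + margin

# Hypothetical counts for illustration: 14 of 45 papers (~31%)
low, high = agresti_coull_ci(14, 45)
print(f"84% CI: [{low:.2f}, {high:.2f}]")
```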

Little generalization

There is still a very low proportion of papers with an explanatory hypothesis that can inform generalizability. This assessment is tough to make, and I try to be generous with it. Even so, very few papers attempt to explain why the results occurred or whether they apply outside the specific conditions of the study. And many papers still present guesses as hypotheses.

Obviously, please let me know if you find a mistake or think I missed something. Also, please hassle any authors who didn’t make their pdf publicly available.