Guide to user performance evaluation at InfoVis 2014

The goal of this guide is to highlight papers that demonstrate evidence of a user performance benefit. I used two criteria:

  1. The paper includes an experiment measuring user performance (e.g., accuracy or speed).
  2. Statistical analysis was used to determine whether the results were reliable.

I did not discriminate beyond those two criteria. However, I am using an "(Explanatory)" tag to highlight one property that only a few papers have: a generalizable explanation for why the results occurred. You can read more about explanatory hypotheses here.

DimpVis: Exploring Time-varying Information Visualizations by Direct Manipulation – Brittany Kondo, Christopher Collins. video pdf
Tuesday 11:10

(Explanatory) Tree Colors: Color Schemes for Tree-Structured Data – Martijn Tennekes, Edwin de Jonge. video [not public]
Tuesday 15:20

(TVCG) GraphDiaries: Animated Transitions and Temporal Navigation for Dynamic Networks – Benjamin Bach, Emmanuel Pietriga, Jean-Daniel Fekete.  video pdf
Tuesday 17:15

(TVCG) Visual Adjacency Lists for Dynamic Graphs – Marcel Hlawatsch, Michael Burch, Daniel Weiskopf.  video pdf
Tuesday 17:35

(TVCG) How to Display Group Information on Node-Link Diagrams: An Evaluation – Radu Jianu, Adrian Rusu, Yifan Hu, Douglas Taggart. video pdf
Tuesday 17:55

The Effects of Interactive Latency on Exploratory Visual Analysis – Zhicheng Liu, Jeffrey Heer. video pdf
Wednesday 10:30

Error Bars Considered Harmful: Exploring Alternate Encodings for Mean and Error – Michael Correll, Michael Gleicher. pdf
Wednesday 11:10

Four Experiments on the Perception of Bar Charts – Justin Talbot, Vidya Setlur, Anushka Anand. [not public]
Wednesday 11:30

(Explanatory) The Persuasive Power of Data Visualization – Anshul Vikram Pandey, Anjali Manivannan, Oded Nov, Margaret Satterthwaite, Enrico Bertini. video pdf
Wednesday 15:20

Comparative Eye Tracking Study on Node-Link Visualizations of Trajectories – Rudolf Netzel, Michael Burch, Daniel Weiskopf. video [not public]
Thursday 8:30

Node, Node-Link, and Node-Link-Group Diagrams: An Evaluation – Bahador Saket, Paolo Simonetto, Stephen Kobourov, Katy Börner. video pdf
Thursday 8:50

(Explanatory) The Not-so-Staggering Effect of Staggered Animated Transitions on Visual Tracking – Fanny Chevalier, Pierre Dragicevic, Steven Franconeri. video pdf
Thursday 9:10

(Explanatory) The Influence of Contour on Similarity Perception of Star Glyphs – Johannes Fuchs, Petra Isenberg, Anastasia Bezerianos, Fabian Fischer, Enrico Bertini. video pdf
Thursday 9:30

Order of Magnitude Markers: An Empirical Study on Large Magnitude Number Detection – Rita Borgo, Joel Dearden, Mark W. Jones. video pdf
Thursday 9:50

(Explanatory) Learning Perceptual Kernels for Visualization Design – Çağatay Demiralp, Michael Bernstein, Jeffrey Heer. pdf
Thursday 16:15

(Explanatory) Ranking Visualizations of Correlation Using Weber’s Law – Lane Harrison, Fumeng Yang, Steven Franconeri, Remco Chang. video pdf
Thursday 16:35

(Explanatory) The relation between visualization size, grouping, and user performance – Connor Gramazio, Karen Schloss, David Laidlaw. video [not public]
Thursday 16:55

A Principled Way of Assessing Visualization Literacy – Jeremy Boy, Ronald Rensink, Enrico Bertini, Jean-Daniel Fekete. pdf
Thursday 17:15

Reinforcing Visual Grouping Cues to Communicate Complex Informational Structure – Juhee Bae, Benjamin Watson. video [not public]
Thursday 17:35

How Hierarchical Topics Evolve in Large Text Corpora – Weiwei Cui, Shixia Liu, Zhuofeng Wu, Hao Wei. video pdf
Friday 8:50

(TVCG) Similarity Preserving Snippet-Based Visualization of Web Search Results – Erick Gomez-Nieto, Frizzi San Roman, Paulo Pagliosa, Wallace Casaca, Elias S. Helou, Maria Cristina F. de Oliveira, Luis Gustavo Nonato. video pdf
Friday 9:30

Effects of Presentation Mode and Pace Control on Performance in Image Classification – Paul van der Corput, Jarke J. van Wijk. video [not public]
Friday 9:50

Huge Growth

This list covers 40% of the InfoVis conference papers, up from 26% last year. Of the TVCG papers presented at InfoVis, 4 out of 6 (67%) evaluated user performance.

It’s interesting to see such a large shift in a single year. I don’t know what caused it, or whether it marks a long-term increase rather than a one-time spike.

SciVis too

I limited the list to InfoVis because that tends to be where the majority of generalizable experiments occur. Also, I’m not such a masochist that I’m going to read through all the papers in both conferences. However, SciVis appears to have some great papers too.

Generalizability

Despite the large number of papers with quantitative experiments, only a few proposed an explanation for the results. I was very generous here: I categorized any paper that cited an explanation for its results as "explanatory," even if the paper didn’t explicitly make the connection. This topic probably warrants its own post, but a critical question to ask about any experiment is how broadly applicable the results are. How much can you change about a tested technique before the experiment’s results become invalid? What are the grounds for assuming that the results apply to any circumstance beyond the specific implementation, data type, and users in the study?

Obviously, please let me know if you find any mistakes. Also, please hassle any authors who didn’t make their pdf available.