This year, 40% of InfoVis papers included an empirical evaluation. I made a list in my last post.
There were also a couple of papers worth noting that describe methods for evaluating visualizations. These papers can help bootstrap future evaluations, leading to a better understanding of when and why vis techniques are effective.
Learning Perceptual Kernels for Visualization Design – Çağatay Demiralp, Michael Bernstein, Jeffrey Heer pdf
A collection of methods is described for finding the relative discriminability of visual feature values (e.g. colors or shapes). The paper also looks at finding the discriminability of combinations of visual features (e.g. colors and shapes). It validates its approach by determining the discriminability of size and showing which of its measures closely match the established Stevens' power law for size.
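To make that validation step concrete, here's a minimal sketch (not the authors' code) of how you might fit Stevens' power law, perceived = k · s^a, to size-judgment data: take logs of both sides and run a linear regression, so the slope recovers the exponent a. The data points and function name below are hypothetical, made up purely for illustration.

```python
import numpy as np

def fit_stevens_exponent(stimulus, perceived):
    """Fit Stevens' power law, perceived = k * stimulus**a,
    by ordinary least squares in log-log space.
    Returns the exponent a and the scale factor k."""
    log_s = np.log(stimulus)
    log_p = np.log(perceived)
    a, log_k = np.polyfit(log_s, log_p, 1)  # slope = exponent a
    return a, np.exp(log_k)

# Hypothetical magnitude-estimation data: physical circle areas
# and the average sizes participants reported perceiving.
areas = np.array([10, 20, 40, 80, 160], dtype=float)
judged = np.array([5.1, 8.3, 13.6, 22.0, 36.1])

a, k = fit_stevens_exponent(areas, judged)
print(f"exponent a = {a:.2f}, scale k = {k:.2f}")
# An exponent near 0.7 would match the classic psychophysics
# result for judged area.
```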
A Principled Way of Assessing Visualization Literacy – Jeremy Boy, Ronald Rensink, Enrico Bertini, Jean-Daniel Fekete pdf
This paper describes how to use Item Response Theory – a technique common in the psychometrics and education literature – to assess a person’s “literacy” or skill with visualizations. I would have liked to see the approach validated (or at least compared) against some external factor, like a person’s experience with visualization. Understandably, that can be tough to measure, but this method certainly shows promise for explaining individual differences in user performance.
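For readers unfamiliar with IRT, here's a minimal sketch of the simplest model in that family, the Rasch (1PL) model: the probability that a person answers an item correctly depends on the gap between their latent ability θ and the item's difficulty b. This illustrates the general technique, not the paper's implementation, and the item difficulties and response pattern below are hypothetical.

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch (1PL) model: probability that a person with ability
    theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def estimate_ability(responses, difficulties, grid=np.linspace(-4, 4, 801)):
    """Maximum-likelihood ability estimate via a simple grid search.
    responses: 0/1 array of answers; difficulties: known item difficulties."""
    log_lik = [
        np.sum(responses * np.log(rasch_prob(t, difficulties))
               + (1 - responses) * np.log(1 - rasch_prob(t, difficulties)))
        for t in grid
    ]
    return grid[int(np.argmax(log_lik))]

# Hypothetical difficulties for five visualization-reading questions,
# plus one person's right/wrong pattern (easy items right, hard ones wrong).
b = np.array([-1.5, -0.5, 0.0, 1.0, 2.0])
answers = np.array([1, 1, 1, 0, 0])

print(f"estimated ability: {estimate_ability(answers, b):.2f}")
```

The appeal for literacy assessment is that ability and item difficulty land on the same scale, so scores stay comparable even when different people answer different subsets of questions.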