Category Archives: Visualization

Open Access VIS

The purpose of Open Access Vis is to highlight open access papers, materials, and data, and to track how many papers are only available behind a paywall. See the about page for more details about what counts as reliable open access.

Why?

Most visualization research papers are funded by the public, reviewed and edited by volunteers, and formatted by the authors. So for IEEE to charge $33 to each person who wants to read the paper is… well… (I’ll let you fill in the blank). This paywall runs contrary to the supposed public good of research and to the claim that visualization research helps practitioners (who are typically not on a university campus).

But there’s an upside. IEEE specifically allows authors to post their own version of a paper (not the IEEE version with a header and page numbers) to:

  • The author’s website
  • The institution’s website (e.g., lab site or university site)
  • A pre-print repository (which gives it a static URL and avoids “link rot”)

Badges

Continue reading

Guide to user performance evaluation at InfoVis 2016

Previous years: 2013, 2014, 2015

The goal of this guide is to highlight vis papers that demonstrate evidence of a user performance benefit. I used two criteria:

  1. The paper includes an experiment measuring user performance (e.g. accuracy or speed)
  2. Analysis of statistical differences determined whether results were reliable.

I did not discriminate beyond those two criteria. However, I am using a gold star to highlight one property that only a few papers have: a generalizable explanation for why the results occurred. You can read more about explanatory hypotheses here.

Continue reading

A Look at the Keywords in InfoVis 2016 Submissions

It’s that time of year again. InfoVis abstracts have been submitted, and lots of people are scrambling to finish their full submissions.

I was curious about the distribution of keywords in the submissions, so I visualized some of the data available to the program committee (PC). After checking with the chairs, I thought others might be curious about the results.

Note that these are only abstracts, so there will probably be some attrition before the full paper submission deadline. To see a MUCH more thorough analysis of multiple years and venues, check out http://keyvis.org

Continue reading

[Image: “citation not always needed” (via XKCD)]

Why I Don’t Write a “Related Work” Section

Imagine someone explaining a complex topic, like how to improve the fuel efficiency of a boat. But shortly after starting the explanation, they go off on a series of tangents about pretty boats they’ve seen, big boats, rubber ducks, submarines, and other transportation vehicles such as the new 787 by Boeing et al. Then they return to the explanation of boat efficiency without ever referencing why they brought up those strange tangents.

Tangents are confusing, and they hurt clarity. The related work section is often just a string of unrelated tangents, which is a waste of the reader’s time.

Now let me make something clear: I am not necessarily saying that papers should cite fewer sources. Instead, each citation should serve an obvious, specific purpose. And if that purpose is so tangential to the structure of your argument that you need to put it in what amounts to a citation dumping ground, then it isn’t needed.

What’s the purpose of a citation?

Continue reading

InfoVis 2014 – The methods papers

This year, 40% of InfoVis papers included an empirical evaluation. I made a list in my last post.

There were also a couple papers worth noting that described methods for evaluating visualizations. These papers can help bootstrap future evaluations, leading to a better understanding of when and why vis techniques are effective.

Learning Perceptual Kernels for Visualization Design – Çağatay Demiralp, Michael Bernstein, Jeffrey Heer pdf
This paper describes a collection of methods for finding the relative discriminability of visual feature values (e.g. colors or shapes). It also looks at finding the discriminability of combinations of visual features (e.g. colors and shapes). The paper validates its approach by determining the discriminability of size and showing which of its measures closely match the established Stevens’ power law for size.
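For context (my gloss, not a claim from the paper itself), Stevens’ power law relates perceived magnitude $\psi$ to physical magnitude $I$:

$$\psi(I) = k\,I^{\beta}$$

Here $k$ is a scaling constant, and the exponent $\beta$ depends on the stimulus dimension (close to 1 for line length, and commonly reported to be well below 1 for area), which is what makes the law a useful benchmark for measured size discriminability.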

A Principled Way of Assessing Visualization Literacy – Jeremy Boy, Ronald Rensink, Enrico Bertini, Jean-Daniel Fekete pdf
This paper describes how to use Item Response Theory – a technique common in psychometrics and the education literature – to assess a person’s “literacy” or skill with visualizations. I would have liked to see the approach validated against (or at least compared with) some external factor like a person’s experience with visualization. Understandably, that can be tough to measure, but this method certainly shows promise for explaining individual differences in user performance.
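For readers unfamiliar with Item Response Theory, a common formulation (a sketch on my part, not necessarily the exact model the paper uses) is the two-parameter logistic model, which gives the probability that a person with latent ability $\theta$ answers item $i$ correctly:

$$P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}$$

Here $b_i$ is the item’s difficulty and $a_i$ its discrimination; fitting the model to a set of responses yields both the item parameters and an estimate of each person’s visualization literacy $\theta$.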

Guide to user performance evaluation at InfoVis 2014

The goal of this guide is to help highlight papers that demonstrate evidence of a user performance benefit. I used two criteria:

  1. The paper includes an experiment measuring user performance (e.g. accuracy or speed)
  2. Analysis of statistical differences determined whether results were reliable.

I did not discriminate beyond those two criteria. However, I am using a gold star to highlight one property that only a few papers have: a generalizable explanation for why the results occurred. You can read more about explanatory hypotheses here.

Continue reading

So much we don’t know about visualization

It’s always amazing how many basic visualization questions are yet to be answered. Robert Kosara raised one yesterday: What is the most effective way to show large scale differences?

Rather than using a bar chart to represent the values, he made a demo that sequentially shows dots to demonstrate how many times more a CEO makes than a worker. His solution looked compelling, but I realized that I don’t know of any literature in vis that has empirically tackled this problem. A goal as simple as visualizing a pair of values at very different scales has few (if any) guidelines.

Furthermore, although there have been a few papers on animation in charts (e.g. [2, 4]), the basic approach of using animation to represent a single value still has many unanswered questions.

Robert’s demo used both numerosity and the duration of the animation to visualize each value. I forked his code to make a demo of some alternative animation styles (options at the bottom of the demo), but I don’t know of any literature that hints at whether or why one would be better than another.
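For concreteness, here is a minimal sketch of the basic encoding in Python (the actual demos run in the browser, and the pay ratio below is purely hypothetical): one dot is revealed per frame, so a value N times larger gets N times more dots and takes N times longer to draw.

```python
# Minimal sketch: encode a value by revealing one dot per animation frame,
# so both numerosity and duration scale with the value.
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

ceo_to_worker_ratio = 300   # hypothetical ratio, purely for illustration
cols = 20                   # dots per row of the grid

fig, ax = plt.subplots()
ax.set_xlim(-1, cols)
ax.set_ylim(-1, ceo_to_worker_ratio // cols + 1)
ax.axis("off")

def draw(frame):
    # add one new dot per frame, laid out on a grid
    ax.plot(frame % cols, frame // cols, "o", color="steelblue", markersize=4)

anim = FuncAnimation(fig, draw, frames=ceo_to_worker_ratio,
                     interval=50, repeat=False)
plt.show()
```

Variations could hold the total duration constant and vary only the number of dots, or vice versa; that is exactly the kind of comparison I haven’t seen evaluated.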

Continue reading

Mysterious Origins of Hypotheses in Visualization and CHI

For years, I’ve noticed a strange practice in Visualization and CHI. When describing a study, many papers list a series of predictions and number them as H1, H2, H3… For example:

  • H1: Red graphs are better than blue graphs
  • H2: Participants will read vertical bar graphs more quickly than horizontal bar graphs

I have never seen this practice in any other field, and I was curious as to the origin.

Half Hypotheses

Although these statements are referred to as ‘hypotheses’, they’re not… at least, not completely. They are predictions. The distinction is subtle but important. Here’s the scientific definition of hypothesis according to The National Academy of Sciences:

A tentative explanation for an observation, phenomenon, or scientific problem that can be tested by further investigation…

The key word here is explanation. A hypothesis is not simply a guess about the result of an experiment. It is a proposed explanation that can predict the outcome of an experiment. A hypothesis has two components: (1) an explanation and (2) a prediction. A prediction simply isn’t useful on its own. If I flip a coin and correctly guess “heads”, it doesn’t tell me anything other than that I made a lucky guess. A hypothesis would be: the coin is unevenly weighted, so it is far more likely to land heads-up. It has an explanation (uneven weighting) that allows for a prediction (frequently landing heads-up).
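To make “tested by further investigation” concrete, here is a minimal sketch (the flip counts are hypothetical, and scipy’s binomtest is just one convenient way to run the test) of checking that prediction against the fair-coin assumption:

```python
# Sketch: test the weighted-coin hypothesis against a fair-coin null.
# The flip counts are hypothetical, purely for illustration. Requires scipy >= 1.7.
from scipy.stats import binomtest

heads, flips = 83, 100
result = binomtest(heads, flips, p=0.5)   # null: the coin is fair
print(result.pvalue)                      # a tiny p-value makes the fair-coin assumption implausible
```

Note that the test only addresses the prediction (far more heads than chance would produce); the explanation (uneven weighting) is what turns that result into something generalizable.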

The Origin of H1, H2, H3…

Besides the unusual use of the term “hypothesis”, where does the numbering style come from? It appears in many IEEE InfoVis and ACM CHI papers going back to at least 1996 (maybe earlier?). However, I’ve never seen it in psychology or social science journals. The best candidate I can think of for the origin of this numbering is a misunderstanding of null hypothesis testing, which can be best explained with an example. Here is a null hypothesis with two alternative hypotheses:

  • H0: Objects do not affect each other’s motion (null hypothesis)
  • H1: Objects attract each other, so a ball should fall towards the Earth
  • H2: Objects repel each other, so a ball should fly away from the Earth

Notice that the hypotheses are mutually exclusive, meaning only one can be true. In contrast, Vis/CHI-style hypotheses are each independent, and any, all, or none of them can be true. I’m not sure how one style came to be transformed into the other, but it’s my best guess for the origin.

Unclear

On top of my concerns about diction and utility, referring to statements by number hurts clarity. Repeatedly scrolling back and forth trying to remember “which one was H3 again?” makes reading frustrating and unnecessarily effortful. It’s bad practice to label variables in code as var1 and var2. Why should it be any better to refer to written concepts numerically? Let’s put an end to these numbered half-hypotheses in Vis and CHI.

Do you agree with this perspective and proposed origin? Can you find an example of this H numbering from before 1996? Or in another field?

Guide to user performance evaluation at InfoVis 2013

When reading a paper (vis or otherwise), I tend to read the title and abstract and then jump straight to the methods and results. Beyond the claim that a technique or application is useful, I want to understand how the paper supports its claim of improving users’ understanding of the data. So I put together this guide to the papers that ran experiments comparatively measuring user performance.

1. Common Angle Plots as Perception-True Visualizations of Categorical Associations – Heike Hofmann, Marie Vendettuoli – PDF
Tuesday 12:10 pm

2. What Makes a Visualization Memorable? – Michelle A. Borkin, Azalea A. Vo, Zoya Bylinskii, Phillip Isola, Shashank Sunkavalli, Aude Oliva, Hanspeter Pfister – PDF
Tuesday 2:00 pm

3. Perception of Average Value in Multiclass Scatterplots – Michael Gleicher, Michael Correll, Christine Nothelfer, Steven Franconeri – PDF
Tuesday 2:20 pm

4. Interactive Visualizations on Large and Small Displays: The Interrelation of Display Size, Information Space, and Scale – Mikkel R. Jakobsen, Kasper Hornbaek – PDF
Tuesday 3:00 pm

5. A Deeper Understanding of Sequence in Narrative Visualization – Jessica Hullman, Steven Drucker, Nathalie Henry Riche, Bongshin Lee, Danyel Fisher, Eytan Adar – PDF
Wednesday 8:30 am

6. Visualizing Request-Flow Comparison to Aid Performance Diagnosis in Distributed Systems – Raja R. Sambasivan, Ilari Shafer, Michelle L. Mazurek, Gregory R. Ganger – PDF
Wednesday 10:50 am

7. Evaluation of Filesystem Provenance Visualization Tools – Michelle A. Borkin, Chelsea S. Yeh, Madelaine Boyd, Peter Macko, Krzysztof Z. Gajos, Margo Seltzer, Hanspeter Pfister – PDF
Wednesday 11:10 am

8. DiffAni: Visualizing Dynamic Graphs with a Hybrid of Difference Maps and Animation – Sébastien Rufiange, Michael J. McGuffin – PDF
Thursday 2:00 pm

9. Edge Compression Techniques for Visualization of Dense Directed Graphs – Tim Dwyer, Nathalie Henry Riche, Kim Marriott, Christopher Mears – PDF
Thursday 3:20 pm

Less than a quarter

Only 9 out of 38 InfoVis papers (24%) this year comparatively measured user performance. While that number has improved and doesn’t need to be 100%, less than a quarter just seems low.

Possible reasons why more papers don’t evaluate user performance

  • Limited understanding of experiment design and statistical analysis. How many people doing vis research are familiar with different experiment designs like method of adjustment or forced-choice? How many have run a t-test or a regression? (The analysis itself can be just a few lines of code; see the sketch after this list.)
  • Evaluation takes time. A paper that doesn’t evaluate user performance can easily scoop a similar paper with a thorough evaluation.
  • Evaluation takes space. Can a novel technique and an evaluation be effectively presented within 10 pages? Making better use of supplemental material may solve this problem.
  • Risk of a null result. It’s hard – if possible at all – to truly “fail” in a technique or application submission. But experiments may reveal no statistically significant benefit.
  • The belief that the benefit of a vis is obvious. We generally have poor awareness of our own attentional limitations, so it’s actually not always clear which aspects of a visualization do or don’t work. Besides our limited ability to assess ourselves, it’s also important to know for which tasks a novel visualization is better than traditional methods (e.g. Excel and SQL queries) and when the traditional methods are better.
  • A poisoned well. If a technique or application has already been published without evaluation, reviewers would scoff at an evaluation that merely confirms what was already assumed. So an evaluation of past work would only be publishable if it contradicts the unevaluated assumptions. It’s risky to put the time into a study if positive results may not be publishable.
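On the statistics point above, here is a minimal sketch of a between-subjects comparison of task completion times (the condition names and numbers are hypothetical, purely for illustration):

```python
# Sketch: compare task completion times between two conditions with a t-test.
# Data and condition names are hypothetical.
from scipy import stats

times_new_vis  = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5]    # seconds per task
times_baseline = [14.3, 15.1, 12.9, 16.0, 13.7, 14.8]

t, p = stats.ttest_ind(times_new_vis, times_baseline)
print(f"t = {t:.2f}, p = {p:.3f}")
```

Of course, the code is the easy part; the experiment design, recruiting, and interpretation are where the time and space costs in the list above come in.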

I’m curious to hear other people’s thoughts on the issue. Why don’t more papers have user performance evaluations? Should they?

P.S. Check out this paper looking at evaluation in SciVis.

Science and War – Visualizing U.S. Budget Priorities

Neil deGrasse Tyson recently noted that the 2008 bank bailout was larger than NASA’s total budget over its entire 50-year history. Inspired by that comparison, I decided to look at general science spending relative to the defense budget. How do we prioritize our tax dollars?

This information quest also gave me an opportunity to try using Tableau to visualize the results.

With science spending in green and military spending in red, the difference is enormous. In fact, annual military spending is greater than the total cost of NASA’s entire history (adjusted for inflation).

[Chart: U.S. science spending (green) vs. military spending (red)]

Interactive version hosted by Tableau

Note: Tableau Public went down while I was trying to make this chart. During that time, I couldn’t save or open anything! The lesson here is to be cautious when using Tableau Public.