Author Archives: Steve Haroz

Open Access VIS 2019 – Part 3 – Who’s Who

This is part 3 of a multi-part post summarizing open practices in visualization research for 2019, as displayed on Open Access Vis. Research openness can rely on either policy or individual behavior. In this part, I’ll look at the individuals. Who in the visualization community is consistently sharing the most research? And who is not?

Related posts: 2017 overview, 2018 overview, 2019 part 1 – Updates and Papers, 2019 part 2 – Research Practices

Whose papers are open?

Many authors are sharing most or even all of their papers on open repositories, which is fantastic progress. But many are not, despite encouragement after acceptance. Easier options, better training, and formal policies will likely be necessary for a field-wide change in behavior. Continue reading

Posted in Open Science.

Open Access VIS 2019 – Part 2 – Research Practices

This is part 2 of a multi-part post summarizing open practices in visualization research for 2019. See Open Access Vis for all open research at VIS 2019.

This post describes the sharing of research artifacts: components of the research process itself rather than simply the paper. I refer to sharing both these artifacts and the paper as “open research practices”.

Related posts: 2017 overview, 2018 overview, 2019 part 1 – Updates and Papers, 2019 part 3 – Who’s who?

Open research artifacts for 2019

I’ve broken research transparency into 4 artifacts and counted the number of papers on an open persistent repository that linked to each. I’ve given “partial credit” if the component is available but not on a persistent repository.
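As a sketch of this tallying scheme (the artifact names and field layout here are illustrative assumptions, not the actual OAVIS code):

```python
# Illustrative tally of open-artifact credit per paper.
# The artifact names below are assumptions for this sketch,
# not the site's real schema.
ARTIFACTS = ["materials", "data", "analysis_code", "preregistration"]

def artifact_credit(paper):
    """1 point per artifact on a persistent repository,
    0.5 'partial credit' if available but not persistent."""
    credit = 0.0
    for artifact in ARTIFACTS:
        status = paper.get(artifact)  # "persistent", "available", or None
        if status == "persistent":
            credit += 1.0
        elif status == "available":
            credit += 0.5  # partial credit
    return credit

example = {"materials": "persistent", "data": "available"}
# artifact_credit(example) == 1.5
```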

Continue reading

Posted in Open Science.

Open Access VIS 2019 – Part 1 – Updates and Papers

The purpose of Open Access Vis is to highlight open access papers and transparent research practices on persistent repositories outside of a paywall. See the about page and my paper on Open Practices in Visualization Research for more details.

Most visualization research is funded by the public, reviewed and edited by volunteers, and formatted by the authors. So for IEEE to charge $33 for each person who wants to read the paper is… well… (I’ll let you fill in the blank). This paywall, as well as the general opacity of research practices and artifacts, is contrary to the supposedly public good of research and the claim that visualization research helps practitioners who are not on a university campus. And this need for accessibility extends to all research artifacts for both scientific scrutiny and applicability.

This is part 1 of a multi-part post summarizing open practices in visualization research for 2019.
Related posts: 2017 overview, 2018 overview, 2019 part 2 – Research Practices, 2019 part 3 – Who’s who?

Updates for 2019

Continue reading

Posted in Open Science.

Updates for Open Access VIS in 2019

This year, there will be some small updates regarding how Open Access VIS works.

1. For papers, only persistent archives will be allowed

It’s great that you have a website or a GitHub repository, but link rot has been a serious problem on OAVIS, with about 5% of PDFs disappearing each year. Reliable archives keep papers in a freely accessible, persistent, immutable, and uniquely identifiable way. Archives that meet these criteria include: Continue reading
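That 5% annual loss compounds quickly. A minimal sketch of the expected survival rate, assuming a constant annual loss:

```python
# With roughly 5% of PDFs disappearing each year, the expected fraction
# of non-archived links still working after n years is (1 - 0.05) ** n.
def surviving_fraction(years, annual_loss=0.05):
    return (1.0 - annual_loss) ** years

# After a decade, roughly 40% of non-archived links would be dead:
# surviving_fraction(10) is about 0.60
```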

Posted in Science.

An open letter about open research practices at IEEE VIS

Last week, with great disappointment, I resigned from the position of open practice co-chair for the IEEE VIS conference. I delayed this blog post by several days to avoid a discourteous surprise public announcement. Here is my open letter: Continue reading

Posted in Science.

Open Access VIS – updates for 2018

The purpose of Open Access Vis is to highlight open access papers, materials, and data and to see how many papers are available on reliable open access repositories outside of a paywall. See the about page for more details about reliable open access. Also, I just published a paper summarizing this initiative, describing the status of visualization research as of last year, and proposing possible paths for improving the field’s open practices: Open Practices in Visualization Research

Why?

Most visualization research papers are funded by the public, reviewed and edited by volunteers, and formatted by the authors. So for IEEE to charge $33 for each person who wants to read the paper is… well… (I’ll let you fill in the blank). This paywall is contrary to the supposedly public good of research and the claim that visualization research helps practitioners (who are not on a university campus).

But there’s an upside. IEEE specifically allows authors to post their version of a paper (not the IEEE version with a header and page numbers) to:

  • The author’s website
  • The institution’s website (e.g., lab site or university site)
  • A pre-print repository (which gives it a static URL and avoids “link rot”)

Badges

Continue reading

Posted in Uncategorized.

Confusion about open science repositories

I recently gave a talk on Open Practices in Visualization Research at the workshop on Methodological Approaches for Visualization (BELIV). Unfortunately, with only a 10 minute talk, I had to leave out many important details, which has resulted in some confusion.

A few people have brought up concerns that repositories for open data and materials do not have long term viability. “What happens if the site shuts down in 5 years?” As an alternative, people have proposed storing data and materials in a pay-walled IEEE repository. While it’s good to hear that open access is being discussed, being informed is important for the discussion to be fruitful. So I’ll highlight some critical information about the Open Science Framework (OSF).

1. 50-year preservation fund

The Center for Open Science (COS) has a fund devoted specifically to preserving and maintaining the repository in case the organization ever shuts down. This fund would make a read-only form of the repository accessible for 50+ years. Here is a quote from the sustainability supplement in the COS’s strategic plan (page 24):

In the event of COS’s closing, the preservation fund guarantees long-term hosting and preservation of all the data and content stored on the OSF (50+ years based on present costs and use)

2. An open license with no paywall

Authors posting content to OSF can choose from a variety of open licenses. Any future work that builds upon the content, incorporates it into a meta-analysis, or scrutinizes it can freely access and link to the material. Openness facilitates research without needing to rely on an expensive subscription to the publisher. Furthermore, an open license means that future work will not require the original author to give permission or even reply to emails.

On the other hand, some people want the content to be stored in IEEE’s digital library. That is exactly the opposite of open science. It would be behind a pay-wall (that’s not open). Also, IEEE would own the copyright of the data and material. Either IEEE or an obnoxious original author in fear of scrutiny could obstruct any attempt to publish work that reuses the content on licensing grounds (that’s not science).

3. No risk of lock-in

The openness of OSF allows people to copy their content elsewhere in the future. So there is little risk of being “stuck” with OSF if you don’t like it. If someone creates a better site, they could even mirror OSF’s content, so future open science systems could start with all of the information already on OSF.

4. Updates and edits to content

Like in version control, most open science repositories allow for updating content such that previous versions are always accessible. That approach allows for further updates such as added documentation or fixing typos without erasing the peer-reviewed version. In contrast, making a change to the IEEE digital library is a nightmare.

5. Templates for policies and submission forms

There have been some attempts by individuals and organizations such as ACM to “reinvent the wheel” by creating their own policies for open practice requirements and badges. These attempts often fail to consider flexibility and transparency in reporting.

Alternatively, the Transparency and Openness Promotion (TOP) guidelines have pre-written templates for modular policies with various levels of strictness (from simply reporting whether an artifact is available to mandatory submission) and for various artifacts (materials, collected data, analysis code, etc.). A table (artifact × strictness) summarizing the different policies is available on the last page here.

  1. The full set of modular open policy templates with example implementations by various journals is available here.
  2. An author disclosure form for making submissions that request one of the open science badges is available here.


One final note: I’m not especially attached to OSF. There are alternatives such as Zenodo and figshare, but OSF has the most full-featured set of services and the most well-thought-out policies.

Photo by Florian Pérennès

Minimum Expectations for Open Data in Research

Open data allows people to independently check a paper’s analysis or perform an altogether new analysis. It’s also a way of allowing future work to perform meta-analyses and ask questions that may not have been asked in the original paper. Therefore, it’s important to make experiment data public, complete, and accessible for it to be useful to others.

But many missteps can happen that reduce the value of open data. These tips should help ensure that your data is indeed open, useful, and accessible.

Continue reading

Posted in Science.

Sharing Data and Materials for Anonymous Submission

Sharing experiment data and materials is a key component of open science and is becoming increasingly common (Kidwell et al. 2016).  But some in Visualization and HCI have expressed concern that this practice may not be compatible with anonymous submissions. Not true! Open data and open materials can easily be shared anonymously.
Continue reading


Guide to user performance evaluation at InfoVis 2017

Previous years: 2013, 2014, 2015, 2016

The goal of this guide is to highlight vis papers that demonstrate evidence of a user performance benefit. I used two criteria:

  1. The paper includes an experiment measuring user performance (e.g. accuracy or speed)
  2. Analysis of statistical differences determined whether results were reliable.

I did not discriminate beyond those two criteria. However, I am using a gold star to highlight one property that only a few papers have: a generalizable explanation for why the results occurred. You can read more about explanatory hypotheses here.
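The two criteria, plus the gold star, could be expressed as a simple filter. A sketch, with hypothetical field names standing in for judgments made while reading each paper:

```python
# Hypothetical boolean fields standing in for the two inclusion
# criteria and the gold-star property described above.
def include(paper):
    return (paper.get("measures_user_performance", False)
            and paper.get("tests_statistical_differences", False))

def gold_star(paper):
    # Only a few papers: a generalizable explanation for why the
    # results occurred, on top of meeting both criteria.
    return include(paper) and paper.get("explanatory_hypothesis", False)
```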

Continue reading
