From a User Study to a Valid Claim: How to Test your Hypothesis and Avoid Common Pitfalls

Niels H.L.C. de Hoon, Elmar Eisemann, Anna Vilanova

Research output: Conference contribution (chapter in conference proceedings) › Scientific › peer-reviewed


The evaluation of visualization methods or designs often relies on user studies. Apart from the difficulties involved in designing the study itself, the mechanisms for drawing sound conclusions from it are often unclear. In this work, we review and summarize common statistical techniques for validating a claim in the scenarios typical of user studies in visualization, i.e., hypothesis testing. Usually, the number of participants is small and the mean and variance of the underlying distribution are unknown; we therefore focus on techniques that remain adequate under these limitations. Our aim in this paper is to clarify the goals and limitations of hypothesis testing from a user-study perspective relevant to the visualization community. We provide an overview of the most common mistakes made when testing a hypothesis that can lead to erroneous claims, and we present strategies to avoid them.
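To illustrate the setting the abstract describes (few participants, unknown mean and variance), the sketch below runs Welch's two-sample t-test by hand. This example is not taken from the paper: the data, group names, and function are hypothetical, and Welch's test is just one of the techniques applicable under these assumptions.

```python
# Minimal sketch (not from the paper): Welch's t-test, usable when two
# small samples have unknown and possibly unequal variances -- e.g.
# task-completion times (seconds) for two visualization designs.
# All data below are made up for illustration.
import math
from statistics import mean, variance

def welch_t(a, b):
    """Return Welch's t statistic and its approximate degrees of freedom."""
    ma, mb = mean(a), mean(b)
    va, vb = variance(a), variance(b)   # unbiased sample variances
    na, nb = len(a), len(b)
    se = math.sqrt(va / na + vb / nb)   # standard error of the mean difference
    t = (ma - mb) / se
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1)
    )
    return t, df

design_a = [12.1, 9.8, 11.5, 10.2, 13.0, 9.5, 10.8, 11.9]
design_b = [14.3, 13.1, 15.0, 12.8, 14.7, 13.9, 12.5, 15.2]

t, df = welch_t(design_a, design_b)
print(f"t = {t:.2f}, df = {df:.1f}")
# Compare |t| against the critical value of Student's t distribution
# with df degrees of freedom at the chosen significance level.
```

Because the t statistic here is strongly negative, design A's mean completion time is clearly lower in this fabricated sample; a real study would still need to check the test's assumptions (e.g. approximate normality) before making such a claim.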
Original language: English
Title of host publication: EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3)
Editors: Kai Lawonn, Noeska Smit, Douglas Cunningham
Publisher: The Eurographics Association
Number of pages: 4
ISBN (Electronic): 978-3-03868-041-3
Publication status: Published - 2017
Event: EuroRVVV17: 5th EuroRV³ Workshop - Perception in Visualization - Barcelona, Spain
Duration: 12 Jun 2017 - 13 Jun 2017
Conference number: 5




