reVISit: Scalable Empirical Evaluation of Interactive Visualizations
The reVISit project addresses a critical bottleneck in visualization research: how can we empirically evaluate visualization techniques more rigorously and efficiently? The reVISit infrastructure aims to democratize the evaluation of interactive visualization techniques, an area that has been under-explored due in part to the high technical burden and skills required to create complex online experiments.
The key innovations of this project are:
(1) Software infrastructure for flexible study creation and instrumented data collection, including interaction provenance, insights, and rationales, compatible with online crowdsourced study contexts.
(2) Software infrastructure to wrangle the resulting data into formats compatible with off-the-shelf analysis tools, and to analyze these diverse data streams for piloting, quality control, and studying usage patterns, insights, rationales, and performance (see the sketch after this list).
These methods will allow visualization researchers to gather empirical evidence about the merits of different interactive visualization techniques. They will help researchers understand the types of insights that different techniques support and reveal the diverging analysis strategies users may take. Ultimately, these methods will enable a wider set of visualization researchers to run a much broader range of crowdsourced experiments than previously possible.
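To illustrate the data-wrangling step, here is a minimal sketch of flattening collected results (participant answers plus interaction provenance) into a tidy CSV that off-the-shelf analysis tools can consume. The `results.json` structure and field names below are illustrative assumptions, not reVISit's actual export format.

```ts
import { readFileSync, writeFileSync } from 'fs';

// Hypothetical per-task record: one answer plus an interaction provenance log.
interface ParticipantRecord {
  participantId: string;
  taskId: string;
  answer: string;
  events: { type: string; timestamp: number }[];
}

const records: ParticipantRecord[] = JSON.parse(
  readFileSync('results.json', 'utf8')
);

// One row per participant/task: the answer plus simple provenance summaries
// (event count, task duration) suitable for spreadsheet or R/Python analysis.
const header = 'participantId,taskId,answer,eventCount,durationMs';
const rows = records.map((r) => {
  const times = r.events.map((e) => e.timestamp);
  const durationMs =
    times.length > 0 ? Math.max(...times) - Math.min(...times) : 0;
  return [
    r.participantId,
    r.taskId,
    JSON.stringify(r.answer), // quote the free-text answer for CSV safety
    r.events.length,
    durationMs,
  ].join(',');
});

writeFileSync('results_tidy.csv', [header, ...rows].join('\n'));
```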
Demo
You can check out a few example projects on our demo page. All of the demos on this site are built from stimuli and examples that you can find in the GitHub repo.
Check out the getting started tutorial to learn how to build your own experiment.
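As a rough sense of what building an experiment involves, here is a hypothetical sketch of a small study configuration. The type and field names are illustrative assumptions and do not reflect reVISit's actual schema; the getting started tutorial documents the real format.

```ts
// Illustrative study configuration -- field names are assumptions, not reVISit's schema.
interface Task {
  id: string;
  stimulus: string; // path to an HTML/React stimulus, e.g. from the repo's examples
  instruction: string;
  response: 'likert' | 'text' | 'multiple-choice';
}

interface StudyConfig {
  title: string;
  description: string;
  tasks: Task[];
}

const exampleStudy: StudyConfig = {
  title: 'Scatterplot correlation study',
  description: 'Participants estimate correlation from a scatterplot stimulus.',
  tasks: [
    {
      id: 'scatter-correlation',
      stimulus: 'stimuli/scatterplot.html',
      instruction: 'Estimate the correlation between the two variables.',
      response: 'likert',
    },
  ],
};
```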
Paper
For a concise description of the project, check out the short paper.
Carolina Nobre, Alexander Lex, Lane Harrison
reVISit: Supporting Scalable Evaluation of Interactive Visualizations
IEEE Visualization Short Papers, to appear, 2023.
Frequently Asked Questions
Is reVISit ready for me to use?
Yes! We’re looking for early adopters. Things are still evolving, so don’t expect a rock-solid, stable framework yet, but we’d love to work with you to deploy your study!