reVISit v2.0: Making Your Studies Even More Powerful!
It's the start of a new year and we're excited to announce the release of reVISit v2.0 — just in time for your VIS 2025 submissions! We've been working hard to bring you a new and improved version of reVISit, and we can't wait for you to try it out.
There are a lot of new features in this release, so let's dive in and take a look at what's new:
Feature Highlights
Participant Replay
Ever wondered where your participants clicked while completing your study? With participant replays, you can now replay a participant's interactions during analysis. This lets you see how participants worked through your study, whether to discover issues in a pilot or to analyze interaction behavior itself.
Check out the demo and the documentation. To enable replay, your study stimulus has to track provenance.
Vega and Vega-Lite Support
We've added support for Vega and Vega-Lite visualizations as a component type, so you can now include these visualizations in your studies and use the rest of the reVISit platform around them. Interactions with Vega visualizations are tracked as well, so you can inspect and analyze how participants used your stimulus with the participant replay described above.
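As a rough sketch, a Vega-Lite component might look like the following, written here as a Python dict mirroring the JSON study spec. The field names (`type: "vega"`, `config`) are assumptions based on reVISit's component-type pattern, so consult the documentation for the exact schema; the embedded Vega-Lite spec itself is standard.

```python
# Hypothetical sketch of a Vega-Lite component in a reVISit study config,
# written as a Python dict that mirrors the JSON spec. The "vega" type name
# and "config" field are assumptions; see the documentation for the schema.
scatter_component = {
    "type": "vega",  # assumed component-type name
    "config": {      # an inline Vega-Lite spec
        "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
        "data": {"url": "data/cars.json"},
        "mark": "point",
        "encoding": {
            "x": {"field": "Horsepower", "type": "quantitative"},
            "y": {"field": "Miles_per_Gallon", "type": "quantitative"},
        },
    },
    "response": [],  # responses attach to the stimulus as with other components
}
```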
Check out the demo, an example of a replay, and the documentation.
Recording Participant Audio
We've added support for recording participant audio, enabling you to run think-aloud studies, even in crowdsourced settings. This is a great way to gain insight into participants' thought processes and decision-making strategies, and it is our latest step toward supporting qualitative research in reVISit. Audio recordings are automatically transcribed and included in your regular data download. You can even listen to the audio while watching your participants' interactions play out.
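Turning this on should be a study-level switch. As a minimal sketch, written as a Python dict mirroring the JSON spec and assuming a flag named `recordStudyAudio` in the uiConfig (check the documentation for the exact name):

```python
# Minimal sketch: enabling audio recording in the study-level uiConfig.
# The flag name "recordStudyAudio" is an assumption; consult the
# documentation for the exact schema.
ui_config = {
    "recordStudyAudio": True,
    # ...the rest of your uiConfig (contact email, logo path, etc.)
}
```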
Check out the demo and the documentation.
Libraries
Should we test for color blindness? What are our participants' visualization literacy scores? How do participants rate the aesthetics of a visualization? We commonly ask these and similar questions, and often we use existing, validated questionnaires or methodologies to answer them. Re-implementing such components is time-consuming and error-prone. To address this, we've added support for libraries, so you can leverage prebuilt study components when creating your own studies.
Libraries save time and effort, as you can reuse components that others have already built. You can also share your own components with the community by creating a library and submitting a pull request.
At launch, we provide nine libraries, ranging from simple checks to questionnaires to visualization literacy tests. Check out the demos and the documentation.
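As an illustration, pulling a library component into your study might look roughly like this, written as a Python dict mirroring the JSON spec. The library name "demographics" and the `$library.co.component` reference syntax are assumptions modeled on the libraries documentation, so check there for the real import mechanism.

```python
# Hypothetical sketch of importing a library component into a study config,
# written as a Python dict mirroring the JSON spec. The library name and the
# "$demographics.co.questionnaire" reference syntax are assumptions.
study_config = {
    "importedLibraries": ["demographics"],     # declare the library once
    "components": {},                          # your own components go here
    "sequence": {
        "order": "fixed",
        "components": [
            "$demographics.co.questionnaire",  # assumed reference syntax
            "my-own-task",
        ],
    },
}
```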
Python Bindings
Writing JSON can be difficult. Who wants to deal with a 20,000-line study config? To make it easier to create large and complex studies, we've implemented reVISitPy, Python bindings for the reVISit spec that let you build study configurations programmatically. Here's a basic example of how that works.
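The sketch below is illustrative rather than authoritative: the function names follow the reVISitPy documentation, but treat the exact signatures as assumptions and consult the docs for details.

```python
# Illustrative reVISitPy sketch: build a tiny two-component study in Python
# instead of hand-writing JSON. Exact signatures are assumptions based on
# the reVISitPy documentation.
import revisitpy as rvt

intro = rvt.component(
    component_name__="introduction",
    type="markdown",
    path="assets/introduction.md",
)
task = rvt.component(
    component_name__="scatter-task",
    type="image",
    path="assets/scatter.png",
    response=[rvt.response(
        id="correlation",
        type="numerical",
        prompt="How strongly are x and y correlated (0-1)?",
        required=True,
    )],
)

study = rvt.studyConfig(
    # pin the schema to the release you use
    schema="https://raw.githubusercontent.com/revisit-studies/study/v2.0.0/src/parser/StudyConfigSchema.json",
    studyMetadata=rvt.studyMetadata(
        title="Correlation Study", version="1.0", authors=["Your Name"],
        date="2025-01-15", description="Judging correlation in scatterplots",
        organizations=["Your Lab"],
    ),
    uiConfig=rvt.uiConfig(
        contactEmail="you@example.org", logoPath="assets/logo.svg",
        sidebar=True, withProgressBar=True,
    ),
    sequence=rvt.sequence(order="fixed", components=[intro, task]),
)

print(study)  # serializes the study to its JSON config
```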
There are many promising things you can do with reVISitPy:
First, we implemented a widget that lets you run the study you created directly inside your Jupyter notebook, giving you a fast way to inspect experiments, randomization, and more.
Next, you can click through the study and download the data you generated straight into your notebook, so you can immediately check whether you're actually collecting all the data you need, and even pilot your analysis!
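Here's a hedged sketch of that loop; the helper names (`rvt.widget` and the data accessor) are assumptions modeled on the reVISitPy examples, not a confirmed API.

```python
# Hedged sketch of the notebook loop. rvt.widget and w.get_data() are
# assumed names modeled on the reVISitPy examples; check the documentation
# for the actual widget and data-download API.
w = rvt.widget(study, server=True)  # serve and render the study in Jupyter
w                                   # click through it as a participant would

df = w.get_data()  # hypothetical accessor: pull the responses you just
                   # generated into a DataFrame
df.head()          # verify that every response you need was recorded
```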
Finally, the Python bindings let you design arbitrarily complex studies from permutations. What does that mean? Say you have a stimulus, such as a scatterplot, that can render any compatible dataset, and you want to test how well participants judge correlation. You might want to feed in hundreds of automatically generated datasets. In the reVISit JSON spec this would be very painful, because you would have to create a component for every single stimulus; in reVISitPy, it's just a few lines of code, as sketched below. And of course, this isn't limited to passing data into a component: you can permute over tasks, visual stimuli, or even the phrasing of your questions.
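In this sketch, the loop is plain Python; `make_scatter_spec` is a hypothetical helper (not part of reVISitPy) that samples n points with target correlation r and returns a Vega-Lite spec, and the `rvt.component` call follows the assumed API from the earlier sketch.

```python
# Sketch: one component per auto-generated dataset. make_scatter_spec is a
# hypothetical dataset-generating helper; the rvt.component call follows the
# assumed reVISitPy API from the earlier sketch.
import itertools

correlations = [round(0.1 * i, 1) for i in range(1, 10)]  # r = 0.1 ... 0.9
sizes = [50, 100, 500]                                    # points per plot

components = []
for r, n in itertools.product(correlations, sizes):
    components.append(rvt.component(
        component_name__=f"scatter-r{r}-n{n}",
        type="vega",                      # assumed Vega component type
        config=make_scatter_spec(r, n),   # hypothetical helper, see above
        response=[rvt.response(id="judged-r", type="numerical",
                               prompt="How correlated are x and y (0-1)?",
                               required=True)],
    ))

sequence = rvt.sequence(order="random", components=components)  # 27 trials
```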
Check out the documentation, the examples, and the reVISitPy repository to learn more about how to use it.
Other Features / Changes
- Improved User Interface: We've redesigned the user interface with a cleaner, more modern look, making it more intuitive and pleasant to use.
- Forms have gotten some attention. See the demo and the documentation:
  - We introduced new matrix form elements, which are useful when you want to ask, e.g., Likert questions about many different items (see the sketch after this list).
  - Forms look nicer after an aesthetics overhaul.
  - We introduced dividers for splitting forms into sections.
  - You can allow "I don't know" as an option for most form elements.
  - You can allow "Other" options for checkboxes and radio buttons.
  - You can choose horizontal or vertical layouts for checkboxes and radio buttons.
- You can now design training tasks in which participants can validate their answers.
- Data export has improved and now includes, for example, participant numbers and clean time (time on task, minus the time the browser tab was not active).
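For the matrix form elements mentioned above, a question might look roughly like the following sketch, again written as a Python dict mirroring the JSON response spec. The type name and option fields are assumptions, so check the forms documentation for the exact schema.

```python
# Hypothetical sketch of a matrix (Likert-grid) response. The type name
# "matrix-radio" and the answerOptions/questionOptions fields are
# assumptions; see the forms documentation for the exact schema.
aesthetics_question = {
    "id": "aesthetics",
    "type": "matrix-radio",
    "prompt": "Rate how aesthetically pleasing each chart is.",
    "answerOptions": ["Very low", "Low", "Neutral", "High", "Very high"],
    "questionOptions": ["Bar chart", "Scatterplot", "Parallel coordinates"],
    "required": True,
}
```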
These new features represent several months of work from the reVISit team, and we’re excited to share them with the community. We’re aiming to make reVISit more versatile, powerful, and easy to use. As always, we welcome your feedback and ideas for how we can support new directions for research in visualization and interactive systems. The best way to get in touch is to join our Slack Team!
We’re also ready to go on the road and meet you at your institution, or offer a virtual workshop! We recently visited Georgia Tech’s GVU center to give a hands-on overview of reVISit. Catch our upcoming workshop at CHI in Japan. Please reach out if you’re interested in learning more about reVISit or potentially hosting a workshop.