
ReVISit v2.0: Making Your Studies Even More Powerful!

6 min read
The ReVISit Team

It's the start of a new year and we're excited to announce the release of reVISit v2.0 — just in time for your VIS 2025 submissions! We've been working hard to bring you a new and improved version of reVISit, and we can't wait for you to try it out.

There are a lot of new features in this release, so let's dive in and take a look at what's new:

Feature Highlights

Participant Replay

Ever wondered where your participants clicked while completing your study? We've added participant replay, so you can now watch participants' interactions play back during analysis. This lets you see exactly how participants interact with your study, either to discover issues in a pilot or to analyze interaction behavior itself.

Check out the demo and the documentation. To enable replay, your study stimulus has to track provenance.

Vega and Vega-Lite Support

We've added Vega and Vega-Lite visualizations as a component type, so you can now include these visualizations in your studies and leverage the power of the reVISit platform. We also support tracking interactions in Vega visualizations, so you can inspect and analyze how participants used your stimulus with the aforementioned participant replay.
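
For example, a Vega-Lite stimulus can be declared directly in the study configuration. The sketch below is illustrative only: the component type name and the field holding the spec (we assume `config` here) should be verified against the documentation.

```json
{
  "scatter-stimulus": {
    "type": "vega",
    "config": {
      "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
      "data": { "url": "data/cars.json" },
      "mark": "point",
      "encoding": {
        "x": { "field": "Horsepower", "type": "quantitative" },
        "y": { "field": "Miles_per_Gallon", "type": "quantitative" }
      }
    }
  }
}
```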

Check out the demo, an example of a replay, and the documentation.

Recording Participant Audio

We've added support for recording participant audio, enabling you to run think-aloud studies—even in crowdsourced settings. This is a great way to gain insight into participants' thought processes and decision-making strategies and represents our latest effort to support qualitative research in reVISit. Audio recordings are automatically transcribed and part of your regular data download. You can even listen to the audio while watching the interactions of your participants play out.
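
Enabling this should only take a flag in your study's uiConfig. We believe the flag is along the lines of `recordStudyAudio`, but verify the exact name in the documentation:

```json
{
  "uiConfig": {
    "recordStudyAudio": true
  }
}
```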

Check out the demo and the documentation.

Libraries

Should we test for color blindness? What are our participants’ visualization literacy scores? How do participants rate the aesthetics of a visualization? We commonly ask these and similar questions, and often we use existing, validated forms or methodologies to answer them. Re-implementing such components is time-consuming and error-prone. To address this, we've added support for libraries, so you can leverage prebuilt study components to create your own studies.

Libraries can save time and effort when creating studies, as you can reuse components that have already been created by others. You can also share your own components with the community by creating your own library and submitting a pull request.

At launch, we have implemented nine libraries, ranging from simple checks, to questionnaires, to visualization literacy tests. Check out the demos and the documentation.
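
Using a library amounts to importing it in your config and referencing its components or sequences. The sketch below uses a hypothetical `demographics` library, and the `$library.components.name` reference syntax is our reading of the docs, so check the library documentation for the exact convention.

```json
{
  "importedLibraries": ["demographics"],
  "sequence": {
    "order": "fixed",
    "components": [
      "introduction",
      "$demographics.components.basic-demographics",
      "post-survey"
    ]
  }
}
```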

Python Bindings

Writing JSON by hand can be difficult. Who wants to deal with a 20,000-line study config? To make it easier to create large and complex studies, we've implemented Python bindings for the reVISit spec, reVISitPy, which allow you to interact with reVISit programmatically. Here’s a basic example of how that works:
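
The following is a minimal sketch of the idea. The constructor names (`studyMetadata`, `component`, `sequence`, `studyConfig`) mirror the JSON spec, but their exact signatures are our reading of the reVISitPy docs and should be double-checked there.

```python
# Minimal sketch of building a reVISit study programmatically.
# Constructor names and signatures are our reading of the
# reVISitPy docs; verify them against the documentation.
import revisitpy as rvt

metadata = rvt.studyMetadata(
    title='Correlation Judgment Study',
    description='How well do people judge correlation in scatterplots?',
    authors=['The ReVISit Team'],
    organizations=['Your Lab'],
    version='1.0.0',
    date='2025-01-15',
)

intro = rvt.component(
    component_name__='introduction',
    type='markdown',
    path='assets/introduction.md',
)

study = rvt.studyConfig(
    schema='https://raw.githubusercontent.com/revisit-studies/study/v2.0.0/src/parser/StudyConfigSchema.json',
    studyMetadata=metadata,
    uiConfig=rvt.uiConfig(contactEmail='contact@revisit.dev'),
    sequence=rvt.sequence(order='fixed', components=[intro]),
)

print(study)  # serializes to the reVISit JSON spec
```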

There are many promising things you can do with reVISitPy:

First, we implemented a widget that lets you run the study you created from inside a Jupyter notebook, giving you a fast way to inspect experiments, randomization, and so on.
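
Previewing the study from a notebook cell might then look like this; the `widget` call is an assumption based on the description above, so check the reVISitPy docs for the actual invocation:

```python
# Hypothetical: render the compiled study in a Jupyter widget
# to inspect sequencing and randomization interactively.
w = rvt.widget(study)
w
```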

Next, you can run through the study and download the data you generated straight into your notebook, so that you can immediately see whether you’re actually collecting all the data you need, and even pilot your analysis!

Finally, the Python bindings also let you design arbitrarily complex studies from permutations. What does that mean? Say you have a stimulus, such as a scatterplot, that can render any compatible dataset, and you want to test how good participants are at judging correlations. You might want to feed in hundreds of different datasets that you’re generating automatically. Doing this in the reVISit JSON spec would be very painful, because you would have to create a component for every single stimulus. In reVISitPy, it’s just a few lines of code. And of course, this isn’t limited to passing data into a component: you can permute over tasks, visual stimuli, or even the phrasings of your questions.
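
For example, a correlation-judgment study over hundreds of generated datasets might look roughly like the sketch below. The data generator and the parameter names are hypothetical stand-ins for your own code, and the `rvt.*` calls again assume the API sketched earlier:

```python
import numpy as np

# Hypothetical generator: points with a target correlation.
def generate_dataset(correlation, n=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    y = correlation * x + np.sqrt(1 - correlation**2) * rng.normal(size=n)
    return [{'x': float(a), 'y': float(b)} for a, b in zip(x, y)]

# One component per generated dataset; in raw JSON this would mean
# hand-writing hundreds of component definitions.
components = [
    rvt.component(
        component_name__=f'scatter-{i}',
        type='react-component',
        path='src/Scatterplot.tsx',
        parameters={'data': generate_dataset(r, seed=i),
                    'trueCorrelation': round(float(r), 2)},
    )
    for i, r in enumerate(np.linspace(0.1, 0.9, 200))
]

# Each participant sees a random subset of 30 stimuli.
experiment = rvt.sequence(order='random', components=components,
                          numSamples=30)
```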

Check out the documentation, the examples, and the reVISitPy repository to learn more about how to use it.

Other Features / Changes

  • Improved User Interface: We've redesigned the user interface to make it more intuitive and user-friendly. You'll find it even more pleasant to use, with a cleaner and more modern look.
  • Forms have gotten some attention. See the demo and the documentation:
    • We introduced new matrix form elements, which are useful if you want to ask, for example, Likert questions about many different items (see the sketch after this list).
    • Forms look nicer after an aesthetics overhaul.
    • We introduced dividers to section forms.
    • You can allow “I don’t know” as an option for most form elements.
    • You can allow “Other” options for checkboxes and radios.
    • You can choose horizontal and vertical layouts for checkboxes and radios.
  • You can now design trainings where participants can validate their answers.
  • Data export has improved, including things like participant numbers, clean time (time on task minus time during which the browser tab was not active), and more.
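
As an illustration of the matrix elements mentioned above, a matrix question block might be declared as below. The `matrix-radio` type and the `answerOptions` preset follow our reading of the docs; double-check the exact names there.

```json
{
  "response": [
    {
      "id": "chart-ratings",
      "type": "matrix-radio",
      "prompt": "Please rate your agreement with each statement.",
      "answerOptions": "likely5",
      "questionOptions": [
        "The chart was easy to read.",
        "The chart was visually appealing.",
        "I would use this chart again."
      ],
      "required": true
    }
  ]
}
```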

These new features represent several months of work from the reVISit team, and we’re excited to share them with the community. We’re aiming to make reVISit more versatile, powerful, and easy to use. As always, we welcome your feedback and ideas for how we can support new directions for research in visualization and interactive systems. The best way to get in touch is to join our Slack Team!

We’re also ready to go on the road and meet you at your institution, or offer a virtual workshop! We recently visited Georgia Tech’s GVU center to give a hands-on overview of reVISit. Catch our upcoming workshop at CHI in Japan. Please reach out if you’re interested in learning more about reVISit or potentially hosting a workshop.

ReVISit v1.0: Taking Control of Your Online Studies!

7 min read
The ReVISit Team

[Figure: Diagram of the reVISit workflow. The study specification and components are used to compile an interactive, web-based study. As participants complete the study, data is stored in Firebase and can be downloaded as tabular or JSON files for subsequent analysis in analytics tools.]

You might have heard of reVISit before from our paper, or you might have seen a talk or participated in a meetup. But as of today, with our 1.0 release, we’re excited to give the community the chance to run their own studies with reVISit – and CHI is just around the corner!

What is reVISit?

ReVISit is a software framework that enables you to assemble experimental stimuli and survey questions into an online user study. One of the biggest time-saving features of reVISit is a JSON grammar, the reVISit Spec, used to describe the setup of your study. Stimuli are contained in components and can be markdown, images, web pages, React components, or survey questions. The figure at the top shows the relationship between the reVISit Spec and the components, and how they are compiled into a study.

Due to the different types of components, you can use reVISit for a diverse set of studies, spanning simple surveys, image-based perceptual experiments, and experiments evaluating complex interactive visualizations.

ReVISit is designed to accommodate sophisticated stimuli and study designs. Suppose you want to replicate the seminal Cleveland and McGill study. With reVISit you could implement a React-based set of visualizations (a bar chart, a stacked bar chart, a pie chart) and then pass parameters, such as the data and the markers that highlight specific marks, via the study configuration.
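
In the spec, such a parameterized stimulus might look like the sketch below; the component path, parameter names, and the response definition are placeholders for illustration:

```json
{
  "bar-chart-trial": {
    "type": "react-component",
    "path": "src/BarChart.tsx",
    "parameters": {
      "data": [3, 8, 5, 9, 2],
      "highlightedMarks": [1, 3]
    },
    "response": [
      {
        "id": "ratio-estimate",
        "type": "numerical",
        "prompt": "What percentage is the smaller marked bar of the larger one?",
        "location": "belowStimulus"
      }
    ]
  }
}
```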

Similarly, the reVISit Spec enables designers to create controlled sequences that define the order in which participants see stimuli. ReVISit supports fixed, random, and Latin square designs that can be nested at various levels. For example, the overall study sequence (intro, training, experiment, survey) could be fixed. In the experiment arm, two conditions could follow a Latin square design. Within each condition, the experiment could randomly draw a small number of stimuli from a large pool while interspersing attention checks at random points and adding breaks.
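
In the spec, that design translates into nested sequence blocks, roughly as in the sketch below. Component names are placeholders; `numSamples` draws a subset from a larger pool:

```json
{
  "sequence": {
    "order": "fixed",
    "components": [
      "intro",
      "training",
      {
        "order": "latinSquare",
        "components": [
          {
            "order": "random",
            "components": ["cond-a-stim-1", "cond-a-stim-2", "cond-a-stim-3"],
            "numSamples": 2
          },
          {
            "order": "random",
            "components": ["cond-b-stim-1", "cond-b-stim-2", "cond-b-stim-3"],
            "numSamples": 2
          }
        ]
      },
      "survey"
    ]
  }
}
```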

Assembling and Deploying your Study

The components and your study configuration are then used to assemble a web-based study. You can first look at your study in your local browser and, when you want to share it, deploy it to the web server of your choice. We recommend and document deploying to GitHub Pages, but any web server you have access to will do.

You can then use the online version to direct participants to your study. You can use crowdsourcing platforms such as Prolific, Mechanical Turk, or LabintheWild, or you can simply send a link to participants you have recruited in other ways.

Data Collection

A typical study will have response fields, such as a text field or a slider, for participants to provide their answers. Such form-based responses are tracked by reVISit by default and can be downloaded in either JSON or a tidy tabular format. You can also provide response data from interactive stimuli. For example, if a task is to click on a specific bar in a bar chart, you can log which bars were clicked. ReVISit additionally tracks a diverse set of browser window events, such as mouse moves, clicks, scrolls, and resizes, which are time-stamped and can hence be used for basic log file analysis.
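
For the bar-click example, the stimulus itself can report the answer. We believe this is done with a `reactive` response type, where the React component sets the answer programmatically; verify the type name in the documentation:

```json
{
  "response": [
    {
      "id": "clicked-bars",
      "type": "reactive",
      "prompt": "Click the bar with the largest value."
    }
  ]
}
```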

ReVISit also supports advanced provenance tracking based on trrack, a provenance tracking library developed in our lab. If you instrument your study stimuli with trrack, you can recreate every state of your interface for every single participant! This can be incredibly useful for understanding the nuances of user behavior, as well as for debugging your stimuli by exploring what went wrong in a particular session. In a future release, reVISit will also allow you to dynamically browse these events and fully “re-hydrate” all participants’ experiments.

Data Storage

ReVISit is implemented as a (mostly) serverless application, meaning that you don’t have to run, secure, and maintain a server to use reVISit. The only exception is data storage, as the data of online participants obviously has to be stored somewhere.

If you’re running a local study, you can get away without this – you can just download the data from your browser after a study is complete. For online studies, we use Google Firebase to store data.

Currently, setting up Firebase for a reVISit study might be the most challenging part of working with reVISit. On the plus side, Firebase is a tried-and-true system where you have full control over your data. You even have options to choose the locale of your server so that you are compliant with your country's regulations on data storage.

Data Analysis

ReVISit is not meant to replace your usual data analysis approaches. Instead, it aims to make it easy to export data in the formats you might use in R, Python, or your analysis platform of choice.

ReVISit, however, does provide a basic analytics interface that is most useful for monitoring the progress of your study. You can also use reVISit to identify and reject participants who didn’t properly complete the study, which is especially useful if you want to ensure that you have balanced numbers of participants across your Latin square design.

What are the Benefits of Using reVISit?

So, why would you use reVISit over other approaches to running your study, such as Qualtrics, Survey Monkey, or even a custom experiment interface?

First, reVISit is open source, with all the benefits of open source software: it’s free, you can extend it, and you can contribute to improving it.

Second, because you fork reVISit for your own study and store your data in your own Firebase, you have full control over your study and your data. Once you have forked the study, it will remain accessible and unchanged for as long as you like.

Third, reVISit has dedicated modes for quickly navigating your study, and you can also turn off data collection. This is great both for developing your study and for sharing it with reviewers and readers of your research. Readers can see exactly what your participants saw, and hence may trust your study more. They could also fork your study and run a replication with minimal effort! You can check out an example study and the associated results.

I’m Intrigued, but Can I Trust It for My Experiment?

ReVISit is new, and we know that it’s risky to bet on a new project if you don’t know whether it actually works or whether it will be maintained down the line. But we hope we can convince you to trust us!

First, we currently have multiple years of funding to continue developing reVISit. We’ve also run several successful studies ourselves, such as a study on guardrails against misinformation. Finally, we are committed to helping you out if you run into issues! Join our Slack team to get low-friction help, or write to us at contact@revisit.dev. We’re also happy to set up a meeting to answer any questions you may have, for example, to talk through whether reVISit will work for your study design.

How Can I Learn More or Get Involved?

We’re grateful to all the community members who have shared their study needs and helped to make reVISit 1.0 a reality, and we’re looking forward to bringing the community exciting new features in the coming year. Future releases will include better debugging tools through study rehydration, a way to capture and code think-aloud data, and improved analysis capabilities. Depending on community feedback, we’re also interested in branching out to unconventional display devices (phones, AR/VR, etc.).

To take your first steps with reVISit, check out our getting started guide for instructions on how to install our software and build a study.

Finally, if you are missing a feature or find a bug, let us know! Since reVISit is completely open source, you could even submit a pull request!

Acknowledgements

We are very grateful to everyone who helped make reVISit a reality, including our wonderful community advisory board and the National Science Foundation for generous funding.