"Variations Augment Real Iterative Outcomes Letting Information Transcend Exploration"

Variolite is a package for the Atom editor that allows programmers to locally version their code. The system was presented at CHI 2017, where the paper received a Best Paper Honorable Mention:

Mary Beth Kery, Amber Horvath and Brad A. Myers. “Variolite: Supporting Exploratory Programming by Data Scientists,” Proceedings CHI'2017: Human Factors in Computing Systems, Denver, CO, May 6-11, 2017. [pdf]

Abstract: "How do people ideate through code? Using semi-structured interviews and a survey, we studied data scientists who program, often with small scripts, to experiment with data. These studies show that data scientists frequently code new analysis ideas by building off of their code from a previous idea. They often rely on informal versioning interactions like copying code, keeping unused code, and commenting out code to repurpose older analysis code while attempting to keep those older analyses intact. Unlike conventional version control, these informal practices allow for fast versioning of any size code snippet, and quick comparisons by interchanging which versions are run. However, data scientists must maintain a strong mental map of their code in order to distinguish versions, leading to errors and confusion. We explore the needs for improving version control tools for exploratory tasks, and demonstrate a tool for lightweight local versioning, called Variolite, which programmers found usable and desirable in a preliminary usability study."
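The informal versioning practices the abstract describes — keeping unused code, commenting out code, and switching which version runs — can be illustrated with a small sketch. This is not Variolite code; it is a hypothetical analysis script showing the manual practice that Variolite's lightweight local versioning is designed to replace. All names (`analysis_v1`, `analysis_v2`, the sample data) are invented for illustration.

```python
# Hypothetical example of informal versioning in a data-analysis script.
# The programmer keeps an older analysis intact by commenting it out,
# and "switches versions" by interchanging which call is active.

data = [3, 1, 4, 1, 5, 9, 2, 6]

def analysis_v1(values):
    """Original idea: mean of all values."""
    return sum(values) / len(values)

def analysis_v2(values):
    """Newer idea, built off v1: mean of values above a threshold."""
    kept = [v for v in values if v > 2]
    return sum(kept) / len(kept)

# result = analysis_v1(data)   # older version kept "just in case"
result = analysis_v2(data)     # currently active version

print(result)
```

As the paper notes, this style allows fast versioning of any size snippet and quick comparisons, but it forces the programmer to keep a mental map of which commented-out fragments belong to which analysis.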


In our lab we are continuing to study how people program when doing exploratory work with data, and to design new kinds of code tools. Related publications so far:

Mary Beth Kery and Brad A. Myers. “Exploring Exploratory Programming,” Proceedings VL/HCC'2017: Visual Languages and Human Centered Computing, Raleigh, NC, 2017. [pdf]

Abstract: "In open-ended tasks where a program’s behavior cannot be specified in advance, exploratory programming is a key practice in which programmers actively experiment with different possibilities using code. Exploratory programming is highly relevant today to a variety of professional and end-user programmer domains, including prototyping, learning through play, digital art, and data science. However, prior research has largely lacked clarity on what exploratory programming is, and what behaviors are characteristic of this practice. Drawing on this data and prior literature, we provide an organized description of what exploratory programming has meant historically and a framework of four dimensions for studying exploratory programming tasks: (1) applications, (2) required code quality, (3) ease or difficulty of exploration, and (4) the exploratory process. This provides a basis for better analyzing tool support for exploratory programming."

More to come!