The Drawing Apprentice


Project Overview


 

Collaboration is known to push creative boundaries and help individuals sustain creative engagement, explore a more diverse conceptual space, and synthesize new ideas. While the benefits of human collaboration may seem obvious, the cognitive mechanisms and processes involved in open-ended improvisational collaboration are active areas of research. Our research group has developed a co-creative drawing partner called the Drawing Apprentice to investigate creative collaboration in the domain of abstract drawing. The Drawing Apprentice draws with users in real time by analyzing their input lines and responding with lines of its own. With this prototype, we study the interaction dynamics of artistic collaboration and explore how a co-creative agent might be designed to collaborate effectively with both novices and expert artists. The prototype serves as a technical probe for investigating new human-computer interaction concepts in this emerging domain of human-computer collaboration, such as methods of feedback that facilitate learning and coordination (for both the user and the system), turn-taking patterns, and the roles that control and ambiguity play in effective collaboration.
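To make that real-time loop concrete, here is a minimal sketch in the prototype's own stack (HTML5 canvas and JavaScript). The element id and helper functions are illustrative placeholders rather than the Drawing Apprentice's actual code, and the "response" here is just a naive mirrored line standing in for the real analysis.

    // Minimal sketch of the collaborative loop: record a user stroke,
    // "analyze" it, and reply with a line of the agent's own.
    // All names here are illustrative, not the system's real API.
    const canvas = document.getElementById('canvas'); // assumes <canvas id="canvas">
    const ctx = canvas.getContext('2d');
    let stroke = [];

    canvas.addEventListener('pointerdown', () => { stroke = []; });
    canvas.addEventListener('pointermove', (e) => {
      if (e.buttons === 1) stroke.push({ x: e.offsetX, y: e.offsetY });
    });
    canvas.addEventListener('pointerup', () => {
      drawLine(generateResponse(stroke)); // agent answers the input line
    });

    // One naive "analysis and response": mirror the stroke across the canvas.
    function generateResponse(points) {
      return points.map((p) => ({ x: canvas.width - p.x, y: p.y }));
    }

    function drawLine(points) {
      if (points.length < 2) return;
      ctx.beginPath();
      ctx.moveTo(points[0].x, points[0].y);
      points.slice(1).forEach((p) => ctx.lineTo(p.x, p.y));
      ctx.stroke();
    }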

My Role

UI Design, Visual Design, User Research, Prototyping, Usability Testing

 
[Figure: Design roadmap]
 

Collaborators

Brian Magerko (Advisor), Nicholas Davis, Chih-Pin Hsiao, Kunwar Yashraj Singh, Rapha Gontijo Lopes

 
 

How it works

[Figure: Drawing Apprentice software flow]

 
 
 

Research Areas

My research on the Drawing Apprentice's UI and interaction design has two areas of focus: the creative agent's perspective and the user's perspective. From the agent's perspective, the project identifies the interface and interaction designs that most effectively reduce the AI's learning time in an open-ended, real-time scenario. From the user's perspective, the project explores which interaction and UI designs best increase users' perceived quality of collaboration, affinity for the AI, and fun.

Precedents

Analyze and study existing drawing apps and tools such as Draw Something, Sketchpad, Sketchbook Pro, Tayasui Sketches, Photoshop/Illustrator, and Google's Deep Dream.

Market Analysis

Identify the existing product landscape for drawing apps and games; identify a potential market space and user group for the Drawing Apprentice.

Evaluation Methods

Identify the optimal evaluation methods for studying the interface, including but not limited to: personas, retrospective studies, cognitive walkthroughs, game mechanics analysis, A/B testing, interviews, and questionnaires.

Expert Review

Interview Georgia Tech faculty who specialize in game research methods about the study's interview methods.

Interaction Design

Explore and identify appropriate animations for UI elements and creative-agent behaviors to provide visual feedback to the user as efficiently as possible.
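As a flavor of what this animation work can look like in code, here is a small sketch that flashes a confirmation badge when the agent registers a vote, using the standard Web Animations API. The element id is a placeholder I chose for illustration, not the actual interface markup.

    // Sketch: flash a brief confirmation animation when the agent registers
    // a user's vote, using the standard Web Animations API.
    // The element id is a placeholder, not the real interface markup.
    function confirmVote() {
      const badge = document.getElementById('vote-confirmation');
      badge.animate(
        [
          { opacity: 0, transform: 'scale(0.8)' },
          { opacity: 1, transform: 'scale(1)' },
          { opacity: 0, transform: 'scale(1)' }
        ],
        { duration: 600, easing: 'ease-out' }
      );
    }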

UI Design

Design and evaluate UI components in iterative cycles, including the interaction design for feedback systems (binary or continuous), the aesthetics of the menu panel, and the visual design for controlling the agent's creativity level.

AI System Feedback Mechanism

Evaluate feedback mechanisms that are easy to use and fun to interact with, so that users provide enough data to teach the system. Measuring both user experience and computational efficiency is important at this stage. I will also explore alternative types of feedback input, such as biophysical feedback, eye tracking, and motion detection.
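As one illustration of the kind of mechanism under evaluation, a binary vote could nudge how strongly the agent favors the behavior that produced its last line. The behaviors and the update rule below are assumptions for illustration, not the system's actual learning model.

    // Sketch: binary votes nudge the agent's behavior weights.
    // The behavior names and update rule are illustrative assumptions.
    const behaviorWeights = { mimic: 1.0, transform: 1.0, newIdea: 1.0 };
    let lastBehavior = 'mimic'; // behavior behind the agent's last line

    function onVote(isUpvote) {
      const delta = isUpvote ? 0.1 : -0.1;
      behaviorWeights[lastBehavior] =
        Math.max(0.1, behaviorWeights[lastBehavior] + delta);
    }

    // Sample the agent's next behavior in proportion to its weight.
    function pickBehavior() {
      const entries = Object.entries(behaviorWeights);
      const total = entries.reduce((sum, [, w]) => sum + w, 0);
      let r = Math.random() * total;
      for (const [name, w] of entries) {
        if ((r -= w) <= 0) return name;
      }
      return entries[entries.length - 1][0];
    }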

 
 

Related Work

I explored different design alternatives for game mechanics and interaction design that use human computation to tackle a hard AI problem: creativity. I researched the general mechanics and interaction principles of creativity-support tools such as Tayasui Sketches, Fresh Paint, Paper by FiftyThree, and Draw Something.

[Figure: Related creativity-support software]

 
 

1st Prototype

The first prototype of the Drawing Apprentice paired a conventional drawing toolkit (color palette, brush strokes, opacity) with binary "vote-up" and "vote-down" feedback buttons. I built this prototype using HTML, CSS, and JavaScript.
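A stripped-down sketch of that feedback panel, in the same HTML/JavaScript stack, might look like the following. The element ids and the logging handler are placeholders, not the prototype's real markup.

    <!-- Skeleton of the binary feedback panel from the first prototype.
         Element ids and the logging handler are illustrative placeholders. -->
    <button id="vote-up">Vote up</button>
    <button id="vote-down">Vote down</button>
    <script>
      // Each click sends one binary training signal about the agent's
      // most recent line (logged here; the real system updates its model).
      function onVote(isUpvote) {
        console.log(isUpvote ? 'upvote' : 'downvote');
      }
      document.getElementById('vote-up')
        .addEventListener('click', function () { onVote(true); });
      document.getElementById('vote-down')
        .addEventListener('click', function () { onVote(false); });
    </script>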

 
 

1st Prototype Usability Testing

From initial usability testing with Prototype 1, the team identified several pain points:

1. The "global", "regional", and "local" buttons are confusing to users.

2. The "Grouping" button is confusing to users.

3. Some of the UI widgets are only placeholders, which also confuses users.

As a result, I scrapped the initial interface I designed; after all, the design process is iterative: design, user study, iterate, better design. I created a new series of interface mockups, taking into consideration the AI component and other game mechanics such as turn taking. We also added new features (a creativity knob for the AI) and eliminated the ones users found confusing (the "Mode" buttons).
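Turn taking is simple to prototype: the agent waits until the user's pen has been idle for a moment before contributing. Below is a minimal sketch, reusing the hypothetical helpers from the earlier loop sketch and an arbitrarily chosen pause length.

    // Minimal turn-taking: the agent replies only after the user has been
    // idle briefly, so the two collaborators never draw at the same time.
    // generateResponse, drawLine, and the canvas context come from the
    // hypothetical collaborative-loop sketch above.
    const TURN_PAUSE_MS = 800; // arbitrary pause length, for illustration
    let turnTimer = null;

    function onUserStrokeEnd(stroke) {
      clearTimeout(turnTimer); // user kept drawing, so wait again
      turnTimer = setTimeout(function () {
        drawLine(generateResponse(stroke)); // agent takes its turn
      }, TURN_PAUSE_MS);
    }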

 
 

2nd Prototype

Following the initial findings from the paper prototype study, I came up with several low-fidelity mock-ups. I designed four sets of paper wireframes as an initial study of how people understand and use a drawing app with an AI. Below are some sketches I made using simple tools like Post-it notes and markers. I ran quick user studies with 3 Georgia Tech students, asking them to think aloud as they "navigated" through the prototype. I asked them questions about icon design, placement of elements, ease of navigation, and their general perception of the app. Below are several snapshots of one paper prototype from the usability testing session.

[Figures: Paper prototype snapshots, including the AI represented as a glowing circle]

 
 

3rd Prototype

 
 

3rd Prototype Usability Testing


The team conducted a usability study to evaluate whether the Drawing Apprentice system facilitated and sustained creative engagement in a manner similar to a human collaborator. We wanted to investigate the extent to which users can work effectively with the Drawing Apprentice in a way that enables them to interactively and co-creatively build artistic meaning as the artwork develops.

For this study, we recruited 7 participants (4 female, 3 male) from the student population at Georgia Tech, with an average age of 25 (ranging from 20 to 45). The group's artistic experience was generally novice, averaging 2.15 on a 5-point scale ranging from no artistic experience to 5 years of professional practice in the field. The experiment was divided into two phases, each including a 12-minute collaborative drawing task, a retrospective protocol analysis, and a survey about the participant's experience interacting with the system. The experiments were conducted on a Microsoft Surface tablet with a capacitive pen as the input device. The Drawing Apprentice system ran as a web application expanded to full screen.

[Figure: Usability study participants]
 

“I definitely up voted a lot in the first time, and I down voted a lot more this time. I wanted to try to get rid of the fast lines I couldn’t see, which kind of happened….I tried to discourage when it did really really shaky stuff. It kind of helped.”
 
— Anonymous Participant

 
 

“It definitely takes several iterations of a down vote for it to figure out exactly what you are trying to discourage, like the line placement, or what type of thing you are trying to discourage, because there probably several things it considers when trying to place a line.”
 
— Anonymous Participant

 

Design Recommendations

Our findings indicated that users did not fully understand how voting and the creativity slider affected the behavior of the system. The system needs to be more explicit about how users' votes affect the agent's knowledge and drawing behavior, e.g., using a pop-up dialog box or animation as a confirmation alert. Also, providing a more nuanced evaluation scale (versus the current binary like/dislike) might disambiguate user feedback. Finally, users should be able to provide independent feedback on the location, style, and content of the agent's drawing contributions.
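To illustrate the scale recommendation, a continuous rating control could map slider position to a signed feedback strength, so a mild reaction teaches the agent less than a strong one. The mapping and the sendFeedback hook below are illustrative assumptions, not the system's implementation.

    <!-- Sketch: a continuous evaluation scale instead of binary voting.
         The mapping and sendFeedback hook are illustrative assumptions. -->
    <input type="range" id="rating" min="-1" max="1" step="0.1" value="0">
    <script>
      // Map the slider position to a signed feedback strength in [-1, 1],
      // so "slightly dislike" teaches the agent less than "strongly dislike".
      document.getElementById('rating').addEventListener('change', function (e) {
        sendFeedback(parseFloat(e.target.value));
      });

      function sendFeedback(strength) {
        console.log('feedback strength:', strength); // stand-in for the learner
      }
    </script>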

 
 

4th Prototype Usability Study

[Figure: Human-AI interaction diagram]

 
 

Usability Testing Design


This usability study is a follow-up to the initial two usability studies. At this phase, the main focus is to have users evaluate the UI elements and share their overall drawing experiences. After fine-tuning and developing the algorithms over the past year, we want to evaluate whether different interface elements improve or detract from users' ability to accurately model what the agent is trying to do. Will there be an Eliza effect in the system? Do users have a preferred way of interacting and communicating with the agent?

This study has three main parts. In the first part, users interact with 2 existing interfaces and do a think-aloud walkthrough to evaluate both. In the second part, users do a short open-ended drawing session with whichever of the two interfaces they prefer. In the third part, users look at several high-fidelity prototypes of potential interface designs and give feedback.

 
 

Usability Study Sessions

[Figures: Button and slider feedback animations, iterative design layouts, and photos from the usability study sessions]

High-Fidelity Mockup Feedback Session

[Figure: Explanation tutorial]