A6: “Wizard of Oz” Prototype for Gesture-Controlled Netflix (Behavioral Prototyping)

For A6, my group and I “created” and tested a gesture-controlled system for Netflix. The catch? There is no such system. Using Wizard of Oz prototyping, we made it appear that a camera was picking up our test participants’ gestures, when in reality we were behind the scenes controlling the whole thing in real time. This technique let us test the effectiveness of our gestures without investing in expensive, complicated tech.

These are the requirements for A6 from the spec:

Gesture recognition platform: a gestural user interface for an Apple TV or similar system that allows interaction through physical motions. An example prototype would be controlling basic video function controls (play, pause, stop, fast forward, rewind, etc.). The gestural UI can be via a 2D (tablet touch) or a 3D (camera sensor, like Kinect) system.

Your prototype should be designed to explore the following design research and usability questions:

  • How can the user effectively control the interface using hand gestures?
  • What are the most intuitive gestures for this application?
  • What level of accuracy is required in this gesture recognition technology?

Additionally, our group consisted of four people with the following roles:

  • Facilitator: someone to direct the testing, communicate with the user, and orchestrate the session.
  • Wizard: probably at least two people to be the wizards behind the curtain. This will depend on exactly how you are going to attempt to fool your user, but there will likely be some manipulation of your prototype that the user does not see in order to accomplish the real-time reaction to his/her actions.
  • Scribe: someone to capture notes on the user’s actions, what happened, how the prototype performed, etc.
  • Documentarian: someone to capture the user test on video.

And, as with the previous prototype posts, click any image to get a closer look.

The Design

The Initial Design

For our gesture-controlled Netflix system, we made it appear as if an external camera was picking up the participants’ motions. In reality, we were broadcasting Netflix through an Xbox and controlling the video with the Xbox controller, guided by a hidden camera feed of the participant.

In our test, what we were really seeking to understand was the gestures themselves. Criteria for success included accuracy (can users make their intended selections?), effectiveness (how quickly are users able to make their target selections? Are the gestures intuitive, or do they lead users to make mistakes?), and satisfaction (do users like this method of control?).

For our test we collected mostly qualitative observational data, but also asked some interview questions at the end of the test to get the participants’ opinions. As the scribe, it was my job to take notes on the participants as they completed the test, documenting both their actions and reactions.

Each participant was given a sheet covering the basic controls. We decided that instead of walking the participants through the gestures, we would let them figure out what each gesture meant, and how big or small they thought the gestures should be. This way, we could better understand what a person’s natural tendencies are when using these gestures.

Eventually we decided on these gesture controls (a rough code sketch of the mapping follows the list):

  • Play: pointing at the screen
  • Pause: open hand, palm towards the screen
  • Fast forward: swiping to the user’s right
  • Rewind: swiping to the user’s left
  • Stop (back to menu): closed fist
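
To make that mapping concrete, here is a minimal Python sketch of how the gesture-to-command mapping might look if the recognizer actually existed. Everything here is hypothetical: the gesture names, the send_command stub, and the dispatch logic are illustrative placeholders, not part of any real Netflix or console API.

```python
# Hypothetical mapping from recognized gestures to playback commands.
# The gesture names and send_command stub are illustrative only; no real
# recognizer or Netflix API is being called here.

GESTURE_TO_COMMAND = {
    "point_at_screen": "play",
    "open_palm": "pause",          # palm towards the screen
    "swipe_right": "fast_forward",
    "swipe_left": "rewind",
    "closed_fist": "stop",         # back to the menu
}

def send_command(command: str) -> None:
    """Stand-in for whatever would actually drive playback (e.g., a console)."""
    print(f"Sending playback command: {command}")

def handle_gesture(gesture: str) -> None:
    command = GESTURE_TO_COMMAND.get(gesture)
    if command is None:
        print(f"Unrecognized gesture: {gesture}")
        return
    send_command(command)

handle_gesture("swipe_right")  # would trigger fast forward
```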

Figuring out the fast forward/rewind options was a bit challenging, for several reasons. First, we weren’t sure how fast or slow to skip relative to the gestures. Eventually we decided that one wave of the hand would skip one “frame” (the equivalent of flicking the controller joystick once), and that continually waving and/or holding the gesture would cause the video to skip more quickly (the equivalent of holding the joystick to the left or right). Another problem was which direction a person should gesture. I and others on my team instinctively thought that moving the hand to the right should be fast forward, as if your hand is moving forward or you are waving someone forward. However, some in our group thought that fast forward should be to the left, as if you are dragging/sliding the video playback ribbon. Eventually, we settled on swiping to the right for fast forward.
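
If I were to spell that skipping rule out in code, it might look something like the sketch below. This is just my best guess at the behavior we mimicked with the controller; the 10-second “frame”, the repeat rate while the gesture is held, and the seek helper are all assumptions, not anything we actually implemented.

```python
import time

# Hypothetical seek behavior for the fast forward / rewind gestures.
# One swipe skips a single fixed "frame"; holding (or repeating) the gesture
# keeps seeking, mimicking holding the controller joystick.

FRAME_SECONDS = 10  # assumed size of one skip; our test never pinned this down

def seek(offset_seconds: int) -> None:
    """Stand-in for whatever would actually move playback."""
    print(f"Seeking {offset_seconds:+d} seconds")

def handle_swipe(direction: str, held: bool = False, hold_duration: float = 0.0) -> None:
    """direction is 'right' (fast forward) or 'left' (rewind)."""
    sign = 1 if direction == "right" else -1
    if not held:
        seek(sign * FRAME_SECONDS)  # single wave: skip one frame
        return
    # Held gesture: keep skipping until the hold ends.
    end_time = time.monotonic() + hold_duration
    while time.monotonic() < end_time:
        seek(sign * FRAME_SECONDS)
        time.sleep(0.5)  # assumed repeat rate while the gesture is held

handle_swipe("right")                                # skip forward once
handle_swipe("left", held=True, hold_duration=1.5)   # rewind continuously
```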

We also came up with a simple script for our facilitator to follow during the test, which is as follows:

Hi! Welcome and thank you for participating in our prototype study. We are testing a new gestural control system that uses the camera you see on the top of the television to interpret gestures to control Netflix. (Point to camera)

This is going to be a really brief test. I’m going to start by giving you a handout with the basic controls illustrated. Please review it and let me know if you have any questions. (Few seconds)

Throughout this test we would love for you to think aloud, sharing your thoughts and impressions of the system and the process. I’ll have three small tasks that I’ll ask you to do, then we’ll wrap up with a quick interview. Do you have any questions before we get started?

Task 1: Get to the middle of the video and press play. (Wait a few seconds) Okay, now pause the video.

Task 2: Oops you went a little too far! Go back a few minutes and press play.

Task 3: Okay great, now you can go ahead and stop the video.

Post-test interview questions:

  1. Was using gestures comfortable? Did it feel natural?
  2. Were you confused at any point?
  3. Do you have any recommendations for the gestures? Would you make any changes?

We ran the test itself twice—once with a pilot participant, and a second official one, which appears in the video. Both participants were young adult females with no prior knowledge of Wizard of Oz prototyping.

The Setup

For the physical setup of the space, we set up in our teammate’s (the wizard’s) apartment, since he had the Xbox we needed for the test. We used the Xbox connected to the TV, along with a hidden camera so the wizard could see the participant. The wizard hid out of sight in an adjacent hallway with a laptop and the Xbox controller, while the facilitator, scribe, and documentarian were in the same room as the participant.

When the participants walked into the room, they were unaware of the wizard’s presence. We had Netflix set up for them, with an episode of Parks and Recreation open but paused.

Authenticity: Maintaining the Trick

In order to make the system as convincing as possible, our team added and/or tweaked a few details here and there. When we were first coming up with ideas for our prototype, we considered connecting a laptop to a television and playing Netflix from there. However, I pointed out that if we did that, the cursor and navigation bars would be visible, making the system less believable, or harder to control depending on how we wanted to design the test. I then suggested a gaming console instead, which turned out to be a good choice.

For the test sessions, I brought in my webcam, positioned it on top of the television, and made sure that the participant was aware of it. The webcam itself wasn’t plugged into anything, but appeared to be functional to the participant. I thought that by designating a (fake) sensor, the participant would be able to focus on and engage with the “sensor,” and would not suspect the Xbox.

Our group also made some decisions about the wizard’s controls that increased the believability of the system. To ensure that our wizard responded only to the participant’s gestures, we set things up so that he could not see the screen or hear what was happening in the room; he had to rely solely on the video feed of the participant’s gestures to control the system. In addition, after our pilot session we decided that the wizard should continue to respond to the participant’s gestures even after the official test was over, while we asked the participant questions about their experience. After all, a real system would continue to run if it wasn’t turned off. This detail made our system even more convincing, especially when one of our participants tried to mimic the gestures while answering our questions.

At times the wizard also had difficulty differentiating between a fast forward and a rewind swipe, because the video feed he received was mirrored and therefore reversed left-to-right from his point of view. To stay accurate, he used a map hanging on the wall behind the participant as a reference for which direction to move the joystick.
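
A real recognizer would face the same mirroring problem in code: if the camera feed reverses left and right relative to the user, the observed swipe direction has to be flipped before it is mapped to a command. A tiny, purely hypothetical Python helper for that correction might look like this:

```python
# If the camera feed reverses left and right relative to the user (as our
# wizard's mirrored feed did), a swipe that appears to move left actually
# means the user swiped right. Flip the observed direction before dispatch.

def true_swipe_direction(observed: str, feed_is_mirrored: bool) -> str:
    if not feed_is_mirrored:
        return observed
    return {"left": "right", "right": "left"}[observed]

assert true_swipe_direction("left", feed_is_mirrored=True) == "right"   # fast forward
assert true_swipe_direction("right", feed_is_mirrored=True) == "left"   # rewind
```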

Analysis After Testing

Was the participant convinced? Yes. Our prototype allowed us to successfully evaluate our gesture system. Our non-pilot participant noted that the movements were a bit “awkward” at first, but that it got better once she got used to it. She did have issues with the fast forward and rewind features, remarking that it “goes too fast,” and that she felt like she had to “stop and go again” to get to the right place. Additionally, she confused the stop and pause functions, performing the pause gesture instead of the stop gesture at one point. She also suggested that if someone wanted to increase or decrease the speed of fast forward or rewind, they could use bigger or smaller waving gestures. This was something I also noticed: I think our group could have done a better job defining what constituted a small or big skip when fast forwarding or rewinding. These comments and observations suggest that our gestures could use some refining.

If we were to run another test, there are other things we might consider. For instance, during the tests our participants tended to stand up, even though they would likely be sitting if they were using this system in real life. In the future, we may want to tell participants explicitly to sit for the entire test. Additionally, it may be beneficial to add some sort of activation feature, or an on/off gesture, so that users wouldn’t have to worry about accidentally triggering the gesture controls.
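
An activation gesture like that could gate the whole system, ignoring every other gesture until the user explicitly wakes it up. Here is a small hypothetical Python sketch of that idea; the “activate” gesture name and the gating class are assumptions of mine, not something we tested:

```python
# Hypothetical activation gate: an "activate" gesture toggles listening on and
# off, so stray movements cannot trigger playback commands while the system
# is asleep.

class GestureGate:
    def __init__(self) -> None:
        self.listening = False

    def handle(self, gesture: str) -> None:
        if gesture == "activate":
            self.listening = not self.listening
            print("Gesture control toggled", "on" if self.listening else "off")
            return
        if not self.listening:
            return  # ignore everything else until activated
        print(f"Would dispatch gesture: {gesture}")

gate = GestureGate()
gate.handle("swipe_right")  # ignored, system is off
gate.handle("activate")     # turns listening on
gate.handle("swipe_right")  # now dispatched
```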

For the video and the test itself, we received feedback from our classmates. They liked how we continued to respond to the gesture controls even after the actual test, for authenticity. They also appreciated that the video was well explained and described the setup and test in full. However, they found the background music distracting.

As for myself, in the future I would probably rework the video to be more presentable and visually appealing, and I would scratch the background music. I would also like to refine our gestures, and maybe do more research to make them more intuitive.
