For A6, we were tasked with creating interactive wireframes. Specifically, the spec is as follows:
For this assignment, you will choose a design challenge from the options below and create a web and mobile app component, and develop interactive wireframe prototypes of each. (You only need to make it clickable, you do not need to include the macro-interactions.)
For the design challenge you may choose from one of the following three application areas. In all cases, assume that your target user is a college student, and that your design goal is to create a system that makes the experience more fun, more meaningful, more efficient, or any combination of these goals. You may decide the features or goals, but we have included a few examples that you may use. You only need to create ONE end-to-end user task. For example: the user flow of a college student who needs to create and submit a reservation for a music room on campus.
Pet Adoption: process for pet adoption, finding a pet to adopt, keeping track of pet adoption process, donate to shelter, etc.
Music Rehearsal or Library Room reservation: creating or changing reservations, searching for rooms, filtering, possibility of reserving materials, etc.
Task management: creating and managing personal to-do lists, for academic or general life, anti-procrastination techniques, etc.
I chose to do an app/website pair for pet adoption, specifically adopting a dog, using Google material design guidelines.
The City Humane Society website, for both desktop and mobile, prioritizes four main functions: adopting a pet, volunteering, donating to the humane society, and an About section with help and information about the organization and the site. In addition, there are login/profile management pages. While the entire site is mapped out below, I focused only on the adopt-a-dog path, as mentioned earlier.
For my design specifically, ignoring the other pages that I did not focus on, a user would:
Start on the Home page, move to the “Adopt” page, which lists categories of animals
Move to the “Dogs” category page, which lists different dog profiles
Move to a specific “Dog Profile” page, with expanded information on the dog
Tap the “Meet Me” button, which would lead to some form of external communication, such as an email or messaging system, to contact the humane society and set up a meeting with the animal.
The site makes use of a flat-minimalist design. I repeated elements such as thin, grey lines for dividers, and the image placement/layouts across desktop and mobile share similar visual structures. As the main goal of this site is for adopting a pet, I used a tiled/repeated block layout with elements that are easy to add or remove without messing up the layout—this is primarily to accommodate changes to the site on the humane society’s end, such as adding or removing a dog listing.
The simple design allows the content, specifically images of the actual pets, to stand out. I envisioned most images other than the pet photos as basic, flat vector SVGs, both to fit the overall minimalist design and to provide more contrast with the pictures of adoptable pets.
The navigation and tab bars are placed to be as unobtrusive as possible while still following material design. Here, the desktop and mobile components differ quite a bit in order to best suit each viewing method. In the desktop version, the navigation bar appears at the top and follows the conventions of a traditional navigation bar, listing all the major page links so that a user can easily flip between pages. The mobile component uses a tab bar with fewer links, better suited to focused, singular, shorter tasks. Additional links and functions are minimized to make the most of small screen real estate and to reduce the risk of overwhelming the user.
For the City Humane Society website, I imagined that the typical user would be someone eighteen years old or older, who probably has an idea of what type of pet they would like, but may not have any specific characteristics in mind. This person also likely doesn’t visit the site very often—if the main purpose of the website is to provide a portal for pet adoption, a person would only use the website while looking for a pet, and would likely not return for a while, if at all. Therefore, their interaction with the site should be simple, straightforward, and fast, with little onboarding. Usually more research would have to be done to ascertain the type of user, but for this short assignment I mainly relied on speculation about what a typical user would look like.
On the flip side, I envisioned that the “City Humane Society” would be a smaller-scale humane society, probably more local with fewer frills and extra services, much like the West Columbia Gorge Humane Society, a small one from my hometown of Washougal (it’s also the only one of these animal shelters that I have actually been to). This means a smaller capacity for pets, and therefore less need to organize large numbers of animal profiles. It also means that their adoption methods are likely less technologically complex, and possibly less formal, than those of larger organizations.
In designing the website, I considered two major factors, based on what I’ve mentioned above: that the user wants a simple, straightforward way to adopt a pet, and that the City Humane Society wants a simple, straightforward way to add or remove information from their site. This thought led to elements such as the information “blocks” that are easily replaceable, as well as the minimalistic design, which paired well with the material design guidelines.
To decide what pages to include in the layout, I briefly surveyed some pet adoption sites around the Washington/Oregon area. The assignment didn’t allow enough time to interview people about their experiences with pet adoption sites; however, I could look at the websites themselves and see whether there were any trends. These include organizations that serve Seattle http://www.seattle.gov/animalshelter, Oregon https://www.oregonhumane.org/, Southwest Washington https://southwesthumane.org/, West Columbia Gorge http://wcghs.org/, and Tacoma & Pierce County http://www.thehumanesociety.org/. These organizations vary in size and offer a range of services beyond pet adoption, and their websites range from the basic to the visually complex.
All of these websites included two main links in their navigation bars: “Adopt” and “Donate.” Other common pages included “About” pages, “Volunteer” or “Get Involved” pages, and links to additional resources or services specific to that organization. The “Adopt” page was almost always listed first, or sometimes second, following a “Home” or “About Us” link. After much deliberation, I decided to divide the site into four major parts—Adopt, Donate, About, and Volunteer pages—as these seemed essential to the operations of a humane society or animal shelter.
In designing the app, I decided to work on the mobile side first, as many designers seem to be skewing towards a mobile-first approach; it also seemed like an interesting exercise, since most of the current sites I looked at are not very mobile-friendly. After sketching, I decided to remove the bubbly buttons I’d originally planned, switching them out for simple rectangular buttons and focusing on minimalist elements such as the gray lines for dividers.
I then created the prototypes using Adobe XD. The prototypes are interactive, and the mobile and desktop versions can be found here and here, respectively.
Here are the annotated wireframes. As above, click the gallery to get a closer look:
Analysis after testing + Reflection
I conducted two usability tests, telling each participant to adopt a dog, once for the mobile layout and once for the desktop layout. Here is a short clip from one of the usability tests:
Overall, the desktop/mobile site pair worked. People who tested it said it was “consistent” and “easy to follow,” and participants had no difficulty in completing the task of adopting a dog. Additionally, participants were able to easily correct their own errors, as seen in the video above when a participant clicked something accidentally.
Perhaps the biggest takeaway from the testing sessions was that people were confused about where the process ended. After making it to the Pet Profile page, users instinctively tapped the “Meet Me” button, as that is how you arrange a meeting with an animal at the humane society. However, that button does not lead anywhere, since in reality it would link to something external, like an email service. The participants were told beforehand that the button would lead to an external site for communicating with the humane society, but it was still confusing for them to get no feedback from the site when they tapped it. An additional confirmation page afterwards, or perhaps an in-site form for setting up adoption meetings, might be helpful instead.
In addition, because only a portion of the site was prototyped, links that would typically be live actually led nowhere. One test participant misheard the testing prompt and tried to tap the “Horses” category instead of the “Dogs” category, and was confused when nothing happened.
One participant also said that the phrase “View Pet Name” wasn’t intuitive: she “wasn’t sure what new information I’d be getting that I couldn’t see on that page [the Dogs page] already.” She suggested changing the link to something more intuitive, like a “learn more about…”-type prompt.
Now, I have never adopted a pet. So it follows that I’ve never really used a pet adoption website. In the future, I would like to do more research on what users expect from pet adoption sites, as well as their previous experiences adopting pets, so the design can be better informed.
Also, looking at my designs, I sometimes wonder if they are too simple – if I’ve used a little too much minimalism and the site is lacking somehow. But then again, maybe that’s just the temptation to add more “cool” features because I can, not necessarily because I should. I think more tests with the design could give me an answer there.
For A6, my group and I “created” and tested a gesture-controlled system for Netflix. The catch? There is no such system. Using Wizard of Oz Prototyping, we made it appear that a camera was picking up our test participants’ gestures—when in reality, we were behind the scenes controlling the whole thing in real time. Using this technique, we tested out the effectiveness of the gestures we made without investing in expensive, complicated tech.
These are the requirements for A6 from the spec:
Gesture recognition platform: a gestural user interface for an Apple TV or similar system that allows interaction through physical motions. An example prototype would be controlling basic video function controls (play, pause, stop, fast forward, rewind, etc.). The gestural UI can be via a 2D (tablet touch) or a 3D (camera sensor, like Kinect) system.
Your prototype should be designed to explore the following design research and usability questions:
How can the user effectively control the interface using hand gestures?
What are the most intuitive gestures for this application?
What level of accuracy is required in this gesture recognition technology?
Additionally, our group consisted of four people with the following roles:
Facilitator: someone to direct the testing, communicate with the user, and orchestrate the session.
Wizard: probably at least two people to be the wizards behind the curtain. This will depend on exactly how you are going to attempt to fool your user, but there will likely be some manipulation of your prototype that the user does not see in order to accomplish the real-time reaction to his/her actions.
Scribe: someone to capture notes on the user’s actions, what happened, how the prototype performed, etc.
Documentarian: someone to capture the user test on video.
And, as with the previous prototype posts, click any image to get a closer look.
The Initial Design
For our gesture-controlled Netflix system, we made it appear as if an external camera was picking up the participants’ motions. However, we were actually just broadcasting Netflix using an Xbox and controlling the video with hidden cameras and the Xbox controller.
In our test, what we really sought to understand was the gestures themselves. Criteria for success included accuracy (can users make their intended selections?), effectiveness (how quickly are users able to make their target selections? Are the gestures intuitive, or do they lead the user to make mistakes?), and satisfaction (do users like this method of control?).
For our test we collected mostly qualitative observational data, but also asked some interview questions at the end of the test to get the participants’ opinions. As the scribe, it was my job to take notes on the participants as they completed the test, documenting both their actions and reactions.
Each participant was given a sheet covering the basic controls. We decided that instead of walking the participants through the gestures, we would let them figure out what each gesture meant, and how big or small they thought the gestures should be. This way, we could better understand what a person’s natural tendencies are when using these gestures.
Eventually we decided on these gesture controls:
Play: pointing at the screen
Pause: Open hand, palm towards screen
Fast forward: swiping to the user’s right
Rewind: swiping to the user’s left
Stop (back to menu): closed fist
Figuring out the fast forward/rewind gestures was a bit challenging, for several reasons. First, we weren’t sure how fast or slow to skip relative to the gestures. Eventually we decided that one wave of the hand would skip one “frame” (the equivalent of flicking the controller joystick once), and that continually waving and/or holding the gesture would cause the video to skip more quickly (the equivalent of holding the joystick to the left or right). Another problem was which direction a person should gesture. I and others on my team instinctively thought that moving the hand to the right should be fast forward, as if your hand is moving forward or you are waving someone forward; however, some in our group thought fast forward should be to the left, as if dragging the video playback ribbon. Eventually, we decided that fast forward would be a swipe to the right.
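In practice, the wizard’s job amounted to translating each observed gesture into a controller action. Here is a minimal sketch of that mapping; the function and all the names are my own illustration of the idea, not any real gesture-recognition API:

```python
# Sketch of the gesture-to-controller mapping the wizard followed by hand.
# Gesture and action names are illustrative assumptions, not a real API.
GESTURE_ACTIONS = {
    "point_at_screen": "play",
    "open_palm": "pause",
    "swipe_right": "fast_forward",  # one swipe = one joystick flick
    "swipe_left": "rewind",
    "closed_fist": "stop",          # back to menu
}

def interpret(gesture: str, held: bool = False) -> str:
    """Map an observed gesture to the action the wizard performs.

    A held or repeated swipe corresponds to holding the joystick,
    which skips continuously instead of one "frame" at a time.
    """
    action = GESTURE_ACTIONS.get(gesture, "ignore")
    if held and action in ("fast_forward", "rewind"):
        return action + "_hold"
    return action
```

Treating held swipes differently from single swipes captures the flick-versus-hold distinction we settled on for skipping.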
We also came up with a simple script for our facilitator to follow during the test, which is as follows:
Hi! Welcome and thank you for participating in our prototype study. We are testing a new gestural control system that uses the camera you see on the top of the television to interpret gestures to control Netflix. (Point to camera)
This is going to be a really brief test. I’m going to start by giving you a handout with the basic controls illustrated. Please review it and let me know if you have any questions. (Few seconds)
Throughout this test we would love you to think aloud, sharing your thoughts and impressions of the system and the process. I’ll have three small tasks that I ask you to do, then we’ll wrap up with a quick interview. Did you have any questions before we get started?
Task 1: Get to the middle of the video and press play. (Wait a few seconds) Okay, now pause the video.
Task 2: Oops you went a little too far! Go back a few minutes and press play.
Task 3: Okay great, now you can go ahead and stop the video.
Was using gestures comfortable? Did it feel natural?
Were you confused at any point?
Do you have any recommendations for the gestures? Would you make any changes?
We ran the test itself twice—once with a pilot participant, and a second official one, which appears in the video. Both participants were young adult females with no prior knowledge of Wizard of Oz prototyping.
For the physical setup, we used our teammate’s (the wizard’s) apartment, since he had the Xbox we needed for the test. We connected the Xbox to the TV and set up a hidden camera so the wizard could see the participant. The wizard hid in an adjacent hallway, out of sight, with a laptop and the Xbox controller, while the facilitator, scribe, and documentarian were in the same room as the participant.
When the participants walked into the room, they were unaware of the wizard’s presence. We had Netflix set up for them, with an episode of Parks and Recreation open but paused.
Authenticity: Maintaining the Trick
In order to make the system as convincing as possible, our team added and/or tweaked a few details here and there. When we were first coming up with ideas for our prototype, we considered connecting a laptop to a television and playing Netflix from there. However, I pointed out that if we did that, the cursor and navigation bars would be visible, making the system less believable, or harder to control depending on how we wanted to design the test. I then suggested a gaming console instead, which turned out to be a good choice.
For the test sessions, I brought in my webcam, positioned it on top of the television, and made sure that the participant was aware of it. The webcam itself wasn’t plugged into anything, but appeared to be functional to the participant. I thought that by designating a (fake) sensor, the participant would be able to focus on and engage with the “sensor,” and would not suspect the Xbox.
Our group also made some decisions about the wizard’s controls that increased the believability of the system. To ensure that our wizard responded only to the participant’s gestures, we set things up so that he could not see the screen or hear what was happening in the room; he had to rely solely on the video feed of the participant’s gestures to control the system. In addition, after our pilot participant, we decided that the wizard should continue responding to the participant’s gestures even after the official test was over, while we asked the participant questions about their experience. After all, a real system would continue to run if it wasn’t turned off. This detail made our system even more convincing, especially when one of our participants tried to mimic the gestures while answering our questions.
In addition, the wizard at times had difficulty differentiating between a fast forward and a rewind swipe, because the video feed he received was mirrored, and thus backwards from the original. To stay accurate, the wizard used a map hanging on the wall behind the participant as a guide to figure out which direction he should move the joystick.
Analysis After Testing
Was the participant convinced? Yes. Our prototype allowed us to successfully evaluate our gesture system. Our non-pilot participant noted that the movements were a bit “awkward” at first, but that it got better once she got used to them. She did have issues with the fast forward and rewind features, remarking that it “goes too fast” and that she felt like she had to “stop and go again” to get to the right place. Additionally, she confused the stop and pause functions, performing the pause gesture instead of the stop gesture at one point. She also suggested that if someone wanted to increase or decrease the speed of fast forward or rewind, they could use bigger or smaller waving gestures. This was something I also noticed—I think our group could have done a better job defining what constituted a small or big skip when fast forwarding or rewinding. These comments and observations suggest that our gestures could use some refining.
If we were to run another test, there are other things we might consider. For instance, during the tests our participants tended to stand up, even though in real life they’d likely be sitting while using this system. In the future, we may want to tell participants explicitly to sit for the entire test. Additionally, it may be beneficial to add some sort of activation feature, or an on/off gesture, so that the user wouldn’t have to worry about accidentally triggering the gesture control.
For the video and test itself, we received feedback from our classmates. They liked how we continued to respond to the gesture controls even after the actual test, for authenticity. They also appreciated that our video was well explained and described the setup and test in full. However, they found the background music distracting.
As for myself, in the future I would probably rework the video to be more presentable, visually appealing, and would scratch the background music. Additionally, I would like to refine our gestures, and maybe do more research in order to increase their intuitiveness.
A5 is a video prototype/demo for a fitness or health app. I went with Headspace, an app for meditation. Specifically, this is what we were to do:
“The challenge is to create a video, maximum 60 seconds in length, that comprehensively and concisely communicates the motivation, usage, and functionality of a product or service. […] You will not be designing your own product or service, but your task is to explain to a potential customer of a health or fitness service why and (more importantly) how he or she would use this service or app.”
Having very little video experience and limited resources, I endeavored to make a video that was still high quality despite those restrictions. I shot the video using a Logitech webcam and used Adobe Premiere Pro for editing. I also appeared in the video.
For any image, click to get a closer look.
I am somewhat of a writer, but I have never made a video such as this before. However, many of the conventions and tricks of writing and storytelling definitely apply here. One important aspect of storytelling is to make the characters want something. As the great Kurt Vonnegut once said, “Make your characters want something right away even if it’s only a glass of water.”
So what does this have to do with a video demo for an app? Well, the protagonist of the video must clearly want something – in this case, a break – and that something can be given by Headspace.
The video follows a simple storyline: a stressed out student with little time to relax discovers Headspace, and is able to meditate and gain a moment of rest.
The concept for my video centered around sharp contrast: I wanted a clear difference between the protagonist with and without the app, between the stressed moments and the meditation moments. To illustrate that, I used two major mechanisms: sound and text.
From a sound perspective, I wanted to use a song for the “stressed out” portion of the video and contrast it with a quieter song in the meditation portion (although in the end, I opted for silence instead). The song I chose was Float On by Modest Mouse, which not only has lyrics that match the message of the video, but also a prominent melody that is fairly distinct from the ensuing silence. I’m also pretty fond of this song, so of course I was pleased that I got to use it.
For text, I wanted lots of words, moving quickly, to illustrate a stressed stream of consciousness, which would then disappear during the meditation. Originally I planned to have text only in the beginning, but I decided to add text throughout the video to make it more consistent, and also to make it clearer that the text represents a person’s continuous thoughts.
I also used the throwing of my backpack to visually represent the stress of the student.
My initial storyboards were more or less similar to the final video, though there were minor changes and adjustments, such as flipping the direction of the first shot (I filmed in my room, and it made more sense to throw my backpack the other way) or the music and text alterations I mentioned earlier. Overall I think I am more of a planner when I write or make stories, and that applies here too, although I also think there is great value in being flexible and making changes as needed.
Analysis after testing + Reflection
Peer feedback on my video was fairly positive. Those who viewed it said they liked the pacing and the music cuts, particularly for the end title. The captions were also very effective – my peers noted that the captions represented inner thoughts well and that their positioning on the screen was good. However, one person did wish for calmer music at the end, after the meditation, so as not to interrupt the sense of calm already established.
First of all, if I were to do this again or make any other videos, I’d probably get permission/the rights to use the music (which is copyrighted) or to use a song in the public domain. This video was only for assignment purposes, and with one week to complete there was hardly any time to worry about copyright. However, I think that following these kinds of rules is important, and respectful of an artist’s work.
While I do like the story I presented, if I had more time/resources, I would probably try to include more scenes visually representing the stress, instead of just using words. I also think that, if I were to have a longer video, it would be beneficial to show more features of the app.
In addition to that, I’d definitely want to be more professional in how the video was put together, or to get someone with those kinds of skills. Better editing, acting, lighting, less distracting backgrounds, etc., all things that would require more resources that I didn’t have access to this time around. Otherwise, next time I would probably try to make the musical timing more exact, and maybe make the font color consistent throughout the video.
Assignment A3 is a 3D printed object. The requirements for this assignment were fairly open-ended: the object had to be a tool, or “something that would be useful in your everyday life.” Beyond that, we were also required to use the following primitive operations in Rhinoceros:
Boolean (adding or subtracting one object from another)
Additionally, we were supposed to be mindful of time to print and filament use, but nothing specific. For my tool, I decided to make an orange peeler.
For any image, click to get a closer look.
One of the biggest challenges of this orange peeler was figuring out the size and curvature. Without any research on how people usually peel oranges, I was left to my own devices to figure out what would get the job done.
Though I did a few calculations based on averages I found on the internet, most of my numbers came from an actual orange I bought from the grocery store. The navel orange I purchased had a circumference of 12.5″, diameter of 3.98″, and a peel thickness of 0.25″ and 0.375″ when I measured in two different places. The orange also had a weight of 0.96 lbs. I used these numbers to calculate the curvature of the peeler.
Unlike the laser-cut stand, the math for this project was a bit more involved, so I used an online arc length calculator. My measurements can be seen in the picture of my final measurements above, but for reference: the peeler has an overall width of 1.75″ (chord length) and is constructed from a 60-degree wedge of a circle with radius 1.75″. I decided on a radius of 1.75″, considering that a peeler with a larger radius would fit a smaller orange, but not the other way around.
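Incidentally, the geometry checks out neatly: for a 60-degree wedge, the chord happens to equal the radius, which is why both numbers above are 1.75″. A quick sketch of the calculation (in Python here, though I actually used an online calculator):

```python
import math

# Verify the peeler's arc geometry: a 60-degree wedge of a circle
# with radius 1.75 inches, as described above.
radius_in = 1.75
angle_rad = math.radians(60)

# Chord length across the wedge: c = 2r * sin(theta / 2)
chord_in = 2 * radius_in * math.sin(angle_rad / 2)

# Arc length along the curved edge: s = r * theta
arc_in = radius_in * angle_rad

print(round(chord_in, 3))  # 1.75 -- matches the stated overall width
print(round(arc_in, 3))    # 1.833
```

Since sin(30°) = 1/2, the chord c = 2r · (1/2) = r, so the 1.75″ width and 1.75″ radius are consistent rather than coincidental.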
For the ring, I looked up standard ring sizes using this website. The average ring size for women is a 7; the average for men is a 10. But the ring doesn’t (and shouldn’t) have to fit the finger perfectly, so I used the largest ring size listed, a 14.5 (23.6 mm diameter).
I then used these numbers to create an object in Rhinoceros, and moved the blade to the middle of the peeler.
For printing, I used fewer polygons for a smoother circle. The printing process took about two hours. Afterwards I used a precision knife to shave off any extra printed material around the edges, resulting in the orange peeler you see above.
Analysis after testing + Reflection
Does it work? Yes. The orange peeler successfully peeled the orange. However, the blade was a little long and cut into the orange. Though I did measure the blade, it occurred to me that the blade did not have to pierce the entirety of the peel – just enough to get a good cut in. If I were to remake this orange peeler, I’d probably do some more research on what the ideal length for the blade would be.
I also got feedback from my peers on this orange peeler, though some of it was… interesting. Some of the comments were very helpful, and would definitely guide my revisions if I were to make any. Some thought it had good ergonomics, calling it a “cool concept” and “smart,” but also thought the peeler itself could be extruded less, i.e. made thinner. On the other hand, some people did not grasp the idea of an orange peeler in the way I expected. Though I told them it was an orange peeler, and the design is based on orange peelers that are currently commercially available, many people were confused about how to use it. Some commented that the blade wasn’t sharp enough (even though the blade doesn’t have to be sharp at all), that the ring didn’t fit snugly against their fingers, or that they weren’t sure which finger to put through the ring.
This brings me to an interesting point. I am of the opinion that a well-designed gadget of this type should need little explanation to be used; if it is not intuitive, it likely needs revising. However, as I mentioned before, orange peelers similar in shape to mine already exist on the market and have proven to be effective tools. Furthermore, my peers did not actually test the peeler – they touched it and looked at it, but there were no oranges present. It may be that my peers needed to try the peeler on an actual orange to discover for themselves whether it is a good design.
So, considering these last comments, can I say that my design was successful? On one hand, yes, since it works and is comparable to its market counterparts. On the other hand, can I call the design successful if new users have difficulty with it, even if it is an established product type? I can’t say I have a definite answer to that question, though I will say that if I were to make revisions, I would definitely try to improve the intuitiveness of the design.
Assignment A3 is a laser cut object. The limitations on our object are as follows:
must be cut from a single sheet of 18″ x 24″ chipboard (which my class provided)
must not use any glue, tape, or other fastening materials to assemble and use
must be able to be disassembled into pieces that can be stored flat and transported (as in a backpack)
We had the choice between a couple of things (a laptop stand, a phone stand for shooting videos, etc.), but in the end I went with a tablet stand. I called it the “Butterfly” design, since one of my peers told me it looked like a butterfly.
(If you would like to see larger versions of any of the images, just click the image.)
When I first started brainstorming a design, most of my ideas were complicated and had too many parts. I was also trying to incorporate odd shapes – half-circles, even half-hexagons (more commonly known as trapezoids). But after a while of this I realized something crucial: I needed to stick to my core principles.
And one of my core principles is that, many times, simpler is better.
In designing my tablet stand, I wanted two things:
to increase the stability of my design (especially when considering the flimsiness of the material being used), and
to limit the number of pieces used in the design for easy assembly and transportation
This led me to the butterfly design. In the case of my tablet stand, form follows function.
Composed of two roughly triangular pieces with feet and interlocking slits, the tablet stand is easy to assemble and disassemble. The cross formed in the middle of the stand also helps support the tablet. In the end, I used less than half of the 18″ x 24″ chipboard sheet we were given.
In creating the tablet stand, I made multiple size calculations before mapping out my design digitally (as you can see from my sketches). Math is one of my specialties, so this part was very enjoyable. I calculated the relative size of each triangle using the measurements of my friend's iPad, and based the viewing angle on how I usually tilt my screens when I'm sitting at a table.
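To give a sense of the kind of sizing calculation involved, here is a minimal Python sketch. The tablet height and tilt angle below are placeholder assumptions for illustration, not my actual measurements from the sketches.

```python
import math

# Hypothetical inputs -- substitute the real tablet measurements
# and preferred viewing angle.
tablet_height_in = 9.4   # assumed tablet height, inches
tilt_deg = 70.0          # assumed lean-back angle from the table surface

# The triangular side piece must reach at least as high as the leaned
# tablet's vertical projection, and the base must cover its horizontal
# projection so the feet sit under the tablet's lower edge.
tilt_rad = math.radians(tilt_deg)
support_height_in = tablet_height_in * math.sin(tilt_rad)
base_depth_in = tablet_height_in * math.cos(tilt_rad)

print(f"minimum support height: {support_height_in:.2f} in")
print(f"horizontal footprint:   {base_depth_in:.2f} in")
```

With these placeholder numbers, the triangle needs to stand roughly 8.8″ tall with about a 3.2″ footprint, before adding any margin for the feet.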
Afterwards, digital renderings were created using a combination of Adobe Illustrator and Rhinoceros.
I also added a hexagon logo to each piece, just for fun.
V1: Cereal Box
Before laser cutting the chipboard, I first tested my design using a cereal box. The cereal box version was able to support the weight of the tablet on its own, although a bit shakily: the cereal box cardboard is very thin, so it tended to fall flat if the slits weren't aligned properly. Seeing that the design was successful, I went ahead and laser cut the chipboard.
I ended up adjusting the width of the slit, since my chipboard's thickness fell between 0.06″ and 0.065″ when I took measurements at multiple spots. In the end I went with a slit width of 0.065″, resulting in the tablet stand in the pictures.
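The reasoning behind picking 0.065″ can be sketched in a few lines of Python. The individual readings below are hypothetical; only the measured range (0.06″ to 0.065″) comes from the text above.

```python
# Choosing the slit width from several thickness measurements of the
# chipboard. Individual readings are hypothetical; only the range
# (0.060-0.065 in) matches what I actually measured.
readings_in = [0.060, 0.062, 0.061, 0.065, 0.063]

# Sizing the slit to the maximum reading guarantees every piece fits
# into every slit, at the cost of a slightly looser joint wherever
# the board happens to be thinner.
slit_width_in = max(readings_in)
print(slit_width_in)  # 0.065
```

A tighter alternative would be to size the slit to the average reading and lightly sand any spots that bind, trading assembly effort for a snugger fit.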
Analysis after testing
Did it work? Yes. The stand was able to support a tablet, and achieved my initial goals of providing stability and minimizing the amount of pieces needed.
Feedback on my design from my peers was very positive. Many people liked the simple design, and appreciated its sturdiness. At one point, I did wonder if the design was too simple. But in the end, I still stand by simplicity as one of my core principles.
However, if I were to do this again, I might make a few changes based on my actual laser cut prototype. I think the slits could have been made a bit thinner. I went with the outside estimate (0.065″) to be safe, and it held up, but the joined pieces were a bit looser than I would've liked. It is fine the way it is, but a tighter lock would give me peace of mind. Then again, if the interlocking slits were made too tight, a user might have difficulty assembling the pieces, and it could result in wear along the slit line, making the design more fragile. In the future, I could experiment with the slit width, something I could not test using cereal box material.
Another change I might make would be to make the curved feet just a bit larger/deeper, so that the tablet can be held even more securely, even with a case attached.
Also, I got a comment from one of my friends suggesting that the viewing angle should be steeper. While the angle is suited to my own preference, it would be interesting to do more research on what the ideal viewing angle for a tablet would be, barring the addition of any adjustable elements.
Assignment A2 is a model prototype for “a shower control interface for a high-end, multi-feature valve and temperature control.” Desired features are as follows:
product controls and interface/display must fit within the dimensions of approximately 4 x 4 x 2 in volume
product weight is approximately 0.75 pounds, and it should be able to be mounted on a wall
digital display will show settings such as temperature, water flow volume, valves (this could be used to control whether water comes out of the tub spout, the shower head, a handheld wand)
Physical affordances and controls must be easy to use when visibility and dexterity are challenged by soapy hands, steamy showers, and absence of corrective lenses
We were also asked to think of the brand OXO as an inspiration, and create something “well-designed, comfortable and easy to use.”
For this design, I started off thinking that I wanted the shower control to be round and slightly raised, something similar in shape to the shield that Captain America uses. I figured that a smooth, round device would be safer in a shower, and that if someone fell they would be hurt less if the interface didn’t have any corners.
The shower control has three main buttons to switch between settings (water volume control, temperature control, and valve control) and a slider that slides around the entire circumference in order to adjust these settings.
For many devices with a similar shape, such as the Nest, adjustments are made by turning the entire outer portion of the circle. In my original design, I decided against this because I thought it might be more difficult for someone to turn a large dial like that in the shower, when everything is slippery. I imagined that the slider could have some sort of grip, or have indented sides so that it would be easier to grab. Additionally, someone could simply push the slider with a finger, making it easy to use.
The shower control display would show the three settings, with the current setting that can be adjusted being presented as the largest. Pressing a different button makes that setting the current setting, makes it the largest item presented on the display, and also allows it to be changed by using the slider. The large display items/buttons help with visibility in the shower.
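The button/slider behavior described above can be sketched as a small state machine. This is a minimal illustration of the interaction logic, not firmware; the class, setting names, and value ranges are all assumptions I've introduced for the example.

```python
class ShowerControl:
    """Sketch of the interface logic: three settings, one active at a
    time; buttons switch the active setting (shown largest on the
    display) and the slider adjusts only that setting's value."""

    SETTINGS = ("volume", "temperature", "valve")

    def __init__(self):
        # Starting values are arbitrary placeholders.
        self.values = {"volume": 50, "temperature": 100, "valve": 0}
        self.active = "temperature"  # displayed largest by default

    def press_button(self, setting):
        """A button press makes that setting the active one."""
        if setting not in self.SETTINGS:
            raise ValueError(f"unknown setting: {setting}")
        self.active = setting

    def move_slider(self, delta):
        """The circumferential slider adjusts the active setting."""
        self.values[self.active] += delta
        return self.values[self.active]


control = ShowerControl()
control.press_button("volume")   # volume becomes the large display item
control.move_slider(-10)         # slider now changes volume, 50 -> 40
```

The key design point the sketch captures is that the slider is modal: its meaning depends entirely on which button was pressed last, which is why the display must make the active setting visually dominant.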
The model prototype was constructed using Styrofoam, black masking tape, magnets, paper, sponges, markers, and some clay.
One of the biggest challenges in creating this prototype was getting the shape right. I started by using a Styrofoam half-sphere I bought at the craft store, and shaved it down. In the end it was not as flat as a Captain America shield, but I was pleased with the overall shape, and smoothness. The buttons were made using some sponge, so that the buttons would feel more button-like to a test participant.
The slider was perhaps the most challenging aspect of this prototype, as I could not construct an actual slider to go around a Styrofoam dome. In the first iteration of the prototype, I used thin weak magnets around the outer rim of the dome, and also attached a magnet to the slider I made out of clay. This did not work out so well, and the slider kept falling off. In addition, the sliding motion was not as smooth as I would have liked. I also tried attaching the slider with string, but that did not help very much.
In the end, I used a strong magnet on the slider piece and covered the magnets with tape to help imitate a smooth sliding motion.
Analysis after testing
After the initial test, several issues with the prototype became apparent, which caused me to make some changes.
The first thing the participant noted was that they could not tell which way to push the slider in order to increase or decrease a setting. Apparently this convention differs from shower to shower; however, it seems that most showers have "hotter" as counter-clockwise, which is the opposite of what I had originally thought. To fix this, I added a red arrow indicating the direction of increasing temperature/volume.
In addition, the first prototype was not well built: buttons fell off during testing, and the motion of the slider was not smooth. These issues were fixed in the second iteration of the prototype.
One thing that came up during a class critique (after the second prototype) was the lack of an “on” button, which I would probably add to subsequent prototypes. However, there was some debate about whether or not the volume control could also function as an “on” button. To turn off the shower, one would decrease the volume of water to 0.