
1 TransTracker Pilot Usability Study
Team:
Drew Bregel - Development, Data Analysis
Marianne Goldin - PM, UI Tester, Data Gathering, Presenter
Joel Shapiro - Tasks, Interactions
Joe Woo - Development, UI Tester

2 Our Tool
A mobile phone application that goes above and beyond Google Maps
Builds on Metro TripPlanner, OneBusAway, and Google Maps transit trip planning
A more visual, map-heavy interface (less text input) that shows the user the context of their environment
Importantly: the app predicts what the user will do next and knows what the user does frequently!
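The slides do not spell out how the prediction works, so here is a minimal sketch of one plausible approach, assuming a simple frequency ranking with recency as a tie-breaker. All names (TripHistory, recordTrip, predictedDestinations) are our illustration, not the app's actual API.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch only: this is NOT the real TransTracker code.
    public class TripHistory {
        private final Map<String, Integer> visitCounts = new HashMap<>();
        private final Map<String, Long> lastVisitMillis = new HashMap<>();

        // Called whenever the user completes a trip.
        public void recordTrip(String destination, long timestampMillis) {
            visitCounts.merge(destination, 1, Integer::sum);
            lastVisitMillis.merge(destination, timestampMillis, Math::max);
        }

        // Rank destinations by visit count, most-visited first; break ties
        // by recency so newer habits float upward on the Home screen list.
        public List<String> predictedDestinations() {
            List<String> ranked = new ArrayList<>(visitCounts.keySet());
            ranked.sort(Comparator
                    .comparing((String d) -> visitCounts.get(d)).reversed()
                    .thenComparing(Comparator
                            .comparing((String d) -> lastVisitMillis.get(d)).reversed()));
            return ranked;
        }

        public static void main(String[] args) {
            TripHistory history = new TripHistory();
            history.recordTrip("Home", 1000L);
            history.recordTrip("Home", 2000L);
            history.recordTrip("Campus", 3000L);
            System.out.println(history.predictedDestinations()); // [Home, Campus]
        }
    }

A real implementation would presumably also weight time of day and day of week, since commute patterns repeat on those cycles.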

3 Introduction to the Experiment
1. After redesigning our program following the first usability test, we wanted to see how users interacted with TransTracker and how they expected it to function.
2. We video-recorded the tests and collected data on the participants' performance, their opinions of the program, and how they interacted with it.

4 Method
5 volunteer participants
Application running on an emulator on a PC laptop
Private room in the CSE basement
2 testers: one to conduct the test; one to record the test, bring in the volunteers, and troubleshoot the app
Payment in the form of a snack (pop, chips)

5 Test Procedure
1. Recruit volunteer
2. Consent form
3. Introduction to our application (avoid key words used in the UI; explain "think out loud")
4. Participant reads the task scenario out loud and does the task
5. Time each task (each user must complete their tasks)
6. Videotape each task
7. Demographic questions
8. Debriefing comments
9. Compensation: a snack from the student lounge

6 Participants - Demographics
5 participants
– Median age = 22
– 3 males, 2 females
– All upperclassmen, CSE majors
3 of 5 used mobile applications
All used web-based transit applications
– Note: the 2 who did not use mobile apps used web-based apps as much or more (our potential customers!)
All used the bus at least once a week (average = 6.8 times/week)

7 Test Measures
Dependent Variables (for each task)
– Time to complete the task
– # Errors
– # Times the user goes to the wrong screen or scrolls to the wrong area
Critical Incident Logs (for each task)
– Both positive and negative
– Transcribed from video
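For a concrete picture of what gets logged per task, here is a minimal sketch of a record holding these dependent variables. The class and field names (TaskMeasures, wrongScreenCount, etc.) are our illustration; the study itself logged these in real time and from video, not in code.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: one record of the dependent variables per task.
    public class TaskMeasures {
        final int taskNumber;
        long secondsToComplete;   // time to complete the task
        int errorCount;           // # errors
        int wrongScreenCount;     // # times on a wrong screen or scroll area
        // Critical incidents (positive or negative), transcribed from video.
        final List<String> criticalIncidents = new ArrayList<>();

        TaskMeasures(int taskNumber) {
            this.taskNumber = taskNumber;
        }

        void logIncident(String note) {
            criticalIncidents.add(note);
        }

        public static void main(String[] args) {
            TaskMeasures task1 = new TaskMeasures(1);
            task1.secondsToComplete = 42;
            task1.errorCount = 1;
            task1.wrongScreenCount = 2;
            task1.logIncident("negative: opened the New tab before finding the Home list");
            System.out.println("Task " + task1.taskNumber + ": "
                    + task1.secondsToComplete + "s, "
                    + task1.errorCount + " errors, "
                    + task1.wrongScreenCount + " wrong screens, "
                    + task1.criticalIncidents.size() + " incident(s)");
        }
    }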

8 Tasks
We defined a list of task scenarios that exercised the principal features of our program and gave users context for the tasks.

9 Task 1: Predicted Location (Easy)
Using our app's predictive feature, take a trip to your most frequent destination ("Home")
The user can assume that one of the predicted routes is accurate, since they have taken it before

10 Task 1 Screens

11 Task 1 Screens: Detail

12 Task 1: Looking for…
Did the user try to select any tabs that were not needed to complete the task?
Did the user attempt to scroll down pages when it was unnecessary?
Are users unclear on where to click to take a trip?
Are users aware that they can click on a destination to take a trip?
Is it obvious that the Home screen is the default screen in the program?

13 Task 2: Saved Location (Medium)
Find a trip you've taken before, and take it now
The user can assume that the destination they would like to go to is listed on the "Saved" page

14 Task 2 Screens

15 Task 2: Looking for…
Does the user [mistakenly] select the New tab for this task?
Does the user understand the concept of Saved trips?
Does the user understand that they can re-take a trip that they've taken before?
Does the user expect that the destinations and trip times are clickable?

16 Task 3: New Destination (Hard)
The user must enter a new trip into the program through the "New" page.
The user is to create their new trip by searching for it, then taking the trip.
The user has not taken the trip before.

17 Task 3 Screens

18 Task 3: Looking for…
Does the user understand the idea of a New trip?
Does the user understand how to select a destination from the search results?
Does the user understand what the map represents?
Does the user expect that the destinations and trip times are clickable?

19 Study Results
Collected both in real-time during the test…
– Task time
– # Errors
– # Wrong Screens
…and retroactively, through reviewing video of the tests
– Critical incidents
– Verification of # Errors / # Wrong Screens
– Notes and chatter

20 Completion time

21 # Errors

22 # Times on an Irrelevant Screen

23 Critical Incident Count


25 Sample Video of Task 3 http://www.youtube.com/watch?v=szcLDylxhtM

26 Major Themes in Negative Critical Incidents
Task 1:
– List comprehension
– Map interactions
– Task completion
Task 2:
– Prototype fidelity
Task 3:
– Wizard of Oz
– List comprehension
– Task completion

27 Task 1: List Comprehension
Affordances of the items in the list on the Home screen
Context of the list - why is it important?

28 Task 2: Prototype Fidelity
Not all items that should be clickable are actually clickable

29 Task 3: Wizard of Oz
Lots of false positives; it would be great if our app could read the user's mind

30 Tasks 1-3: Task Completion
Because users did not understand the spirit of the application, they were not sure when they had gotten all the information they could
Users were not sure what the "end point" of each task was

31 Recommendations for Design Changes
Differentiate between clickable and non-clickable items
Rename tabs
Replace the scroll option with a screen-expansion option
Clarify how the predicted locations on the Home screen are sorted

32 Recommendations for Design Changes (continued)
Tighter integration with Google Maps cues
Destinations and search results need more detailed/contextual info
Add clear exits to all pages that lack them
Increase the use of symbols and images throughout the program
Make the current time visible and obvious

33 Summary
The questions we asked our users gave us a great set of data: almost exactly the kind of results we were hoping for
Our script accurately reflected the tasks we wanted our users to accomplish and led them down the paths where we expected to find the UX errors we wanted to correct
Our program uses a small enough set of pages that it was easy to see overarching issues in the program

