Flow: no self-monitoring. Strive for suspension of disbelief.
How do we suspend disbelief? In what ways is seamlessness fragile?
Respond to every contact – why?
Respond immediately – why?
Make every transition fluid.
Make feedback whimsical, magical, and either expected or informative (or both). Example?
Create transition animations that communicate state and relationship changes.
Mimic the real world by using notions such as mass, acceleration, etc.
Make sure controls for start/end/major state changes are always visible.
Break from real-world behavior to match user intent.
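The mass/acceleration idea above can be sketched as a damped-spring animation. This is a minimal illustrative sketch, not code from any particular framework; the function name and constants are assumptions:

```python
def spring_step(pos, vel, target, dt, stiffness=170.0, damping=26.0):
    """Advance a damped-spring animation one frame (semi-implicit Euler).

    Returns (new_pos, new_vel). With damping close to 2*sqrt(stiffness)
    the spring is roughly critically damped: it settles quickly on the
    target without visible overshoot, which reads as natural "mass".
    """
    accel = stiffness * (target - pos) - damping * vel
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

# Drive a panel from x=0 toward x=100 at 60 fps until it settles.
x, v = 0.0, 0.0
for _ in range(300):
    x, v = spring_step(x, v, 100.0, 1 / 60)
```

Because each frame's acceleration depends on distance to the target, interrupting the animation mid-flight (by changing `target`) still produces a smooth, physically plausible redirection.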
Push beyond what is physically natural, to do more than is possible in the real world. What’s an example on the iPhone? An example from urban planning? Your own example?
Create immediate responses to all user input that will receive a response.
Enable single-finger drag and flick movements on moveable content.
Enable inertia.
Do not use time-based gestures on content.
Enable users to manipulate content directly.
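Inertia after a flick is commonly modeled as a velocity that decays exponentially each frame until it drops below a cutoff. A minimal sketch; the function name, friction factor, and cutoff are illustrative assumptions:

```python
def inertia_offsets(v0, friction=0.95, min_speed=1.0):
    """Return the per-frame positions of content after a flick.

    v0 is the release velocity in pixels/frame. Each frame the content
    moves by the current velocity, then the velocity is scaled by a
    friction factor; the scroll stops once it falls below min_speed.
    """
    offsets = []
    pos, vel = 0.0, v0
    while abs(vel) >= min_speed:
        pos += vel
        vel *= friction
        offsets.append(pos)
    return offsets

positions = inertia_offsets(30.0)
```

The decelerating curve matters: content that stops dead on finger-up feels inert, while exponential decay mimics friction and so preserves the real-world metaphor the guidelines above call for.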
Begin with familiar environment and behaviors.
Enable quick discovery of delightful interactions.
Consistently use transitions.
Ensure each state change is clearly in response to user actions.
Do not innovate for the sake of novelty.
Always show signs of life.
Make sure behaviors are subtle rather than annoying or distracting.
Instructional scaffolding is the provision of sufficient support to promote learning when concepts and skills are being first introduced to students. How does this apply to UID? What are some examples from the past? Is reference documentation a form of scaffolding? What about video instruction?
All likely actions should lead to prompting for the next step or foreshadowing of state.
When appropriate, guide users to unseen content or functionality. How?
Require explicit input for destructive functions or disorienting changes (“ejector seats”).
Foreshadow results so users can reverse actions.
Reduce the number of features (judiciously add as needed).
Make sure features are focused on a particular task/goal.
Make sure essential features are immediately discoverable.
Use consistent interaction metaphors.
Hint at deeper possibilities.
Make sure visual indications of touch are accurate.
Put users in control.
Clarify errors so users can distinguish between hardware errors (no touch detected), state errors (touched item is not in a state to respond), and semantic errors (response to touch is not what the user expected).
Most input devices can be thought of as fitting into a rather small number of categories. How do we model direct-touch interactions? How does this compare to the model for a one-button mouse? What types of features respond to the tracking state of a mouse?
What are our options, since direct touch does not have a tracking state?
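One common framing is Buxton’s three-state model of graphical input: a one-button mouse moves between tracking (state 1, where hover feedback lives) and dragging (state 2), while direct touch jumps from out-of-range (state 0) straight to engaged (state 2), with no tracking state in between. A sketch of the two transition tables, with illustrative state and event names:

```python
# Buxton-style state models: (current_state, event) -> next_state.
MOUSE_TRANSITIONS = {
    ("tracking", "button_down"): "dragging",
    ("dragging", "button_up"): "tracking",
}

TOUCH_TRANSITIONS = {
    ("out_of_range", "touch_down"): "engaged",
    ("engaged", "touch_up"): "out_of_range",
}

def next_state(table, state, event):
    """Advance the model; unrecognized events leave the state unchanged."""
    return table.get((state, event), state)
```

Notice that no touch transition ever enters `"tracking"`: that is the missing state that makes hover-based affordances (tooltips, rollover highlights, cursor changes) impossible to port directly to touch.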
What happens when users touch an object and nothing happens? Compare direct touch to mouse interactions – what feedback do users get with mice that they don’t get with direct touch? Should touch apps have a mouse cursor? What is the feedback ambiguity problem? (And what is post hoc ergo propter hoc?)
Why are in-air gesture systems more challenging even than touch? What are reserved actions? What’s a pigtail gesture? What are Hover Widgets? What is a reserved clutch? What are the advantages/disadvantages/challenges?
What is multi-modal input on the iPhone? How could multi-modal include speech? Describe a multi-modal example in traditional GUIs.
With a partner/group, think of an application that would make sense with in-air gestures. Choose from reserved actions, reserved clutch, or multi-modal input. Be ready to describe how your system would work.
What are mechanics? What are dynamics? What are aesthetics? What are primary and secondary objects?
For the Chess game, what are some ways the software can help users learn to use the program, learn to play chess (mechanics), and improve their skill at chess? What guidelines does the author provide to help users develop expertise? What is a “Turing test” in general? For NUI?
With a partner/small group, think about games you’ve played. What’s the best technique you’ve seen for helping beginners learn the mechanics of the game? What techniques have you seen for helping users move to more advanced skill levels?
Think of 4 levels of actions:
1. What is physically possible with the device
2. What is actually recognized and conveyed by the device
3. Those things to which system responses are tied
4. Expand primitives into controls
What is the mouse good at? What are false positives and false negatives? What is the mouse bad at? How do designs take this into account? What is a pen bad at? What is a pen good at?
What are some of the considerations related to number of primitives?
We’re designing a game of Clue with a NUI interface. What’s a gesture for:
Roll the die
Move the correct number of steps
Make a suggestion
Take a secret passageway
What are the 3 stages of a gesture? How does this relate to function calls? What issue can arise if there’s no continuation phase? Cite an example. How does this relate to having two steps in the recognition phase? How does ambiguity affect feedback? How can the ambiguity problem be solved? How is iPhone delete better than “flick” option?
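The three stages are often named registration, continuation, and termination. A toy swipe recognizer makes the stages concrete; the class name and threshold are illustrative assumptions, not any platform’s API:

```python
class SwipeRecognizer:
    """Toy horizontal-swipe recognizer with explicit gesture stages.

    idle          - nothing registered yet
    continuation  - gesture registered; feedback can track the finger
    (termination happens at touch_up, where the action fires or not)
    """

    def __init__(self, register_dx=10.0):
        self.register_dx = register_dx
        self.stage = "idle"
        self.dx = 0.0

    def touch_move(self, dx):
        self.dx += dx
        # Registration: commit to the swipe interpretation once movement
        # passes a threshold, so the system can start showing feedback.
        if self.stage == "idle" and abs(self.dx) >= self.register_dx:
            self.stage = "continuation"

    def touch_up(self):
        # Termination: only a registered gesture produces the action.
        fired = self.stage == "continuation"
        self.stage = "idle"
        self.dx = 0.0
        return fired
```

Without a continuation phase the system can give no feedback until the whole gesture completes, which is exactly the ambiguity problem: the user cannot tell whether the gesture is being recognized until it is too late to adjust.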
How do Windows users learn/remember the Control language? What is the gulf of competence? How are the Alt hotkeys different from Control? How can pen-based systems guide users to learn the language? What’s different about a gesture language? What is “just-in-time chrome”?
Think of an app that you would write using gestures.
First list the actions a user might take.
Then think of the gestures they would use.
Finally, brainstorm how you would make this self-revealing.
Answers will be shared with the class.