Models of Human Performance CSCI 4800 Spring 2006 Kraemer.

Objectives  Introduce theory-based models for predicting human performance  Introduce competence-based models for assessing cognitive activity  Relate modelling to interactive systems design and evaluation

What are we trying to model?

Seven Stage Action Model [Norman, 1990]  [Diagram: a cycle beginning with the GOAL OF PERSON: form intention → develop plan → perform action → object in world changes → perceive state of object → interpret object → evaluate against goal]

Describing Problem Solving  Initial State  Goal State  All possible intervening states –Problem Space  Path Constraints  State Action Tree  Means-ends analysis

Problem Solving  A problem is something that doesn’t solve easily  A problem doesn’t solve easily because: – you don’t have the necessary knowledge or, – you have misrepresented part of the problem  If at first you don’t succeed, try something else  Tackle one part of the problem and other parts may fall into place

Conclusion  More than one solution  Solution limited by boundary conditions  Representation affects strategy  Active involvement and testing

Functional Fixedness  Strategy developed in one version of the problem  Strategy might be inefficient  Example: X ) XXXX (division in Roman numerals)  Convert the numerals or just 'see' 4

Data-driven perception Activation of neural structures of sensory system by pattern of stimulation from environment

Theory-driven perception Perception driven by memories and expectations about incoming information.

KEYPOINT  PERCEPTION involves a set of active processes that impose: STRUCTURE, STABILITY, and MEANING on the world

Visual Illusions Old Woman or Young girl? Rabbit or duck?

Interpretation Knowledge of what you are “looking at” can aid in interpretation JACKAN DJI LLW ENTU PTH EHILLTOFE TCHAPAILO FWATER Organisation of information is also useful

Story Grammars  Analogy with sentence grammars –Building blocks and rules for combining  Break story into propositions: “Margie was holding tightly to the string of her beautiful new balloon. Suddenly a gust of wind caught it, and carried it into a tree. It hit a branch, and burst. Margie cried and cried.”

Story Grammar  [Tree diagram: Story splits into Setting and Episode; Episode into Event and Reaction; Reaction into Internal response (e.g., [sadness]) and Overt response; Overt response into Change of state and Event. Numbers [1] to [6] map the nodes to the propositions of the Margie story.]

Inferences  Comprehension typically requires our active involvement in order to supply information that is not explicit in the text 1. Mary heard the ice-cream van coming 2. She remembered her pocket money 3. She rushed into the house.

Inference and Recall  Thorndyke (1976): recall of sentences from ‘Mary’ story –85% correct sentences –58% correct inferences (sentences not presented) –6% incorrect inferences

Mental Models  Van Dijk and Kintsch (1983) –Text processed to extract propositions, which are held in working memory; –When sufficient propositions in WM, then linking performed; –Relevance of propositions to linking proportional to recall; –Linking reveals ‘gist’

Semantic Networks (Collins & Quillian, 1969)  ANIMAL: has skin, can move, eats, breathes  BIRD is an ANIMAL: can fly, has wings, has feathers  FISH is an ANIMAL: has fins, can swim, has gills  CANARY is a BIRD: is yellow, can sing

Levels and Reaction Time (Collins & Quillian, 1969)  [Graph: mean reaction time (s) against level of sentence, for property and category statements]  Category: A canary is a canary / is a bird / is an animal; false: A canary is a fish  Property: A canary can sing / can fly / has skin; false: A canary has gills

Canaries  Different times to verify the statements: –A canary is a bird –A canary can fly –A canary can sing  Time proportional to movement through network

Scripts, Schema and Frames  Schema = chunks of knowledge –Slots for information: fixed, default, optional  Scripts = action sequences –Generalised event schema (Nelson, 1986)  Frames = knowledge about the properties of things

Mental Models  Partial  Procedures, Functions or System?  Memory or Reconstruction?

Concepts  How do you know a chair is a chair? A chair has four legs…does it? A chair has a seat…does it?

Prototypes, Typical Features, and Exemplars  Prototype  ROSCH (1973): people do not use feature sets, but imagine a PROTOTYPE for an object  Typical Features  ROSCH & MERVIS (1975): people use a list of features, weighted in terms of CUE VALIDITY  Exemplars  SMITH & MEDIN (1981): people use an EXAMPLE to imagine an object

Representing Concepts  BARSALOU (1983) –TAXONOMIC  Categories that are well known and can be recalled consistently and reliably –E.g., Fruit, Furniture, Animals  Used to generate overall representation of the world –AD HOC  Categories that are invented for specific purpose –E.g., How to make friends, Moving house  Used for goal-directed activity within specific event frames

Long Term Memory  Procedural –Knowing how  Declarative –Knowing that  Episodic vs. Semantic –Personal events –Language and knowledge of world

Working Memory  Limited Capacity  7 ± 2 items (Miller, 1956)  3 chunks (Broadbent, 1975)  Modality dependent capacity  Strategies for coping with limitation  Chunking  Interference  Activation of Long-term memory

Baddeley’s (1986) Model of Working Memory  Central executive  Phonological loop: articulatory control process + phonological store (auditory word presentation feeds the store directly; visual word presentation enters via the articulatory control process)  Visuo-spatial components: visual cache + inner scribe

Slave Systems  Articulatory loop –Memory Activation –Rehearsal capacity  Word length effect and Rehearsal speed  Visual cache –Visual patterns –Complexity of pattern, number of elements etc  Inner scribe –Sequences of movement –Complexity of movement

Typing  Eye-hand span related to expertise  Expert = 9, novice = 1  Inter-key interval  Expert = 100ms  Strategy  Hunt & Peck vs. Touch typing  Keystroke  Novice = highly variable keystroke time  Novice = very slow on ‘unusual’ letters, e.g., X or Z

Salthouse (1986)  Input –Text converted to chunks  Parsing –Chunks decomposed to strings  Translation –Strings into characters and linked to movements  Execution –Key pressed

Rumelhart & Norman (1982)  Perceptual processes –Perceive text, generate word schema  Parsing –Compute codes for each letter  Keypress schemata –Activate schema for letter-keypress  Response activation –Press defined key through activation of appropriate hand / finger

Schematic of Rumelhart and Norman’s connectionist model of typing  [Diagram: a word node (“jazz”), activated from a visual or auditory stimulus, breaks the word into keypress nodes for its letters (j, a, z, z), which excite and inhibit one another; keypress nodes drive the response system, activating the thumb, index, middle, ring and little fingers of the left and right hands]

Automaticity  Norman and Shallice (1980)  Fully automatic processing controlled by SCHEMATA  Partially automatic processing controlled by either Contention Scheduling or the Supervisory Attentional System (SAS)

Supervisory Attentional System Model Perceptual System Supervisory Attentional System Effector System Contention scheduling Trigger database Control schema

Contention Scheduling  Gear changing when driving involves many routine activities but is performed ‘automatically’ – without conscious awareness  When routines clash, relative importance is used to determine which to perform – Contention Scheduling  e.g., right foot on brake or clutch

SAS activation  Driving on roundabouts in France –Inhibit ‘look right’; Activate ‘look left’ –SAS to over-ride habitual actions  SAS active when:  Danger, Choice of response, Novelty etc.

Attentional Slips and Lapses  Habitual actions become automatic  SAS inhibits habit  Perseveration  When SAS does not inhibit and habit proceeds  Distraction  Irrelevant objects attract attention  Utilisation behaviour: patients with frontal lobe damage will reach for an object close to hand even when told not to

Performance Operating Characteristics  Resource-dependent trade-off between performance levels on two tasks  Task A and Task B performed several times, with instructions to allocate more effort to one task or the other

Task Difficulty  Data limited processes  Performance related to quality of data and will not improve with more resource  Resource limited processes  Performance related to amount of resource invested in task and will improve with more resource

POC  [Graphs: performance (P) on Task A plotted against Task B as cost is traded between them; a data-limited process gives a flat function, a resource-limited process gives a sloped trade-off up to the maximum (M)]

Why Model Performance?  Building models can help develop theory –Models make assumptions explicit –Models force explanation  Surrogate user: –Define ‘benchmarks’ –Evaluate conceptual designs –Make design assumptions explicit  Rationale for design decisions

Why Model Performance?  Human-computer interaction as Applied Science –Theory from cognitive sciences used as basis for design –General principles of perceptual, motor and cognitive activity –Development and testing of theory through models

Types of Model in HCI (Whitefield, 1987)

Model held by:  System  Program  User  Researcher  Designer
Program           X
User              X       X
Researcher        X       X       X       X
Designer          X       X       X

Task Models  Researcher’s Model of User, in terms of tasks  Describe typical activities  Reduce activities to generic sequences  Provide basis for design

Pros and Cons of Modelling  PROS –Consistent description through (semi) formal representations –Set of ‘typical’ examples –Allows prediction / description of performance  CONS –Selective (some things don’t fit into models) –Assumption of invariability –Misses creative, flexible, non-standard activity

Generic Model Process?  Define system: {goals, activity, tasks, entities, parameters}  Abstract to semantic level  Define syntax / representation  Define interaction  Check for consistency and completeness  Predict / describe performance  Evaluate results  Modify model

Device and Task Models

Device Models  Buxton’s 3-state device model State 0 State 1 State 2

Application  [State diagram: State 0 (out of range) ↔ State 1 (select) via pen on / pen off; State 1 ↔ State 2 (drag) via button down / button up]

Different pointing devices

Device        State 0  State 1  State 2
Touchscreen              X
Pen             X        X        X
Joystick                 X        X
Mouse                    X        X

Conclusions  Models abstract aspects of interaction –User, task, system  Models play a variety of roles in design

Hierarchical Task Analysis  Activity assumed to consist of TASKS performed in pursuit of GOALS  Goals can be broken into SUBGOALS, which can be broken into tasks  Hierarchy (Tree) description

Hierarchical Task Description

The “Analysis” comes from plans  PLANS = conditions for combining tasks  Fixed Sequence –P0: 1 > 2 > exit  Contingent Fixed Sequence –P1: 1 > when state X achieved > 2 > exit –P1.1: 1.1 > 1.2 > wait for X time > 1.3 > exit  Decision –P2: 1 > 2 > If condition X then 3, elseif condition Y then 4 > 5 > exit
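A fixed-sequence goal hierarchy like those above can be sketched as data plus a depth-first expansion; the goal and task names here are invented for illustration (loosely echoing the ATM example later in the deck):

```python
# Hedged sketch: an HTA with fixed-sequence plans, expanded to leaf tasks.
HTA = {
    "0: obtain cash": ["1: access ATM", "2: select amount", "3: retrieve cash"],
    "1: access ATM": ["1.1: insert card", "1.2: enter PIN"],
}

def expand(goal, hierarchy):
    """Expand a goal into its leaf tasks, depth-first (fixed-sequence plans)."""
    subtasks = hierarchy.get(goal)
    if subtasks is None:
        return [goal]  # a leaf task: no further decomposition
    leaves = []
    for sub in subtasks:
        leaves.extend(expand(sub, hierarchy))
    return leaves

assert expand("0: obtain cash", HTA) == [
    "1.1: insert card", "1.2: enter PIN",
    "2: select amount", "3: retrieve cash"]
```

Contingent and decision plans would add conditions to the expansion, rather than always taking every subtask in order.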

Reporting  HTA can be constructed using Post-it notes on a large space (this makes it easy to edit and also encourages participation)  HTA can be difficult to present in a succinct printed form (it might be useful to take a photograph of the Post-it notes)  Typically a tabular format is used, with columns: Task number | Task | Plan | Comments

Redesigning the Interface to a medical imaging system

Original Design  Menu driven  Menus accessed by first letter of command  Menus arranged in hierarchy

Problems with original design  Lack of consistency  D = DOS commands; Delete; Data file; Date  Hidden hierarchy  Only ‘experts’ could use  Inappropriate defaults  Setting up a scan required ‘correction’ of default settings three or four times

Initial design activity  Observation of non-technology work  Cytogeneticists inspecting chromosomes  Developed model of task  Hierarchical task analysis  Developed design principles, e.g.,  Cytogeneticists as ‘picture people’  Task flow  Task mapping

Task Model  Work flows between specific activities  [Diagram: Administration (patient details), Set up, Microscope (cell sample), Analysis, Reporting]

First “prototype”  Layout related to task model  ‘Sketch’ very simple  Annotations show modifications

Second prototype  Refined layout  ‘Prototype’ using HyperCard  Initial user trials compared this with a mock-up of the original design

Final Product  Picture taken from company brochure  Initial concepts retained  Further modifications possible

Predicting Transaction Time

Predicting Performance Time  Time and error are ‘standard’ measures of human performance  Predict transaction time for comparative evaluation  Approximations of human performance

Unit Times  From task model, define sequence of tasks to achieve a specific goal  For each task, define ‘average time’

Quick Exercise  Draw two parallel lines about 4cm apart and about 10cm long  Draw, as quickly as possible, a zig-zag line for 5 seconds  Count the number of lines and the number of times you have crossed the parallel lines

Predicted result  About 70 lines  About 20 cross-overs

Why this prediction?  Movement speed limited by biomechanical constraints –Motor subsystem cycle: 70ms –So: 5000 / 70 ≈ 71 oscillations  A correction needs a full cognitive (70ms), perceptual (100ms) and motor (70ms) loop –So a correction takes 70 + 100 + 70 = 240ms –5000 / 240 ≈ 21
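The slide's arithmetic can be checked directly, using the cycle times given above:

```python
# Checking the zig-zag prediction: lines are limited by the motor cycle,
# cross-over corrections by a full perceive-decide-act loop.
motor_ms = 70
perceptual_ms = 100
cognitive_ms = 70
trial_ms = 5000

oscillations = round(trial_ms / motor_ms)            # one line per motor cycle
correction_ms = cognitive_ms + perceptual_ms + motor_ms
corrections = round(trial_ms / correction_ms)        # one cross-over per loop

assert oscillations == 71      # "about 70 lines"
assert correction_ms == 240
assert corrections == 21       # "about 20 cross-overs"
```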

Fitts’ Law  Paul Fitts 1954  Information-theoretic account of simple movements  Define the number of ‘bits’ processed in performing a given task

Fitts’ Tapping Task  [Diagram: reciprocal tapping between two targets of width W separated by movement amplitude A]

Fitts’ Law

Movement Time = a + b log2(2A/W)

Plot of hits against log2(2A/W), for three conditions:
1. A = 62, W = 15 → log2(2A/W) ≈ 3.05
2. A = 112, W = 7 → log2(2A/W) = 5.00
3. A = 112, W = 21 → log2(2A/W) ≈ 3.42

Fitted constants: a = 10, b = 27.5
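A minimal sketch of the law with the slide's fitted constants; `movement_time` and `index_of_difficulty` are hypothetical helper names:

```python
import math

# Fitts' Law with the slide's fitted constants a = 10, b = 27.5.
def index_of_difficulty(A, W):
    """Bits of information in a movement of amplitude A to a target of width W."""
    return math.log2(2 * A / W)

def movement_time(A, W, a=10.0, b=27.5):
    return a + b * index_of_difficulty(A, W)

# The three conditions from the tapping-task plot:
assert round(index_of_difficulty(62, 15), 2) == 3.05
assert round(index_of_difficulty(112, 7), 2) == 5.0
assert round(index_of_difficulty(112, 21), 2) == 3.42
```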

Alternate Versions  MT = a + b log2(2A/W)  MT = b log2(A/W + 0.5)  MT = a + b log2(A/W + 1)

a and b are “constants”  Data derived from plot  Data as predictors?  [Table: fitted a and b values for Mouse and Trackball]

Potential Problems  Data-fitter rather than ‘law’  ‘Generic value’: a+b = 100  Variable predictive power for devices? –From ‘mouse data’ we get: (assume A = 5 and W = 10) log 2 (2A/W)  ms, 150.5ms and 34.9ms (!!)

Hick – Hyman Law  William Hick 1952  Selection time, from a set of items, is proportional to the number of items T = k log 2 (n+1), Where k = a constant (intercept+slope)  Approximately 150ms added to T for each item
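A short sketch of the law; the value k = 0.15 s is an illustrative constant chosen to match the "~150ms" figure, not a fitted one:

```python
import math

def choice_time(n, k=0.15):
    """Hick-Hyman: time (s) to choose among n equally likely items.
    k is a fitted constant; 0.15 s here is an illustrative value."""
    return k * math.log2(n + 1)

# Doubling the number of alternatives adds a roughly constant increment:
assert round(choice_time(1), 3) == 0.15
assert round(choice_time(3), 3) == 0.3
assert round(choice_time(7), 3) == 0.45
```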

Example of Hick-Hyman Law  [Graph: search time (s) against number of alternatives, for words and for numbers]  Landauer and Nachbar, 1985

Keystroke Level Models  Developed from 1950s ergonomics  Human information processor as linear executor of specified tasks  Unit-tasks have defined times  Prediction = summing of times for sequence of unit-tasks

Building a KLM  Develop task model  Define task sequence  Assign unit-times to tasks  Sum times

Example: cut and paste Task Model: Select line – Cut – Select insertion point – paste Task One: select line move cursor to start of line press (hold) button drag cursor to end of line release button

Times for Movement  H: homing, e.g., hand from keyboard to mouse –Range: 214ms – 400ms –Average: 320ms  P: pointing, e.g., move cursor using mouse –Range: defined by Fitts’ Law –Average: 1100ms  B: button pressing, e.g., hitting key on keyboard –Range: 80ms – 700ms –Average: 200ms

Times for Cognition / Perception  M: mental operation –Range: 990ms – 1760ms –Average: 1350ms  A: switch attention between parts of display –Average: 320ms  R: recognition of items –Range: 314ms – 1800ms –Average: 340ms  Perceive change: –Range: 50 – 300ms –Average: 100ms

Rules for Summing Times  How to handle multiple Mental units: –M before Ks in new argument strings –M at start of ‘cognitive unit’ –M before Ps that select commands –Delete M if K redundant terminator
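Summing unit times for Task One of the cut-and-paste example can be sketched as below, using the average times listed above (in seconds); modelling the drag as a second P, and the single leading M, are assumptions:

```python
# KLM sketch of "select line" from the cut-and-paste task.
# Average unit times from the slides (seconds).
UNIT = {"M": 1.35, "H": 0.32, "P": 1.1, "B": 0.2}

def klm_time(sequence):
    """Predicted transaction time = sum of unit times."""
    return sum(UNIT[op] for op in sequence)

# Select line: M (decide), P (point to line start), B (press button),
# P (drag to line end, treated as a pointing act), B (release button).
select_line = ["M", "P", "B", "P", "B"]
assert round(klm_time(select_line), 2) == 3.95
```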

Alternative  What if we use ‘accelerated scrolling’ on the cursor keys? –Press key and read scrolling numbers –Release key at or near number –Select correct number  Operator sequence: M H Pe P P’ P

Critical Path Models  Used in project management  Map dependencies between tasks in a project –Task X is dependent on task Y, if it is necessary to wait until the end of task Y until task X can commence

Procedure  Construct task model, taking into account dependencies  Assign times to tasks  Calculate critical path and transaction time –Run forward pass –Run backward pass

Example  Operator sequence: M H R P P’ P  Unit times (s): M = 1.35, H = 0.32, P = 0.2, P’ = 0.2, R = 0.34 (recognition, average)  [Network diagram: R runs in parallel with the pointing operations]

Critical Path Table

Activity  Duration  EST  LST  EFT  LFT  Float
M
H
P
R
P’
P

Comparison  ‘Summing of times’ result: –2.61s  ‘Critical path’ result: –2.47s  R allowed to ‘float’
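A minimal forward-pass/backward-pass sketch; the dependency graph is hypothetical (it places R in parallel with the first pointing operation) and so does not reproduce the slide's exact figures:

```python
# Critical-path sketch: forward pass (earliest times), backward pass
# (latest times), then float = LST - EST. Tasks must be listed in
# dependency order; the graph below is an invented example.
def critical_path(durations, deps):
    est, eft = {}, {}
    for task in durations:  # forward pass
        est[task] = max((eft[d] for d in deps.get(task, [])), default=0.0)
        eft[task] = est[task] + durations[task]
    total = max(eft.values())
    lft, lst = {}, {}
    for task in reversed(list(durations)):  # backward pass
        succs = [t for t in durations if task in deps.get(t, [])]
        lft[task] = min((lst[s] for s in succs), default=total)
        lst[task] = lft[task] - durations[task]
    floats = {t: round(lst[t] - est[t], 2) for t in durations}
    return total, floats

durations = {"M": 1.35, "H": 0.32, "R": 0.34, "P1": 0.2, "P'": 0.2, "P2": 0.2}
deps = {"H": ["M"], "R": ["H"], "P1": ["H"],   # R runs in parallel with P1
        "P'": ["P1"], "P2": ["P'", "R"]}
total, floats = critical_path(durations, deps)
assert round(total, 2) == 2.27
assert floats["R"] > 0   # R floats: it is not on the critical path
```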

Other time-based models  Task-network models –MicroSAINT –Unit-times and probability of transition  [Network fragment: Prompt (50ms) → Speak word (300 ± 9 ms) → System response (1000 ± 30 ms) with probability p; loop back to the prompt with probability 1 − p]
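A Monte-Carlo sketch of such a task network; the loop structure and the value of p are illustrative assumptions:

```python
import random

# Task-network simulation: speak a word; with probability p the system
# responds, otherwise return to the prompt and try again.
def transaction_ms(p, rng):
    t = 0.0
    while True:
        t += 50                         # prompt
        t += rng.gauss(300, 9)          # speak word (mean 300 ms, sd 9 ms)
        if rng.random() < p:            # recognised?
            t += rng.gauss(1000, 30)    # system response
            return t
        # otherwise loop back to the prompt (probability 1 - p)

rng = random.Random(42)
times = [transaction_ms(p=0.9, rng=rng) for _ in range(2000)]
mean = sum(times) / len(times)
# Expected attempts = 1/0.9, so the mean sits a little above
# 50 + 300 + 1000 = 1350 ms.
assert 1350 < mean < 1450
```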

Models of Competence

Performance vs. Competence  Performance Models –Make statements and predictions about the time, effort or likelihood of error when performing specific tasks;  Competence Models –Make statements about what a given user knows and how this knowledge might be organised.

Sequence vs. Process vs. Grammar  Sequence Models –Define activity simply in terms of sequences of operations that can be quantified  Process Models –Simple model of mental activity but define the steps needed to perform tasks  Grammatical Models –Model required knowledge in terms of ‘sentences’

Process Models  Production systems  GOMS

Production Systems  Rules = (Procedural) Knowledge  Working memory = state of the world  Control strategies = way of applying knowledge

Production Systems Architecture of a production system: Rule base Working Memory Interpreter

The Problem of Control  Rules are useless without a useful way to apply them  Need a consistent, reliable, useful way to control the way rules are applied  Different architectures / systems use different control strategies to produce different results

Forward Chaining

Rules: If A then B; If A and B then not C; If not C then GOAL

Working memory: {A}
→ “If A then B” fires: {A, B}
→ “If A and B then not C” fires: {A, B, not C}
→ “If not C then GOAL” fires: GOAL reached

Backward Chaining

Rules: If A then B; If A and B then not C; If not C then GOAL

Need GOAL → “If not C then GOAL”: need not C
Need not C → “If A and B then not C”: need A and B
Need B → “If A then B”: need A
A is in working memory, so the goal chain succeeds

Production Systems  A simple metaphor Docks Ships

Production Systems  Ships must fit the correct dock  When one ship is docked, another can be launched

Production Systems

Production Rules

IF condition THEN action

e.g.,
IF ship is docked AND free-floating ships THEN launch ship
IF dock is free AND ship matches THEN dock ship

The Parsimonious Production Systems Rule Notation  On any cycle, any rule whose conditions are currently satisfied can fire  Rules must be written so that a single rule will not fire repeatedly  Only one rule will fire on a cycle  All procedural knowledge is explicit in these rules rather than being implicit in the interpreter
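An interpreter respecting these constraints can be sketched for the rules of the chaining example; the rule names are invented:

```python
# Minimal forward-chaining production system: working memory is a set of
# facts; at most one rule fires per cycle, and a rule that has already
# added its action cannot fire again.
RULES = [
    ("R1", {"A"}, "B"),
    ("R2", {"A", "B"}, "not C"),
    ("R3", {"not C"}, "GOAL"),
]

def run(wm, rules, max_cycles=10):
    trace = []
    for _ in range(max_cycles):
        for name, conditions, action in rules:
            if conditions <= wm and action not in wm:  # fires only once
                wm.add(action)
                trace.append(name)
                break              # one rule per cycle
        else:
            break                  # quiescence: no rule can fire
    return wm, trace

wm, trace = run({"A"}, RULES)
assert "GOAL" in wm
assert trace == ["R1", "R2", "R3"]
```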

Worked Example: The Tower of Hanoi  [Figure: discs 1–5 stacked on Peg A; Pegs B and C empty]

Possible Steps 1 Disc 1 from a to c Disc 2 from a to b Disc 1 from c to a Disc 3 from a to c Disc 2 from b to c Disc 1 from a to c

Worked Example: The Tower of Hanoi  [Figure: discs 4 and 5 on Peg A; discs 1–3 on Peg C]

Possible Steps 2 Disc 4 from a to b Disc 1 from c to b Disc 2 from c to a Disc 1 from b to a Disc 2 from a to b Disc 3 from a to b

Worked Example: The Tower of Hanoi  [Figure: disc 5 on Peg A; discs 1–4 on Peg B]

Possible Steps 3 Disc 5 from a to c Disc 1 from b to a Disc 2 from b to c Disc 1 from a to c Disc 3 from b to a Disc 1 from c to b Disc 2 from c to a Disc 4 from b to c Disc 1 from a to c Disc 2 from a to b Disc 1 from c to b Disc 3 from a to c Disc 1 from b to a Disc 2 from b to c Disc 1 from a to c

Simon’s (1975) goal-recursive logic

To get the 5-tower to Peg C, get the 4-tower to Peg B, then move the 5-disc to Peg C, then move the 4-tower to Peg C
To get the 4-tower to Peg B, get the 3-tower to Peg C, then move the 4-disc to Peg B, then move the 3-tower to Peg B
To get the 3-tower to Peg C, get the 2-tower to Peg B, then move the 3-disc to Peg C, then move the 2-tower to Peg C
To get the 2-tower to Peg B, move the 1-disc to Peg C, then move the 2-disc to Peg B, then move the 1-disc to Peg B
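The goal recursion corresponds to the standard recursive solution, sketched below:

```python
# Simon's goal recursion as code: to move an n-tower, move the (n-1)-tower
# to the spare peg, move disc n, then move the (n-1)-tower on top of it.
def hanoi(n, source, target, spare, moves=None):
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)
    moves.append(f"Disc {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source, moves)
    return moves

moves = hanoi(5, "A", "C", "B")
assert len(moves) == 31                    # 2^5 - 1 moves
assert moves[0] == "Disc 1 from A to C"    # matches Possible Steps 1
assert moves[15] == "Disc 5 from A to C"   # the 5-disc crosses mid-way
```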

Production Rule 1 SUBGOAL_DISCS

IF the goal is to achieve a particular configuration of discs
And Di is on Px but should go to Py in the configuration
And Di is the largest disc out of place
And Dj is on Py
And Dj is smaller than Di
And Pz is clear OR has a disc larger than Dj
THEN set a subgoal to move the Dj tower to Pz and Di to Py

Production Rule 2 SUBGOAL_MOVE_DISC

IF the goal is to achieve a particular configuration of discs
And Di is on Px but should go to Py in the configuration
And Di is the largest disc out of place
And Py is clear
THEN move Di to Py

Goals, Operators, Methods, Selection (GOMS) – Card, Moran and Newell, 1983  Human activity modelled by Model Human Processor  Activity defined by GOALS  Goals held in ‘Stack’  Goals ‘pushed’ onto stack  Goals ‘popped’ from stack

Goals  Symbolic structures to define desired state of affairs and methods to achieve this state of affairs GOAL: EDIT-MANUSCRIPTtop level goal GOAL: EDIT-UNIT-TASKspecific sub goal GOAL: ACQUIRE UNIT-TASKget next step GOAL: EXECUTE UNIT-TASK do next step GOAL: LOCATION-LINEspecific step

Operators  Elementary perceptual, motor or cognitive acts needed to achieve subgoals Get-next-lineUse-cursor-arrow-methodUse-mouse-method

Methods  Descriptions of procedures for achieving goals  Conditional upon contents of working memory and state of task GOAL: ACQUIRE-UNIT-TASK GET-NEXT-PAGEif at end of manuscript GET-NEXT-TASK

Selection  Choose between competing Methods, if more than one GOAL:EXECUTE-UNIT-TASKGOAL:LOCATE-LINE [select:if hands on keyboard and less than 5 lines to move USE CURSOR KEYS else USE MOUSE]

Example  Withdraw cash from ATM –Construct task model –Define production rules

Task Model Method for goal: Obtain cash from ATM Step1: access ATM Step2: select ‘cash’ option Step3: indicate amount Step4: retrieve cash and card Step5: end task

Production Rules ((GOAL: USE ATM TO OBTAIN CASH) ADD-UNIT-TASK (access ATM) ADD-WM-UNIT-TASK (access ATM) ADD-TASK-STEP (insert card in slot) SEND-TO-MOTOR(place card in slot) SEND-TO-MOTOR (eyes to slot) SEND-TO-PERCEPTUAL (check card in) ADD (WM performing card insertion) ADD-TASK-STEP (check card insertion) DELETE-UNIT-TASK (access ATM) ADD-UNIT-TASK (enter PIN)

Problems with GOMS  Assumes ‘error-free’ performance –Even experts make mistakes  MHP grossly simplifies human information processing  Producing a task model of non-existent products is difficult

Task Action Grammar  GOMS assumes ‘expert’ knows operators and methods for tasks  TAG assumes ‘expert’ knows simple tasks, i.e., tasks that can be performed without problem-solving

TAG and competence  Competence –Defines what an ‘ideal’ user would know  TAG relies on ‘world knowledge’ –up vs down –left vs right –forward vs backward

Task-action Grammar  Grammar relates simple tasks to actions  Generic rule schema covering combinations of simple tasks

TAG  A ‘grammar’ –maps  Simple tasks –Onto  Actions –To form  an interaction language –To investigate  consistency

Consistency  Syntactic: use of expressions  Lexical: use of symbols  Semantic-syntactic alignment: order of terms  Semantic: principle of completeness

Procedure –Step 1: Write out commands and their structures –Step 2: Determine if commands have a consistent structure –Step 3: Place command items into variable/feature relationship –Step 4: Generalise commands by separating into task features, simple tasks, task-action rule schema –Step 5: Expand parts of task into primitives –Step 6: Check to ensure all names are unique

Example  Setting up a recording on a video- cassette recorder (VCR)  Assume that all controls via front panel and that the user can only use the up and down arrows

Feature list [for a VCR]  Property: Date, Channel, Start, End  Value: number  Frequency: Daily, Weekly  Record: on, off

Simple tasks
SetDate [Property = Date, Value = US#, Frequency = Daily]
SetDate [Property = Date, Value = US#, Frequency = Weekly]
SetProg [Property = Prog, Value = US#]
SetStart [Property = Start, Value = US#, Record = on]
SetEnd [Property = End, Value = US#, Record = off]

Rule Schema
1. Task [Property = US#, Value] → SetValue [Value]
2. Task [Property = Date, Value, Frequency = US#] → SetValue [Value] + press “ | ” until Frequency = US#
3. Task [Property = Start, Value] → SetValue [Value] + press “Rec”
4. SetValue [Value = US#] → press “ | ” until Value = US#
5. SetValue [Value = US#] → use “ | ” until Value = US#

Architectures for Cognition

Why Cognitive Architecture?  Computer architectures: –Specify components and their connections –Define functions and processes  Cognitive Architectures could be seen as the logical conclusion of the ‘human-brain-as-computer’ hypothesis

Why do this?  Philosophy: Provide a unified understanding of the mind  Psychology: Account for experimental data  Education: Provide cognitive models for intelligent tutoring systems and other learning environments  Human Computer Interaction: Evaluate artifacts and help in their design  Computer Generated Forces: Provide cognitive agents to inhabit training environments and games  Neuroscience: Provide a framework for interpreting data from brain imaging

General Requirements  Integration of cognition, perception, and action  Robust behavior in the face of error, the unexpected, and the unknown  Ability to run in real time  Ability to Learn  Prediction of human behavior and performance

Architectures  Model Human Processor (MHP) –Card, Moran and Newell (1983)  ACT-R –Anderson (1993)  EPIC –Meyer and Kieras (1997)  SOAR –Laird, Rosenbloom and Newell (1987)

Model Human Processor  Three interacting subsystems:  Perceptual: auditory image store, visual image store  Cognitive: working memory, long-term memory  Motor

Parameters of MHP

                      Capacity            Decay           Cycle
Long-term memory      unlimited           none
Working memory        2.5 – 9 chunks      5 – 226 s
Auditory image store  4.4 – 6.2 letters   900 – 3500 ms
Visual image store    7 – 17 letters      90 – 1000 ms
Cognitive processor                                       25 – 170 ms
Motor processor                                           30 – 100 ms
Perceptual processor                                      50 – 200 ms

Average data for MHP  Long-term memory: effectively unlimited  Working memory: 3 – 7 chunks, 7s  Auditory image store: 5 letters, 1500ms  Visual image store: 17 letters, 200ms  Cognitive processor: 70ms  Perceptual processor: 100ms  Motor processor: 70ms

Conclusions  Simple description of cognition  Uses ‘standard times’ for prediction  Uses production rules for defining and combining tasks (with GOMS formalism)

Adaptive Control of Thought, Rational (ACT-R)

Adaptive Control of Thought, Rational (ACT-R)
- ACT-R: a symbolic aspect realised over a subsymbolic mechanism
- Symbolic aspect in two parts:
  – Production memory
  – Declarative (symbolic) memory
- Theory of rational analysis

Theory of Rational Analysis
- Evidence-based assumptions about the environment (probabilities)
- Deriving optimal strategies (Bayesian)
- Assuming that optimal strategies reflect human cognition (either what it actually does or what it probably ought to do)
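The Bayesian step can be made concrete with a minimal sketch; the probabilities below are hypothetical, chosen only to illustrate the update:

```python
# Sketch of the Bayesian step in rational analysis: update the probability
# that a memory chunk will be needed, given a contextual cue. All numbers
# are hypothetical illustrations, not values from the slide.

def posterior(prior, p_cue_given_needed, p_cue_given_not_needed):
    """Bayes' rule: P(needed | cue)."""
    num = prior * p_cue_given_needed
    den = num + (1 - prior) * p_cue_given_not_needed
    return num / den

p = posterior(prior=0.1, p_cue_given_needed=0.8, p_cue_given_not_needed=0.2)
print(round(p, 3))
```

An 'optimal' strategy in the rational-analysis sense would then allocate retrieval effort in proportion to such posteriors.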

Notions of Memory
- Procedural
  – Knowing how
  – Described in ACT by production rules
- Declarative
  – Knowing that
  – Described in ACT by 'chunks'
- Goal stack
  – A sort of 'working memory'
  – Holds chunks (goals)
  – Top goal pushed (like GOMS)
  – Writeable

Production Rules
- Knowing how to do X
- Production rule = a set of conditions and an action:

  IF it is raining
  AND you wish to go out
  THEN pick up your umbrella

(Very simple) ACT
- Network of propositions
- Production rules selected via pattern matching; production rules coordinate retrieval of chunks from declarative memory and link to the environment
- If information in working memory matches a production rule's conditions, that production rule fires
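The match-fire cycle described above can be sketched as a toy production system; the rule set and working-memory elements are illustrative, not taken from ACT itself:

```python
# Toy production system: if the contents of working memory match a rule's
# conditions, the rule fires and its action adds a new element to working
# memory. One rule fires per cycle; the run stops when nothing matches.

rules = [
    {"if": {"raining", "want-to-go-out"}, "then": "pick-up-umbrella"},
    {"if": {"pick-up-umbrella"}, "then": "go-outside"},
]

def run(working_memory, rules, max_cycles=10):
    wm = set(working_memory)
    for _ in range(max_cycles):
        fired = False
        for rule in rules:
            if rule["if"] <= wm and rule["then"] not in wm:
                wm.add(rule["then"])   # fire: the action updates working memory
                fired = True
                break                  # one rule per recognise-act cycle
        if not fired:
            break                      # no rule matches: quiescence
    return wm

result = run({"raining", "want-to-go-out"}, rules)
print(result)
```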

ACT* architecture (diagram): working memory exchanges with declarative memory (storage / retrieval) and with procedural memory (match / execution), and with the outside world (encoding / performance).

Knowledge Representation
Example (diagram): the chunk Addition-Fact with slots addend1 = six, addend2 = eight, sum = fourteen (units = 4, tens = 1, hundreds = 0); during column-wise addition the buffers hold:
– Goal buffer: add numbers in right-most column
– Visual buffer: 6, 8
– Retrieval buffer: 14

Symbolic / Subsymbolic levels
- Symbolic level
  – Information as chunks in declarative memory, represented as propositions
  – Rules as productions in procedural memory
- Subsymbolic level
  – Chunks carry parameters used to determine the probability that the chunk will be needed
  – Base-level activation (relevance)
  – Context activation (association strengths)
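Base-level activation is commonly expressed in ACT-R as B_i = ln(Σ_j t_j^(−d)), where t_j is the time since the j-th past use of the chunk and d is a decay parameter (conventionally 0.5). A sketch, with illustrative practice histories:

```python
import math

# Sketch of ACT-R base-level activation: a chunk's activation reflects how
# often and how recently it has been used. Times since use are illustrative.

def base_level_activation(times_since_use, d=0.5):
    """B_i = ln(sum_j t_j**-d), with decay rate d (conventionally 0.5)."""
    return math.log(sum(t ** -d for t in times_since_use))

recent = base_level_activation([1.0, 10.0, 100.0])  # used recently and often
stale = base_level_activation([500.0, 1000.0])      # used only long ago
print(recent > stale)
```

Higher activation makes retrieval more likely and faster, which is how the subsymbolic level shapes symbolic behaviour.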

Conflict resolution
- Order candidate production rules by preference
- Select the top rule in the list
- Preference defined by:
  – Probability that the rule will lead to the goal
  – Time associated with the rule
  – Likely cost of reaching the goal via a sequence involving this rule
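These preference criteria echo ACT-R's classic expected-utility formulation, E = P·G − C, where P is the probability the rule leads to the goal, G the value of the goal, and C the expected cost (e.g. time). A sketch with illustrative numbers:

```python
# Sketch of ACT-R-style conflict resolution: each candidate rule gets an
# expected utility E = P*G - C and the highest-utility rule is selected.
# The candidate rules and their parameters are illustrative.

def expected_utility(p_success, goal_value, cost):
    return p_success * goal_value - cost

candidates = {
    "careful-method": expected_utility(0.95, goal_value=20.0, cost=5.0),
    "fast-guess": expected_utility(0.50, goal_value=20.0, cost=1.0),
}
chosen = max(candidates, key=candidates.get)
print(chosen)
```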

Example
Activity: find a target and then use the mouse to select it.

Hunt_Feature:
IF goal = find target with feature F
AND there is object X on screen
THEN move attention to object X

Found_Target:
IF goal = find target with feature F
AND target with F in location L
THEN move mouse to L and click

Example
- Model reaction time to target:
  – Assume time to switch attention increases linearly with each new position
  – Assume probability of feature X in location y = 0.53
  – Assume attention switch = 185 ms
- Therefore, predicted reaction time = 185 × 0.53 ≈ 98 ms per position
- Empirical data give an RT of 103 ms per position

Example
- Assume target in a field of distractors:
  – P = 0.42
  – Therefore, 185 × 0.42 ≈ 78 ms per position
- Empirical data = 80 ms per position
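The arithmetic of the two examples above can be checked directly:

```python
# Reproducing the slide's arithmetic: predicted reaction time per position
# = attention-switch time * probability the feature is at that position.

SWITCH_MS = 185

def rt_per_position(p_feature):
    return SWITCH_MS * p_feature

print(round(rt_per_position(0.53)))  # ~98 ms (empirical: ~103 ms)
print(round(rt_per_position(0.42)))  # ~78 ms (empirical: ~80 ms)
```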

Learning
- Symbolic level
  – Learning defined by adding new chunks and productions
- Subsymbolic level
  – Adjustment of parameters based on experience

Conclusions
- ACT uses a simple production system
- ACT provides some quantitative prediction of performance
- Rationality = optimal adaptation to the environment

Executive Process Interactive Control (EPIC)
ftp://ftp.eecs.umich.edu/people/kieras

Executive Process Interactive Control (EPIC)
- Focus on multiple-task performance
- Cognitive processor runs production rules and interacts with perceptual and motor processors

EPIC parameters
- Fixed:
  – Connections and mechanisms
  – Time parameters
  – Feature sets for motor processors
  – Task-specific production rules and perceptual encoding types
- Free:
  – Production rules for tasks
  – Unique perceptual and motor processors
  – Task instance set
  – Simulated task environment

EPIC (diagram): a task environment connects, via the display, to auditory, visual and tactile perceptual processors; these feed working memory, which the cognitive processor (production-rule interpreter, with production memory and long-term memory) reads in order to drive the auditory/visual attention and the speech and manual motor processors acting back on the environment.

Production Memory
- Perceptual processors are controlled by production rules
- Production rules are held in production memory
- The production-rule interpreter applies rules to perceptual processes

Working Memory
- Limited capacity (duration of about 4 s); holds current production rules
- Cognitive processor updates every 50 ms
- On each update, perceptual input, items from production memory, and the next action are held in working memory
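The 50 ms update cycle can be sketched as a simple simulation loop; the rules, inputs, and one-shot firing policy are illustrative assumptions, not EPIC's actual machinery:

```python
# Sketch of a 50 ms cognitive-processor cycle: on each update, new
# perceptual items enter working memory, and any production rule whose
# conditions are present fires (once) to emit a motor command.

CYCLE_MS = 50

def simulate(perceptual_stream, rules, n_cycles):
    wm, actions = set(), []
    for cycle in range(n_cycles):
        t = cycle * CYCLE_MS
        wm |= perceptual_stream.get(t, set())        # perceptual input arrives
        for cond, action in rules:
            already = any(a == action for _, a in actions)
            if cond <= wm and not already:
                actions.append((t + CYCLE_MS, action))  # motor command out
        # (decay and capacity limits are omitted in this sketch)
    return actions

rules = [({"tone", "task-enabled"}, "press-key")]
stream = {0: {"task-enabled"}, 100: {"tone"}}
result = simulate(stream, rules, n_cycles=4)
print(result)
```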

Resolving Conflict
- Production rules applied to executive tasks to handle resource conflict and scheduling
- Conflict dealt with in the production-rule specification:
  – Lockout
  – Interleaving
  – Strategic response deferment

Example (diagram): two concurrent choice tasks, each running stimulus → perceptual process → cognitive process (response selection, memory process) → response. An executive process coordinates them: move eye to S2; enable task 1 + task 2; wait for task 1 to complete; task 1 end; task 2 permission; trial end.

Conclusions
- Modular structure supports parallelism
- EPIC does not have a goal stack and does not assume sequential firing of goals
- Goals can be handled in parallel (provided there is no resource conflict)
- Does not support learning

States, Operators, And Reasoning (SOAR)

States, Operators, And Reasoning (SOAR)
- Successor to the General Problem Solver (Newell and Simon, 1960)
- SOAR seeks to apply operators to states within a problem space in order to achieve a goal
- SOAR assumes that the actor uses all available knowledge in problem-solving

Soar as a Unified Theory of Cognition
- Intelligence = problem solving + learning
- Cognition seen as search in problem spaces
- All knowledge is encoded as productions → a single type of knowledge
- All learning is done by chunking → a single type of learning

Young, R.M., Ritter, F. and Jones, G., "Online Psychological Soar Tutorial", available at: Frank.Ritter/pst/pst-tutorial.html

SOAR Activity
- Operators: transform a state via some action
- State: a representation of possible stages of progress in the problem
- Problem space: the states and operators that can be used to achieve a goal
- Goal: some desired situation

SOAR Activity
- Problem solving = applying an operator to a state in order to move through a problem space to reach a goal
- Impasse = an operator cannot be applied to a state, so it is not possible to move forward in the problem space; this becomes a new problem to be solved
- Soar can learn by storing solutions to past problems as chunks and applying them when it encounters the same problem again
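The operator/state/goal cycle above can be sketched as search over a toy problem space, with chunking modelled as caching solved problems for one-step re-use; the states and operators are illustrative, not Soar's representation:

```python
from collections import deque

# Sketch of SOAR-style activity: apply operators to move through a problem
# space from a start state to a goal state; store the solution path as a
# "chunk" so the same problem is solved in one step next time.

def solve(start, goal, operators, chunks):
    if (start, goal) in chunks:            # learned chunk: apply directly
        return chunks[(start, goal)]
    frontier = deque([(start, [])])        # breadth-first search of the space
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            chunks[(start, goal)] = path   # chunking: cache the solution
            return path
        for name, op in operators.items():
            nxt = op(state)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None                            # impasse: no operator applies

operators = {
    "inc": lambda s: s + 1 if s < 10 else None,
    "double": lambda s: s * 2 if s <= 5 else None,
}
chunks = {}
print(solve(2, 6, operators, chunks))
print((2, 6) in chunks)  # the solution is now stored for re-use
```

In Soar proper an impasse would spawn a subgoal rather than simply returning `None`; this sketch collapses that machinery into plain search plus caching.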

SOAR Architecture (diagram): production memory (pattern → action) and working memory (objects, preferences, conflict stack) are linked by the decision procedure and the working-memory manager; a chunking mechanism writes new productions back to production memory.

Explanation
- Working memory
  – Data for the current activity, organised into objects
- Production memory
  – Contains production rules
- Chunking mechanism
  – Collapses successful sequences of operators into chunks for re-use

3 levels in Soar
- Symbolic – the programming level
  – Rules programmed into Soar that match circumstances and perform specific actions
- Problem space – states and goals
  – The set of goals, states, operators, and context
- Knowledge – embodied in the rules
  – Knowledge of how to act on the problem/world, how to choose between operators, and any chunks learned from previous problem solving

How does it work?
- A problem is encoded as a current state and a desired state (goal)
- Operators are applied to move from one state to another
- There is success when the current state matches the desired state
- Operators are proposed by productions, with preferences biasing choices in specific circumstances
- Productions fire in parallel

Impasses
- If no operator is proposed, if there is a tie between operators, or if Soar does not know what to do with an operator, there is an impasse
- When there is an impasse, Soar sets a new goal (resolve the impasse) and creates a new state
- Impasses may be stacked
- When one impasse is solved, Soar pops up to the previous goal

Learning
- Learning occurs by chunking the conditions and actions of impasses that have been resolved
- Chunks can immediately be used in further problem-solving behaviour

The Switchyard video

Conclusions
- It may be too "unified"
  – Single learning mechanism
  – Single knowledge representation
  – Uniform problem state
- It does not take neuropsychological evidence into account (cf. ACT-R)
- There may be non-symbolic intelligence (e.g. neural networks) that cannot be abstracted to the symbolic level

Comparison of Architectures

           ACT-R               EPIC                            SOAR
Type       Hybrid              Symbolic                        Symbolic
Theory     Rational analysis   Embedded cognition              Problem solving
Basis      Cog. Psy.           HCI                             AI
LTM        Productions; facts  Productions                     Productions
WM         Goal stack          Working memory; sensory stores  Working memory
Learning   Yes                 No                              Yes

The Role of Models in Design

User Models in Design
- Benchmarking
- Human virtual machines
- Evaluation of concepts
- Comparison of concepts
- Analytical prototyping

Benchmarking
- What times can users be expected to take to perform a task?
  – Training criteria
  – Evaluation criteria (under ISO 9241)
  – Product comparison

Human Virtual Machine
- How might the user perform?
  – Make assumptions explicit
  – Contrast views

Evaluation of Concepts
- Which design could lead to better performance?
  – Compare concepts using models prior to building a prototype
  – Use the performance of an existing product as a benchmark

Reliability of Models
- Agreement of predictions with observations
- Agreement of predictions by different analysts
- Agreement of model with theory

Comparison with Theory
- Approximation of human information processing
- Assumes linear, error-free performance
- Assumes strict following of the 'correct' procedure
- Assumes there is only one correct procedure
- Assumes actions can be timed

KLM Validity
- Predicted values lie within 20% of observed values

Comparison of KLM predictions with times from user trials (chart plots total time in seconds against trial number):
- CUI: predicted P = 15.84 s; observed mean = 15.37 s; error = 2.9%
- GUI: predicted P = 11.05 s; observed mean = 8.64 s; error = 22%
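The error figures can be reproduced as the percentage difference between the predicted time P and the mean observed time; taking the predicted value as the denominator is my assumption, and it matches the slide's figures to within rounding:

```python
# Reproducing the slide's error calculation: percentage difference between
# the KLM-predicted time P and the mean observed time (denominator = P,
# an assumption; the slide's 2.9% / 22% agree to within rounding).

def pct_error(predicted, observed_mean):
    return abs(predicted - observed_mean) / predicted * 100

cui = pct_error(15.84, 15.37)   # ~3%
gui = pct_error(11.05, 8.64)    # ~22%
print(round(cui, 1), round(gui, 1))
```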

Inter / Intra-rater Reliability (Stanton and Young, 1992)
- Inter-rater: correlation between several analysts
- Intra-rater: correlation for the same analyst on several occasions; r = 0.916
- Validity: correlation with actual performance

How to compare single data points?
- Models typically produce a single prediction
- How can one value be compared against a set of data?
- How can a null hypothesis be proved?

Liao and Milgram (1991)
Decision bands (number line, left to right):
A−D−λ·sd · A−D · A−D+λ·sd · A · A+D−λ·sd · A+D · A+D+λ·sd
The derived value D is judged according to the band into which it falls.

Defining terms
- A = actual value, with observed standard deviation (sd)
- D = derived (model-predicted) value
- α = 5% (P < 0.05, to control Type I error)
- β = 20% (P < 0.2, to control Type II error)

Acceptance Criteria
Accept H0 if: A − D + λ·sd < D < A + D − λ·sd
Reject H0 if: D < A − D − λ·sd
Reject H0 if: D > A + D + λ·sd
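One way to operationalise these criteria is as a three-way decision. This sketch is a reconstruction under stated assumptions: the bands are built from an acceptable deviation `delta` around the actual mean, widened or narrowed by standard-normal multipliers for the α and β risks; it is not a verbatim transcription of Liao and Milgram (1991):

```python
# Sketch of a three-way accept / reject / inconclusive decision for
# comparing a single model prediction with observed data. The band
# construction (delta plus z-multipliers for alpha = 0.05, beta = 0.2)
# is an assumed reconstruction of the slide's criteria.

def judge(prediction, actual_mean, sd, delta, z_alpha=1.96, z_beta=0.84):
    lo_reject = actual_mean - delta - z_alpha * sd
    hi_reject = actual_mean + delta + z_alpha * sd
    lo_accept = actual_mean - delta + z_beta * sd
    hi_accept = actual_mean + delta - z_beta * sd
    if prediction < lo_reject or prediction > hi_reject:
        return "reject"          # prediction differs beyond the alpha risk
    if lo_accept < prediction < hi_accept:
        return "accept"          # prediction equivalent within the beta risk
    return "inconclusive"        # falls in the indeterminate band

print(judge(prediction=10.2, actual_mean=10.0, sd=0.5, delta=2.0))
```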

Analytical Prototyping
- Functional analysis
  – Define features and functions
  – Development of design concepts, e.g. sketches and storyboards
- Scenario-based analysis
  – How people pursue defined goals
  – State-based descriptions
- Structural analysis
  – Predictive evaluation
  – Testing to destruction

Analytical Prototyping
- Functional analysis
- Scenario-based analysis
- Structural analysis

Rewritable Routines
- Mental models
  – Imprecise, incomplete, inconsistent
- Partial representations of the product and of the procedure for achieving a subgoal
- Knowledge recruited in response to the system image

Simple Architecture (diagram): rewritable routines mediate between the current state, the goal state, and the set of possible states, selecting the relevant state and the next action to change the machine state.

Global Prototypical Routines
- Stereotyped stimulus-response compatibilities
- Generalisable product knowledge

State-specific Routines
- Interpretation of the system image
  – Feature evolution
- Expectation of procedural steps
- Situated / opportunistic planning

Describing Interaction
- State-space diagrams
- Indication of system image
- Indication of user action
- Prediction of performance

State-space Diagram
Example (diagram): numbered states, each recording the system image ("Waiting for: Raise lid", "Waiting for: Play Mode", "Waiting for: Enter", "Waiting for: Skip forward", "Waiting for: Skip back", "Waiting for: Play", "Waiting for: Stop", "Waiting for: Off"), the user task (e.g. press 'Play', time 200 ms), possible errors (e.g. to State 1), and the transitions between states.

Defining Parameters

Activity (time)         Error               P(novice)   P(expert)
Recall plan (1380 ms)   Wrong plan          –           –
Select (360 ms)         Select wrong item   –           –
Press (200 ms)          Fail to press       –           –
Read (180 ms)           Misinterpret        –           –

Developing Models
State-transition model (diagram). Branch probabilities: P = 0.997 / 0.003, P = 0.74 / 0.26, P = 1 (two values unreadable). Action times: Start 0 ms; Recall plan 1380 ms; Wrong plan 1380 ms; Press Playmode 200 ms; Press Play 200 ms; Cycle through menu 800 ms; Switch off 300 ms; Press Enter 0 ms; Press Other Key 200 ms; Press Playmode 200 ms; Press Play 0 ms.
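A model of this kind yields a performance prediction as a probability-weighted sum over paths through the network. In this sketch the pairing of the slide's probabilities and times into two paths is an assumption made for illustration; the times (ms) are taken from the slide:

```python
# Sketch of turning a state-transition model into a performance prediction:
# expected task time = sum over paths of P(path) * time(path). The pairing
# of probabilities with branches is an illustrative assumption.

paths = [
    # (probability, [step times in ms])
    (0.997, [1380, 200, 200]),        # correct plan: recall, Playmode, Play
    (0.003, [1380, 1380, 200, 200]),  # wrong plan: recall, re-plan, recover
]

expected_ms = sum(p * sum(times) for p, times in paths)
print(round(expected_ms, 1))
```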

Results

What is the point?
- Are these models useful to designers?
- Are these models useful to theorists?

Task Models - problems
- Task models take time to develop
  – They may not have high inter-rater reliability
  – They cannot deal easily with parallel tasks
  – They ignore social factors

Task Models - benefits
- Models are abstractions – you always leave something out
- The process of creating a task model might outweigh the problems
- Task models highlight task sequences and can be used to define metrics

Task Models for Theorists
- Task models are engineering approximations
  – Do they actually describe how human information processing works?
  – Do they need to?
  – Do they describe cognitive operations, or just actions?

Some Background Reading
- Dix, A. et al. (1998) Human-Computer Interaction (chapters 6 and 7). London: Prentice Hall
- Anderson, J.R. (1983) The Architecture of Cognition. Cambridge, MA: Harvard University Press
- Card, S.K. et al. (1983) The Psychology of Human-Computer Interaction. Hillsdale, NJ: LEA
- Carroll, J. (2003) HCI Models, Theories and Frameworks: Toward a Multidisciplinary Science (chapters 1, 3, 4, 5). San Francisco, CA: Morgan Kaufmann