
1 Component-specific usability testing Dr Willem-Paul Brinkman Lecturer Department of Information Systems and Computing Brunel University (willem.brinkman@brunel.ac.uk)

2 Topics: Introduction. Whether and how the usability of components can be tested empirically: testing different versions of a component; testing different components. Whether and how the usability of components can be affected by other components: consistency; memory load.

3 Introduction: Component-Based Software Engineering and Empirical Usability Testing

4 Layered communication

5 Layered Protocol Theory (Taylor, 1988). Figure: a calculator decomposed into layered components. The user controls the equation (e.g. keying in "15 + 23 =") through an Editor component and, through it, controls the result (38) produced by a Processor component; each layer forms its own control loop between user and calculator.
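To make the layered-protocol idea concrete, here is a minimal Python sketch (not part of the lecture; the class and method names are invented for illustration) of the calculator example: an Editor layer turns keystrokes into an equation and a Processor layer evaluates it, each layer exchanging messages only with the layer directly above or below it.

```python
# Minimal sketch of the two-layer calculator: Editor (lower level) builds the
# equation from keystrokes, Processor (higher level) evaluates it.

class Processor:
    """Higher-level component: receives a complete equation, returns the result."""
    def evaluate(self, equation):
        left, right = equation.split("+")
        return int(left) + int(right)

class Editor:
    """Lower-level component: receives keystrokes, maintains the equation."""
    def __init__(self, processor):
        self.processor = processor
        self.equation = ""

    def press(self, key):
        if key == "=":
            result = self.processor.evaluate(self.equation)  # message sent upward
            self.equation = str(result)
            return result
        self.equation += key          # control loop: the user sees the equation grow
        return self.equation

editor = Editor(Processor())
for key in "15+23=":
    feedback = editor.press(key)      # "1", "15", "15+", ..., 38
print(feedback)                       # 38
```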

6 Usability Testing. Aim: to evaluate the usability of a component based on the message exchange between the user and that specific component.

7 Two paradigms: the multiple-versions testing paradigm and the single-version testing paradigm (figure: component life cycle of create, manage, support, re-use).

8 Test Procedure: the normal procedures of a usability test; a user task that requires interaction with the components under investigation; users must complete the task successfully.

9 Component-specific measures. Objective performance: the number of messages a component received, reflecting the effort users put into controlling it (figure: a component with its control process and control loop). Also measured: perceived ease of use and perceived satisfaction.

10 Component-specific measures: increasing the statistical power (objective performance). An overall measure such as keystrokes can be written as y_m = x_m + e_m, and a component-specific measure such as messages received as y_k = x_k + e_k. Each error term splits into a component-related part and a rest part: e_k = e_k,component + e_k,rest and e_m = e_m,component + e_m,rest. Assumption: e_k,rest < e_m,rest, so the component-specific measure carries less unrelated variance and therefore gives more statistical power.
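The power argument can be illustrated with a small simulation (my own sketch, not from the study; the effect and noise sizes are made up): both measures contain the same component-related difference between two versions, but the overall measure carries extra error from the rest of the interaction, so it detects the difference less often.

```python
# Minimal simulation, under assumed effect/noise sizes, of why a
# component-specific measure (small unrelated error) detects a usability
# difference more often than an overall measure (large unrelated error).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_difference = 1.0            # usability effect of the component change
n_users, n_runs = 20, 2000
hits_specific = hits_overall = 0

for _ in range(n_runs):
    effect = true_difference + rng.normal(0, 1.0, n_users)   # component-related part
    noise_rest = rng.normal(0, 3.0, n_users)                  # error from other components
    specific = effect                                         # e.g. messages to this component
    overall = effect + noise_rest                             # e.g. total keystrokes
    hits_specific += stats.ttest_1samp(specific, 0).pvalue < 0.05
    hits_overall += stats.ttest_1samp(overall, 0).pvalue < 0.05

print(f"power, component-specific measure: {hits_specific / n_runs:.2f}")
print(f"power, overall measure:            {hits_overall / n_runs:.2f}")
```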

11 Component-specific measures: perceived ease of use and perceived satisfaction. Component-specific questionnaires increase the statistical power because they help users remember their control experience with a particular interaction component.

12 Component-specific measures: perceived ease of use. Perceived Usefulness and Ease-of-Use questionnaire (Davis, 1989), 6 questions rated from Unlikely to Likely, e.g. "Learning to operate [name] would be easy for me" and "I would find it easy to get [name] to do what I want it to do."

13 Component-specific measures: perceived satisfaction. Post-Study System Usability Questionnaire (Lewis, 1995), items rated from Strongly disagree to Strongly agree, e.g. "The interface of [name] was pleasant" and "I like using the interface of [name]."

14 Experimental validation: 80 users, 8 mobile telephones. Three components were manipulated according to Cognitive Complexity Theory (Kieras & Polson, 1985): 1. Function Selector, 2. Keypad, 3. Short Text Messages.

15 Architecture. Figure: the mobile telephone built from a Keypad, a Function Selector, and a Send Text Message component.

16 Experimental validation, Function Selector: a broad/shallow version versus a narrow/deep version.

17 Experimental validation, Keypad: the Repeated-Key method versus the Modified-Model-Position method (illustrated with entering the letters "L" and "J").

18 Experimental validation, Send Text Message: a simple version versus a complex version.

19 Results: average probability that a measure finds a significant (α = 0.05) effect for the usability difference between the two versions of the Function Selector, Send Text Message, or Keypad component.

20 Results: Wilcoxon matched-pairs signed-ranks tests between the number of correct classifications made by discriminant analyses on overall measures and on component-specific measures.
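For reference, this is the kind of test the slide refers to, sketched with SciPy on made-up paired classification counts (one pair per participant or fold); the study's actual data are not reproduced here.

```python
# Hedged sketch: Wilcoxon matched-pairs signed-ranks test on hypothetical
# numbers of correct classifications (overall vs component-specific measures).
from scipy.stats import wilcoxon

correct_overall  = [12, 10, 14, 11, 13, 12, 10, 15]   # hypothetical counts
correct_specific = [14, 13, 15, 13, 14, 15, 12, 16]   # hypothetical counts

stat, p = wilcoxon(correct_overall, correct_specific)
print(f"W = {stat}, p = {p:.3f}")
```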

21 Topics: Introduction. Whether and how the usability of components can be tested empirically: testing different versions of a component; testing different components. Whether and how the usability of components can be affected by other components: consistency; memory load.

22 Two paradigms: the multiple-versions testing paradigm and the single-version testing paradigm (figure: component life cycle of create, manage, support, re-use).

23 Testing Different Components. Component-specific objective performance measure: 1. messages received multiplied by a weight factor (a common currency); 2. comparison with an ideal user (a common point of reference). The usability of individual components within a single device can then be compared and the components prioritized for potential improvements.

24 Assigning weight factors to represent the user's effort, in the case of an ideal user. Figure (drawing-editor example): the messages Click {1}, Click {1}, Call <> {2} and Set <fill colour red, no border> {7} exchanged with the Right Mouse Button Menu and Properties components; the numbers in braces are the weight factors.

25 Total effort value. Total effort = Σ MR_i · W_i, where MR_i · W_i is a received message multiplied by its weight factor. Figure: worked example using the Click {1}, Click {1} and Call <> {2} messages of the Right Mouse Button Menu and Properties components from the previous slide.
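A minimal Python sketch of the total-effort sum (the message names and weights below are illustrative, not the slide's exact figures): every message a component received is multiplied by its weight factor and the products are summed.

```python
# Sketch of "Total effort = sum of MR_i * W_i" for one component.
def total_effort(messages_received):
    """messages_received: list of (message, weight) pairs -> sum of MR_i * W_i."""
    return sum(weight for _msg, weight in messages_received)

# Hypothetical ideal-user log for a Properties-style component:
properties_log = [("Call <properties dialog>", 2),
                  ("Set <fill colour red, no border>", 7)]
print(total_effort(properties_log))   # 9
```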

26 Assigning weight factors in the case of a real user: correction for the inefficiency of higher- and lower-level components. Figure: the layered drawing-editor components Visual Drawing Objects, Properties and Right Mouse Button Menu.

27 Assigning weight factors in the case of a real user: assign weight factors as if the lower-level components operate optimally. Inefficiency of lower-level components: they need more messages to pass a message upwards than ideally required. (Figure: Visual Drawing Objects, Properties, Right Mouse Button Menu.)

28 Assigning weight factors in the case of a real user. Inefficiency of higher-level components: more messages are requested than ideally required. Correction: UE = Σ MR_i · W_i / (#MSU_real / #MSU_ideal), where UE is the user effort, MR_i · W_i a received message multiplied by its weight factor, #MSU_real the number of messages sent upward by the real user, and #MSU_ideal the number of messages sent upward by the ideal user. (Figure: Visual Drawing Objects, Properties, Right Mouse Button Menu.)
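Assuming the formula reads as reconstructed above, here is a short sketch of the correction (the message log and counts are hypothetical): the weighted messages received are scaled down by the ratio of ideal to real upward messages, so effort caused by inefficient higher-level behaviour is not charged to this component.

```python
# Sketch of the user-effort correction for higher-level inefficiency.
def user_effort(messages_received, msu_real, msu_ideal):
    """UE = (sum of MR_i * W_i) / (#MSU_real / #MSU_ideal)."""
    weighted_sum = sum(weight for _msg, weight in messages_received)
    return weighted_sum / (msu_real / msu_ideal)

# Hypothetical session: the component was driven twice as often as ideally needed.
log = [("Call <dialog>", 2), ("Set <colour>", 7),
       ("Call <dialog>", 2), ("Set <colour>", 7)]
print(user_effort(log, msu_real=2, msu_ideal=1))   # 9.0
```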

29 Ideal user versus real user. Total effort: the total effort an ideal user would make. User effort: the total effort the real user made. Extra user effort = user effort - total effort: the extra effort the real user made. Calculate this for each component and prioritize.
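The prioritisation step then reduces to a subtraction and a sort; the numbers below are invented for illustration, not results from the study.

```python
# Sketch: extra user effort per component = real (corrected) effort - ideal effort;
# components with the largest extra effort are the first candidates for redesign.
ideal_effort = {"Function Selector": 6.0, "Send Text Message": 14.0}   # hypothetical
real_effort  = {"Function Selector": 9.5, "Send Text Message": 15.0}   # hypothetical

extra = {name: real_effort[name] - ideal_effort[name] for name in ideal_effort}
for name, value in sorted(extra.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: extra user effort = {value:.1f}")
# Function Selector: extra user effort = 3.5  <- highest priority
```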

30 Experimental validation: 40 users, 40 mobile telephones. Two components were manipulated (the Keypad used only the Repeated-Key method): 1. Function Selector, 2. Short Text Messages.

31 Results. Figure: extra user effort per mobile phone.

32 Results: partial correlation between extra user effort regarding the two components and other usability measures (*p < .05, **p < .01).

Measure                                     | Function Selector | Send Text Message
Objective: extra keystrokes                 |  0.64**           |  0.44**
Objective: task duration                    |  0.63**           |  0.39**
Perceived: overall ease of use              | -0.43**           | -0.26*
Perceived: overall satisfaction             | -0.25*            | -0.22
Perceived: component-specific ease of use   | -0.55**           | -0.34**
Perceived: component-specific satisfaction  | -0.41**           | -0.37**

33 Comparison with other evaluation methods (overall measures, sequential data analysis, GOMS, thinking-aloud, cognitive walkthrough and heuristic evaluation). Overall measures, e.g. keystrokes, task duration, overall perceived usability: relatively easy to obtain, but unsuitable for evaluating components.

34 Comparison with other evaluation methods: sequential data analysis. Based only on lower-level events; requires pre-processing (selection, abstraction and re-coding); the relation between a higher-level component and a compound message is less direct; the components' status is not recorded.

35 Comparison with other evaluation methods: GOMS. Helps to understand the problem, but looks only at error-free task execution and considers the system only at the lowest-level layer.

36 Comparison with other evaluation methods: thinking-aloud, cognitive walkthrough and heuristic evaluation. Quicker, but subject to the evaluator effect (reliability).

37 Topics: Introduction. Whether and how the usability of components can be tested empirically: testing different versions of a component; testing different components. Whether and how the usability of components can be affected by other components: consistency; memory load.

38 Consistency problems

39 Consistency: activation of the wrong mental model.

40 Consistency experiments: 48 users, 3 applications: 1. four room thermostats; 2. four combinations of 2 web-enabled TV sets × 2 web page layouts; 3. four applications (2 timers × 2 application domains).

41 Within one layer

42 Within one layer: experimental design. Two settings (day-time temperature and night-time temperature), each controlled with either a moving-pointer or a moving-scale dial, giving the 2 × 2 combinations of the four thermostats.

43 Within one layer - Results

44 Between layers: web-enabled TV set, browser versus web pages.

45 Between layers - page layout: list layout versus matrix layout.

46 Between layers - Browser

47 Between layers: experimental design. Web page version (list or matrix layout) crossed with browser version (linear or plane), a 2 × 2 design.

48 Between layers - Results

49 Application domain

50 Application domain: experimental design. Application (alarm radio or microwave) crossed with timer (mechanical alarm or hot dish), a 2 × 2 design.

51 Application domain - Results

52 Topics  Introduction  Whether and how the usability of components can be tested empirically. -Testing different versions of component -Testing different components  Whether and how the usability of components can be affected by other components. -Consistency -Memory load

53 Mental effort problems

54 Mental effort: the calculator example. Figure: the user controls the equation through the Editor component and the result through the Processor component.

55 Memory load: experimental design. Equation difficulty (easy or difficult) crossed with Editor version (large or small display), a 2 × 2 design.

56 Mental Effort - Heart-rate variability

57 Mental Effort - Control of the higher-level layer

58 Conclusions. Whether and how the usability of components can be tested empirically: testing different versions of a component (more statistical power); testing different components (components can be prioritized for potential improvements). Whether and how the usability of components can be affected by other components: consistency (components on the same or on higher-level layers can activate the wrong mental model); memory load (lower-level interaction affects the higher-level interaction strategy).

59 Questions Thank you for your attention

