Issues Related to Design of Automatic Systems - from a Human Factors Perspective - Ann Britt Skjerve Institute for Energy Technology Human-Centered Automation
Content Part I: Overview Background and definition The operators’ role Key Research Issues 1. Task allocation 2. Human-system interface design 3. Effects on the individual and the organisation Part II: Two examples of interface-design studies Human-Centred Automation Experiments Extended Teamwork Study Domain: High-risk systems.
Background 1952: The term automation was applied in an article in Scientific American Mechanization of human labour –Overcome human capacity problems –Automation of “physical” tasks –Routine tasks Production increase Complex and safety critical tasks –Automation of “control” tasks (Crossman, 1974) –Automation of “management” tasks (Billings, 1991)
Definition(s) Automation (Oxford English Dictionary): Automatic control of the manufacture of a product through a number of successive stages; the application of automatic control to any branch of industry or science; by extension, the use of electronic or mechanical devices to replace human labour.
Why are Humans still in High-Risk Systems? Not all tasks can be automated... –Degree of proceduralization –Automation may fail –Technology Cost effectiveness Legal requirements Public opinion
Classification Systems: Ten Levels of Automation – an example (Sheridan, 1980) 1. Human considers alternatives, makes and implements decision. 2. Computer offers a set of alternatives which human may ignore in making decision. 3. Computer offers a restricted set of alternatives, and human decides which to implement. 4. Computer offers a restricted set of alternatives and suggests one, but human still makes and implements final decision. 5. Computer offers a restricted set of alternatives and suggests one, which it will implement if the human approves. 6. Computer makes decision, but gives human option to veto before implementation. 7. Computer makes and implements decision, but must inform human after the fact. 8. Computer makes and implements decision, and informs human only if asked to. 9. Computer makes and implements decision, and informs human only if it feels this is warranted. 10. Computer makes and implements decision if it feels it should, and informs human only if it feels this is warranted. Degree of computer participation: LOW → HIGH
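Because Sheridan's levels form a strict ordering, they map naturally onto an ordinal type. A minimal sketch in Python, assuming invented shorthand names for the levels (the helper `human_retains_final_authority` is an illustrative reading of levels 1-6, not part of Sheridan's taxonomy):

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Sheridan's (1980) ten levels, ordered by degree of computer
    participation. Names are illustrative shorthand, not Sheridan's labels."""
    HUMAN_DECIDES_AND_ACTS = 1        # human considers, decides, implements
    COMPUTER_OFFERS_ALTERNATIVES = 2  # human may ignore them
    COMPUTER_RESTRICTS_ALTERNATIVES = 3
    COMPUTER_SUGGESTS_ONE = 4
    COMPUTER_EXECUTES_IF_APPROVED = 5
    HUMAN_CAN_VETO = 6
    COMPUTER_ACTS_THEN_INFORMS = 7
    COMPUTER_INFORMS_IF_ASKED = 8
    COMPUTER_INFORMS_IF_IT_SEES_FIT = 9
    COMPUTER_FULLY_AUTONOMOUS = 10

def human_retains_final_authority(level: AutomationLevel) -> bool:
    """Through level 6 the human can still block implementation
    (decision, approval, or veto); from level 7 the computer acts first."""
    return level <= AutomationLevel.HUMAN_CAN_VETO
```

Using `IntEnum` lets comparisons such as `level >= AutomationLevel.HUMAN_CAN_VETO` read directly as degree-of-participation comparisons, which is how the scale is used in classification work.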
Effects of Automation 1/2 Some positive effects –Increased production levels Automatic train control (ATC): trains run faster and with shorter distances between them Each new generation of commercial aircraft has improved on the safety record of its predecessors –Automation as a “key” element of competitiveness
Aircraft Generations Accident rate for 3 generations of aircraft (Airbus Industry Safety Department “Hangar Flying”, June 1997, as referred to in Pariès and Amalberti, 1999).
Effects of Automation 2/2 Some negative effects associated with automation use –Increased complexity for the human operator –Reduced safety margins –Operators are left to deal with automation malfunctions –Reduced possibility for practising operational skills
Key Research Issues - From a HF Perspective 1. Task allocation –How should tasks be allocated between humans and machines? 2. Design of the human-system interface –How should the human-system interface be designed to support the operators’ performance? 3. Effects on the individual and the organisation –How are the individual and the organisation affected by automation?
Task Allocation How should tasks be allocated between humans and machines? Three strategies for task allocation: The Left-Over Principle The Comparison Principle The Complementary Principle
The Left-Over Principle 1/2 Operators are the most unreliable element To the extent possible, operators should be eliminated from the production process Automate everything that can be automated The tasks that cannot be automated (i.e., fully proceduralised) are left over to the operators. Example: “To improve the reliability of NPPs, it is primarily effective to automate the hardware as much as possible and to eliminate to the maximum extent human intervention by recognition, judgement and response to information.” (Inoue et al., 1991, 449)
The Left-Over Principle 2/2 Critique –Tasks are left over to the operators without considering human capacity issues Vigilance Workload Cognitive requirements ”... the designer who tries to eliminate the operator still leaves the operator to do the tasks which the designer cannot think how to automate.” (Bainbridge, 1993)
The Comparison Principle 1/2 Human operators and automatic systems have different capabilities Allocate the tasks to the ’agent’ that is better suited to perform the task Fitts’ List (1951)
The Comparison Principle 2/2 Critique –Tasks are allocated without consideration for the operators’ overall task-performance process –The overall set of operator tasks may not correspond to human capacity Etc.
The Complementary Principle 1/2 Critique of the Left-Over and the Comparison Principles: Both consider how the individual task elements should be allocated, not how the human and the automatic system should perform the task together. Optimal task allocation is achieved by ensuring that the performance of the operators and the automatic system complement each other –How will the automatic system and the human operators most efficiently perform the operational task?
The Complementary Principle 2/2 Critique –Task performance is a dynamic process It can be difficult to foresee in advance how a task performance process will progress, and thus how humans and automation may most efficiently complement each other –The limits of technology vs. the apparent adaptability of humans.
Design of the human-system interface How should the human-system interface be designed to support the operators’ performance? Changed operator role: – From primarily involving operation to primarily involving supervision and deviation handling.
Human-System Interface Design Issues 1/2 Difficulties associated with human-automation interaction: –Monitoring load –Vigilance –Workload distribution –Silent automation –‘Automation surprises’ “After three decades of highly prolific research on human vigilance, we are still making the same seemingly contradictory statement: a human being is a poor monitor, but that is what he or she ought to be doing.” (Wickens, 1992)
Human-System Interface Design Issues 2/2 Representation of the system’s activity, current problems: –Physical and mental isolation (Norman, 1990): isolated from the moment-to-moment activity –Workload distribution: too high or too low workload –Increased complexity: understanding what happens in situations with deviations –”Out-of-the-loop” Technical design, current issues in terms of Human Factors: –Compensatory activity may hide deviations from the operators –Reduced time-span to handle deviations
Human-Centred Design (Rouse, 1991) Three central attributes: –It focuses on the roles of humans in complex systems –Design objectives are elaborated in terms of humans’ roles –Specific design issues follow from these objectives Three primary objectives: –To enhance human abilities. –To help overcome human limitations. –To foster user acceptance. Example of the approach: “...the purpose of a pilot is not to fly the airplane that takes people from A to B – instead, the purpose of the airplane is to support the pilot who is responsible for taking people from A to B.” (Rouse, 1991)
Human-Centred Automation 1/2 Human-Centred Automation (HCA): Definition: “Automation designed to work cooperatively with the human operators in the pursuit of stated objectives.” (Billings, 1991) Assumption: The human operator should always constitute the starting point in the design process, because the operator is ultimately responsible for the performance outcome.
Human-Centred Automation 2/2 The HCA design principles: The human operator must be in command: 1) To command effectively, the human operator must be involved. 2) To be involved, the human operator must be informed. 3) The human operator must be able to monitor the automated systems. 4) Automated systems must be predictable. 5) The automated systems must also be able to monitor the human operator. 6) Each element of the system must have knowledge of the others’ intent. (Billings, 1991, 1997)
HCA – Possible Implications, Aviation Humans must remain in command of flight and air traffic operations –Automation can assist by providing a range of management options. Human operators must remain involved –Automation can assist by providing better and more timely information. Human operators must be better informed –Automation can assist by providing explanations of its actions and intentions. Human operators must do a better job of anticipating problems –Automation can assist by monitoring trends and providing decision support. Human operators must understand the automation provided to them –Designers can assist by providing simpler, more intuitive automation. Human operators must manage all of their resources effectively –Properly designed and used, automation can be their most useful resource. (Billings, 1991, 1997)
The gap between user-centred intentions and technology-centred development Some causes – designers: Oversimplify the pressures and task demands from the users’ perspective Assume that people can and will call to mind all relevant knowledge Are overconfident that they have taken into account all meaningful circumstances and scenarios Assume that machines never err Make assumptions about how technology impacts human performance without checking for empirical support, or despite contrary evidence Define design decisions in terms of what it takes to get the technology to work Sacrifice user-oriented aspects first when tradeoffs arise Focus on building the system first, then try to integrate the results with users. (Sarter, Woods and Billings, 1997)
Effects on the individual and the organisation How are the individual and the organisation affected by automation? Changed operator role: –New designs lead to new ways of working…
Organizational Issues Changes in work content Changes in work practices –Changes in the lines of authority –Changes in the responsibility of staff members Changes related to status (-> self-esteem) Motivation Job satisfaction Safety Education and Training Possibility for intervening Willingness to intervene Will the system in practice fulfil the goals it was designed to fulfil?
TWO EXAMPLES Focusing on Interface Design Part II Interface Design –Starting point: Task allocation has been decided (see ISO model) –Question: How should the human-system interface be designed to support human-automation transaction? Two Research Programs –Human-Centred Automation –Extended Teamwork Control Centre Design and Modification Process (based on ISO Std. 11064-1, 2000).
IFE’s Human-Centred Automation (HCA) Research Program
Introduction to the HCA Program, cont. Motivation: To provide a better understanding of how operators’ performance is influenced by automation, in order to reduce the negative effects of automation. Main Issues: To develop theories on how automation may influence operator performance, based on experimental studies. To develop measures for studying human-automation interaction. Specific Goal: To develop HCA design support.
The HCA-2000/2001 Experiments Research question: How do operators handle two types of automation malfunctions when operating from human-machine interfaces which contain either explicit or implicit information about the activities of the automatic system? Automation, defined as: interlocks, limitations, protections, controllers, programs Independent variables: (1) Automation Malfunction Type (2) Automation-Information Presentation Type
Design of Human-System Interface Basic Representation Types Implicit: Representation of a device’s activity through its effect on something else Explicit: Direct representation of a device’s activity Basic Representation Forms (include) Text Graphic Sound Etc.
Automation-Information Presentation How may the activity of the automatic system be represented explicitly? –Explicit presentation of main activities –Graphic feedback –Verbal, semantically meaningful feedback –Intentional agent
Two Automation-Information Presentation Types Conventional Interface: main process components, main process flow, control formats. Experimental Interface: the same, plus main automatic devices, computerized logic diagrams, and verbal feedback (e.g., “Program ’A3’ is starting up”).
Study performed in HAMMLAB (NORS) Licensed operators from the Loviisa NPP Six crews of three operators (RO/TO/SS) Four scenarios - a basic scenario combined with two sets of automation malfunctions Two experimental manipulations Two breaks in each scenario Overview of the HCA-2001 Experiment
Experimental Design Main Characteristics: 2x2(x3) within-subject design Counterbalancing of the presentation order and the combination of the experimental conditions across crews Psychometric evaluation of response data before hypothesis testing [construct validity, inter-item reliability, criterion validity] ANOVA for statistical hypothesis testing; hypothesis tests performed at the crew-scenario level. Changes Introduced: Malfunctions reset after each scenario period (20 min). Basic scenario: one turbine synchronised. Inclusion of a shift supervisor.
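Counterbalancing presentation order across crews is typically done with a Latin square. A sketch in Python of one common scheme, a cyclic square over the four crossed conditions; the condition labels are invented for illustration and this is not the study's actual assignment procedure:

```python
from itertools import product

# Four crew-level conditions: 2 malfunction types x 2 interface types
# (labels invented for illustration).
conditions = list(product(("malfunction_A", "malfunction_B"),
                          ("conventional", "experimental")))

def latin_square(items):
    """Cyclic Latin square: every condition appears exactly once in each
    ordinal position across rows, balancing first-order position effects."""
    n = len(items)
    return [[items[(row + col) % n] for col in range(n)] for row in range(n)]

orders = latin_square(conditions)
# With six crews and only four distinct orders, two orders must be reused;
# full counterbalancing of order would need a multiple of four crews.
crew_orders = [orders[crew % len(orders)] for crew in range(6)]
```

A cyclic square balances each condition's position in the sequence, though not the immediate predecessor of each condition; balanced (Williams) squares handle first-order carry-over effects as well.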
The Effects of the Interface Manipulation: Workload
Interpretation The beneficial effects of the experimental AIP interface are significant (HCA-2000 and HCA-2001): detailed information about the automatic system’s activity, presented graphically and as verbal feedback. Complexity ≠ the number of items represented!
The Extended Teamwork 2004/2005 Exploratory Study - Focusing on Human-Automation Co-operation
Background: Upcoming Industry Needs New operational concepts are currently being debated in the domain of nuclear power plants: multi-unit operation by a single operator; remote unit operation by a single crew; no full-time on-duty operations staff, with occasional operations tasks performed by other functional units (e.g., engineering or maintenance); reduced staff, with an individual for multiple reactors and decentralized functional groups for maintenance and emergency; etc. Common implications: a substantial increase in the automation level, changed staff roles, staff reduction(?), new requirements associated with operation, new tools, and increased levels of autonomy and authority in the NPP process.
Research Question (Study timeline: field visits at the home plant – background for understanding the point of transition; pre-experimental control-room solution; home-plant training; exploratory study from first to last scenario – a period of increasing familiarization.) The purpose of the Extended Teamwork Research Program: to contribute knowledge on how new operational concepts may affect the quality of teamwork in an operational team. The purpose of the Extended Teamwork Study: to assess how familiarity with operation in a subset of a particular (see prev. slide) new operational environment may affect teamwork.
Teamwork - Theory Co-operation Theory, or Social-Interdependence Theory: …how people believe their goals to be related to the goals of other people is a useful way to understand the dynamics of interaction between humans and its consequences… Attributes of teamwork: share information openly; take one another’s perspective; communicate and influence each other; exchange resources; assist and support one another; handle conflicts efficiently
Types of Teamwork Types of teamwork considered: –Teamwork between humans (Co-operation Across Distances) –“Teamwork” between humans and automation (Human-Centred Automation) –Extended teamwork (teamwork-knowledge framework) Extended teamwork: a distinguishable set of, at a minimum, two human agents and one machine agent who interact dynamically, interdependently and adaptively toward a common goal.
Main Characteristics of the Study Preliminary Experimental Facilities: HAMMLAB and the VR-lab Participants: 6 crews of licensed NPP operators from Swedish NPPs; each crew: –Reactor operator (RO) –Shift supervisor (SS) –Field operator (FO) Scenarios: 12 scenarios, 40 minutes each, minor disturbances requiring co-operation; the presentation order of the scenarios was randomised. Experimental Team Composition (real role → experimental role, assumed location): –RO or SS → Control-Room Operator, remote control centre (HAMMLAB) –RO or SS → Site Co-ordinator, on-site (VR-lab) –FO → Technician, on-site (VR-lab)
Key Measures Interviews –Following training and following completion of the 12 scenarios –Expectations and lessons learned Human-Automation Co-operation Quality and Trust in Automation –Questionnaires; operators’ subjective judgements Teamwork quality –Process expert’s rating of teamwork Operators’ ability to detect critical events –Detection of predefined critical events
The Automatic Agents – interface elements: main program; part program; condition for execution; list of sequences (performed / currently performing / to be performed); selection of agent; latest voice message; action suggestion
Results: Familiarization (measures plotted: human-automation co-operation, collective efficacy, trust in automation, TeamBars expert rating, human-human co-operation)
Results: Human-Agent Co-operation Interviews CROs’ view on the Agents –Initially a rather negative view of the Agents: (1) usefulness and (2) co-operability. –After the 12 scenarios, a much more positive view: (1) necessary, (2) co-operative, but (3) context sensitivity should be increased. In general, the CROs felt lonely –Missed support, in particular in situations with deviations. Human-Agent Transactions The level of Agent use is scenario dependent. –Total Freeze Time demonstrated a significant correlation with the operators’ subjective judgement of human-automation co-operation quality. The higher the level of human-agent transaction, the better the operators were able to detect critical occurrences. A Control-Room Operator (CRO) at work.
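The reported link between Total Freeze Time and judged co-operation quality is a standard correlational analysis. A minimal Pearson-correlation sketch in plain Python; the per-crew numbers below are invented for illustration and are not the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-crew scores: total freeze time (s) vs. rated
# human-automation co-operation quality (1-5 scale).
freeze_time = [10, 25, 40, 55, 70, 90]
coop_quality = [2.1, 2.8, 3.4, 3.9, 4.5, 4.8]
r = pearson_r(freeze_time, coop_quality)  # close to +1 for these data
```

With only six crews, the significance of such a correlation is fragile, which is one reason the study pairs it with interviews and expert ratings rather than relying on it alone.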
Implications: Design of Automatic Agents When designing Automatic Agents, the following would be useful: Application of verbal feedback –A similar result was obtained in the “Human-Centred Automation” experiments. An “action suggestions” function –Particularly useful in situations that the operators do not encounter often. A “freeze” function –A must for the operators to remain in control. A “repeat message” function –A must when verbal feedback is applied. The operators’ need of the Agents seems to facilitate their acceptance of the Agents. The Human-Agent Interface. In addition to the HCA experiments’ finding: complexity is not determined by the number of items represented at the interface…
Summary of the Main Issues Three task-allocation principles –Left-Over –Comparison –Complementary Human-System Interface Design –Silent interfaces –Representation, feedback –Human-Centred Design –Human-Centred Automation Effects on the individual and the organisation –Deskilling –Organisational changes –These effects influence how, and the extent to which, a system will be used. Examples: –IFE’s Human-Centred Automation Program –IFE’s Extended Teamwork Program - Teamwork where automatic agents are team members?