1 Explanation and Trust for Adaptive Agents Alyssa Glass glass@ksl.stanford.edu Joint work with Deborah McGuinness (Stanford), Michael Wolverton (SRI), and Paulo Pinheiro da Silva (UTEP)

2 Outline
- Explanation in Adaptive Agents
- Explanation & Trust: The User Perspective
- Explaining Learned Information
- Discussion & Future Work

3 Why do we need explanations?
- Systems are getting more complex
  - Hybrid and distributed processing
  - Multiple learning components
  - Multiple heterogeneous, distributed information sources
  - Highly variable reliability of information sources
  - Less transparency of system computation and reasoning
- Systems are taking more autonomous control
  - Guide/assist user actions
  - Perform autonomous actions on behalf of user
- "reason, learn from experience, be told what to do, explain what they are doing, reflect on their experience, and respond robustly to surprise" *
* DARPA PAL program: http://www.darpa.mil/ipto/programs/pal/

4 One Complex System: CALO
- Cognitive Assistant that Learns and Organizes
- Personal office assistant, tasked with:
  - Noticing things in the cyber and physical environments
  - Aggregating what it notices, thinks, and does
  - Executing, adding/deleting, suspending/resuming tasks
  - Planning to achieve abstract objectives
  - Anticipating things it may be called upon to do or respond to
  - Interacting with the user
  - Adapting its behavior in response to past experience, user guidance
- Contributed to by 22 different organizations

5 Working with a Cognitive Assistant
- CALO users need to:
  - Understand system behavior and responses
  - Trust system reasoning and actions
- To believe and act on recommendations from CALO, users need ways of exploring how and why the system acted, responded, recommended, and reasoned the way it did.
- Additional wrinkle: CALO knowledge, behavior, and assumptions are constantly changing through several forms of machine learning.
A unified framework for explaining behavior and reasoning is essential for users to trust and adopt cognitive assistants.

6 Thank you!

7 The Integrated Cognitive Explanation Environment (ICEE)
- Unified framework for explaining logical and task reasoning.
- Applicable to multiple task execution systems.
- Leverage existing InferenceWeb work for generating formal justifications.
- Underlying task reasoning useful beyond explanation.
- Provide sample implementation of end-to-end system.

8 Getting an Explanation
Initial request and answer strategy:
- User: Why are you doing [task]?
- CALO: I am trying to do [parent task], and [task] is one subgoal in the process.
Follow-up questions for mixed-initiative dialogue:
- Why are you doing [parent task]?
- Why haven’t you completed [task] yet?
- Why is [task] a subgoal of [parent task]?
- When will you finish [task]?
- What sources did you use to do [task]?

9 Explanation Example
Sample question type: task motivation
- Why are you doing [task]?
Strategy: reveal task hierarchy
- I am trying to do [parent task], and [task] is one subgoal in the process.
Alternate strategies:
- Provide task abstraction
- Expose preconditions
- Expose termination conditions
- Reveal meta-information about task dependencies
- Explain provenance related to task preconditions or other knowledge
Possible follow-up suggestions:
- Request additional detail
- Request clarification of the given explanation
- Request an alternate strategy to the original query
McGuinness, D.L., Glass, A., Wolverton, M., and Pinheiro da Silva, P. A Categorization of Explanation Questions for Task Processing Systems. AAAI Workshop on Explanation-Aware Computing (ExaCt-2007), Vancouver, Canada, 2007.
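
To make the question/strategy pairing above concrete, here is a minimal Python sketch of dispatching a "task motivation" question to the "reveal task hierarchy" strategy, with "expose preconditions" as an alternate; every name in it (Task, STRATEGIES, explain) is hypothetical and not taken from the ICEE implementation.

```python
# Minimal sketch of dispatching an explanation question to a strategy, in the
# spirit of the question categorization above. All names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional


@dataclass
class Task:
    name: str
    parent: Optional["Task"] = None          # task hierarchy link
    preconditions: List[str] = field(default_factory=list)


def reveal_task_hierarchy(task: Task) -> str:
    """Default strategy for 'Why are you doing [task]?': expose the parent goal."""
    if task.parent is None:
        return f"{task.name} is a top-level goal I was asked to achieve."
    return (f"I am trying to do {task.parent.name}, and {task.name} "
            f"is one subgoal in that process.")


def expose_preconditions(task: Task) -> str:
    """Alternate strategy: show which conditions the task is waiting on."""
    if not task.preconditions:
        return f"{task.name} has no outstanding preconditions."
    return f"{task.name} is waiting on: " + ", ".join(task.preconditions)


# Question type -> ordered list of candidate strategies; a follow-up request for
# "an alternate strategy" simply moves to the next entry in the list.
STRATEGIES: Dict[str, List[Callable[[Task], str]]] = {
    "task-motivation": [reveal_task_hierarchy, expose_preconditions],
}


def explain(question_type: str, task: Task, attempt: int = 0) -> str:
    candidates = STRATEGIES.get(question_type, [])
    if attempt >= len(candidates):
        return "I have no further explanation strategies for that question."
    return candidates[attempt](task)


if __name__ == "__main__":
    schedule = Task("schedule the quarterly review")
    book_room = Task("book a conference room", parent=schedule,
                     preconditions=["room calendar is reachable"])
    print(explain("task-motivation", book_room))             # primary strategy
    print(explain("task-motivation", book_room, attempt=1))  # alternate strategy
```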

10 Sample Interface Linked to ICEE

11 Outline
- Explanation in Cognitive Agents
- Explanation & Trust: The User Perspective
- Explaining Learned Information
- Discussion & Future Work

12 Study Procedure
- 14 participants, in 2 groups
  - 12 men, 2 women
  - Wide range of ages, education, previous CALO experience
- Assigned tasks to accomplish with CALO (many scripted)
- Told about trust study in advance
- Structured interview format
- Identified 8 themes, in 3 major categories

13 Usability Theme 1: Basic usability is important even in prototype-level systems. “I can’t tell you how much I would love to have [the system]. But I also can’t tell you how much I can’t stand it.”

14 Usability Theme 2: Learning algorithms can give the impression that the user is being ignored. “You specify something, and [the system] comes up with something completely different. And you’re like, it’s ignoring what I want!”

15 Explanations Theme 3: Users are consistent in wanting to ask specific questions, particularly when they are surprised by responses or failures.
Commonly requested questions:
- Documentation questions: How do I do this? Why can’t I do this other thing? What do you mean by this?
- Explanation questions: What can I do now? What is happening now and why? When will you finish?
“If there had been an option to ask a question, I would have loved to ask a question.”
“I asked [‘Why?’] all the time, but I wasn’t getting answers!”

16 Explanations Theme 4: The granularity of feedback is important. “I don’t just want an idiot light.”

17 Trust Theme 5: Users don’t trust opaque systems; they want transparency. “The ability to check up on the system, ask it questions, get transparency to verify what it is doing, is the number one thing that would make me want to use it.”

18 Trust Theme 6: Access to information provenance can improve trust in both the information and the automated reasoning. “[The system] needs a better way to have a meta-conversation.”

19 Trust Theme 7: Like in politics and the economy, gaining user trust relies on properly managing expectations. “I was paralyzed with fear about what it would understand and what it would not.”

20 Trust Theme 8: Most users have a “trust but verify” attitude that makes system autonomy difficult without explainable verification. “I trust [the system’s] accuracy, but not its judgment.”

21 How Explanation Can Help
- T1: Basic Usability
- T2: Being Ignored
- T3: Questions
- T4: Granularity of Feedback
- T5: Transparency
- T6: Provenance
- T7: Managing Expectations
- T8: Autonomy & Verification

24 Outline
- Explanation in Cognitive Agents
- Explanation & Trust: The User Perspective
- Explaining Learned Information
- Discussion & Future Work

25 The Use-Ask-Understand-Update Cycle
[Cycle diagram: Use → Ask → Understand → Update]

26 Learning by Instruction
- Relatively straightforward to explain
  - Store the instruction and the resulting modification
  - Strategies present the instruction and related meta-information
- Demonstrated in CALO with the Tailor task learning system
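
A minimal sketch of the "store the instruction and the resulting modification" idea, assuming a simple record/log structure; the class and field names are illustrative and do not come from Tailor or CALO.

```python
# Sketch: record each user instruction with meta-information so that the
# resulting procedure change can later be explained by replaying the record.
from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass
class InstructionRecord:
    instruction_text: str      # what the user said or typed
    modified_procedure: str    # which procedure was changed as a result
    modification_summary: str  # what the change was
    timestamp: datetime        # meta-information for the explanation
    source: str                # who gave the instruction


class InstructionLog:
    def __init__(self) -> None:
        self._records: List[InstructionRecord] = []

    def record(self, rec: InstructionRecord) -> None:
        self._records.append(rec)

    def explain(self, procedure: str) -> str:
        """Explain a procedure's current behavior by presenting the
        instructions that shaped it, with their meta-information."""
        relevant = [r for r in self._records if r.modified_procedure == procedure]
        if not relevant:
            return f"No user instructions have modified {procedure}."
        lines = [f"{procedure} behaves this way because of {len(relevant)} instruction(s):"]
        for r in relevant:
            lines.append(f"  - On {r.timestamp:%Y-%m-%d}, {r.source} said "
                         f"'{r.instruction_text}'; I {r.modification_summary}.")
        return "\n".join(lines)


if __name__ == "__main__":
    log = InstructionLog()
    log.record(InstructionRecord(
        instruction_text="Always cc my manager on travel requests",
        modified_procedure="submit-travel-request",
        modification_summary="added a cc step to the email action",
        timestamp=datetime(2007, 5, 14),
        source="the user",
    ))
    print(log.explain("submit-travel-request"))
```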

27 Learning by Demonstration
- Generalizes the user’s demonstration to learn a procedure
- One data point → the generalization will sometimes be wrong; specifically, it will occasionally over-generalize:
  - Generalize the wrong variables, or too many variables
  - Produce too general a procedure because of a coarse-grained type hierarchy
- Explain the relevant aspects of the generalization process
  - To help the user identify and correct over-generalizations
  - To help the user understand and trust the learned procedures
- Working with the LAPDOG task learning system in CALO
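
A minimal sketch of how a single demonstration plus a coarse-grained type hierarchy can over-generalize, and of the kind of explanation that would surface the generalization step; the toy hierarchy and function names are illustrative, not LAPDOG internals.

```python
# Sketch: why a coarse-grained type hierarchy can cause over-generalization when
# learning from one demonstration, and what an explanation of that step exposes.
# The tiny hierarchy and procedure names are invented for illustration.
TYPE_PARENT = {
    "quarterly-report.doc": "Document",
    "budget.xls": "Document",       # coarse hierarchy: no narrower "Spreadsheet" type
    "Document": "Thing",
}


def generalize(demo_value: str) -> str:
    """Replace the concrete value from the single demonstration with its type."""
    return TYPE_PARENT.get(demo_value, "Thing")


def explain_generalization(demo_value: str) -> str:
    general_type = generalize(demo_value)
    return (f"You demonstrated the step on '{demo_value}'. Because I only saw one "
            f"example, I generalized that argument to any {general_type}. "
            f"If that is too broad, tell me a narrower type to use.")


if __name__ == "__main__":
    # The user demonstrated a step on 'quarterly-report.doc'; with only a coarse
    # 'Document' type available, the learned procedure will also fire on
    # 'budget.xls' -- a possible over-generalization the explanation surfaces.
    print(explain_generalization("quarterly-report.doc"))
```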

28 Support-Vector Machines
- Augment SVM to gather additional meta-information about the SVM itself:
  - Support vectors identified by SVM
  - Support vectors nearest to the query point
  - Margin to the query point
  - Average margin over all data points
  - Non-support vectors nearest to the query point
  - Kernel transformation used, if any
- Represent SVM learning and meta-information as justification in PML, using added SVM rules
- Design abstraction strategies for presenting justification to user as a similarity-based explanation
- Demonstrated in CALO with PLIANT preference learning system
  - PLIANT uses user-elicited preferences and past choices to learn user scheduling preferences
  - Inconsistent user preferences, over-constrained schedules, and necessity of exploring the preference space result in user confusion about why a schedule is being presented.
  - Lack of user understanding of PLIANT’s updates creates confusion, mistrust, and the appearance that preferences are being ignored.
  - Provide justifications of schedule suggestions, without requiring user to understand SVM learning.
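
As a rough illustration of gathering the meta-information listed above from a trained SVM, here is a sketch using scikit-learn as a stand-in learner; the PLIANT integration and the PML justification encoding are not shown, and the toy data is invented.

```python
# Sketch: collect the SVM meta-information listed on the slide for one query
# point, using scikit-learn as a stand-in (not PLIANT's actual learner).
import numpy as np
from sklearn.svm import SVC

# Toy, well-separated training data standing in for learned scheduling preferences.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [0.9, 1.0], [1.0, 0.9], [1.1, 1.2]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear")
clf.fit(X, y)

query = np.array([[0.4, 0.5]])

# Meta-information about the learned model and this query:
support_vectors = clf.support_vectors_                    # support vectors identified by the SVM
margin_to_query = clf.decision_function(query)[0]         # signed margin to the query point
avg_margin = np.mean(np.abs(clf.decision_function(X)))    # average margin over all data points

# Support vectors nearest to the query point (material for a similarity-based explanation).
sv_dists = np.linalg.norm(support_vectors - query, axis=1)
nearest_sv = support_vectors[np.argsort(sv_dists)[:2]]

# Non-support vectors nearest to the query point.
non_sv_mask = np.ones(len(X), dtype=bool)
non_sv_mask[clf.support_] = False
non_sv = X[non_sv_mask]
nearest_non_sv = non_sv[np.argsort(np.linalg.norm(non_sv - query, axis=1))[:2]]

print("kernel transformation used:", clf.kernel)
print("margin to query point:", margin_to_query)
print("average margin over all points:", avg_margin)
print("support vectors nearest the query:\n", nearest_sv)
print("non-support vectors nearest the query:\n", nearest_non_sv)
```

In the framework described above, values like these would be encoded as a PML justification and then abstracted into a similarity-based explanation for the user; here they are simply printed.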

29 Future Work
- Using conflicts to drive the learn-explain cycle
- Using explanations to identify high-reward learning opportunities
- Expand prototype links to CALO learning systems
- Support more advanced dialogues and interfaces
- User study using ICEE explanations

30 Resources
- Explanation questions & strategies:
  McGuinness, D.L., Glass, A., Wolverton, M., and Pinheiro da Silva, P. A Categorization of Explanation Questions for Task Processing Systems. AAAI 2007 Workshop on Explanation-Aware Computing (ExaCt-2007), Vancouver, Canada, 2007.
- CALO trust study:
  Glass, A., McGuinness, D.L., and Wolverton, M. Establishing Trust in Adaptive Agents: A Study of Explanation and Trust in Agents that Learn. Technical Report KSL-07-04, Knowledge Systems, Artificial Intelligence Laboratory, Stanford University, 2007 (to appear).
- Explanation interfaces:
  McGuinness, D.L., Ding, L., Glass, A., Chang, C., Zeng, H., and Furtado, V. Explanation Interfaces for the Semantic Web: Issues and Models. 3rd International Semantic Web User Interaction Workshop (SWUI’06), Athens, Georgia, 2006.
- Overview of ICEE:
  McGuinness, D.L., Glass, A., Wolverton, M., and Pinheiro da Silva, P. Explaining Task Processing in Cognitive Assistants that Learn. Proceedings of the 20th International FLAIRS Conference (FLAIRS-20), Key West, Florida, 2007.
- Video demonstration of ICEE:
  http://iw.stanford.edu/2006/10/ICEE.640.mov

31 Thank you! (really)

