Explaining Preference Learning: Alyssa Glass, CS229 Final Project, Computer Science Department, Stanford University

Presentation transcript:

Explaining Preference Learning
Alyssa Glass
CS229 Final Project
Computer Science Department, Stanford University

Motivation
Studies of users interacting with systems that learn preferences show that, when the system behaves incorrectly, users quickly lose patience and trust in the system. Even when the system is correct, users view such outcomes as somehow "magical," unable to understand why a particular suggestion is correct or whether the system is likely to be helpful in the future. We describe the augmentation of a preference learner to provide meaningful feedback to the user through explanations. This work extends the PLIANT (Preference Learning through Interactive Advisable Nonintrusive Training) SVM-based preference learner, part of the PTIME personalized scheduling assistant in the CALO project.

Providing Transparency into Preference Learning
• Augment PLIANT to gather additional meta-information about the SVM itself (see the sketch following the System Workflow section below):
  - Support vectors identified by the SVM
  - Support vectors nearest to the query point
  - Margin to the query point
  - Average margin over all data points
  - Non-support vectors nearest to the query point
  - Kernel transform used, if any
• Represent the SVM learning and meta-information as a justification in the Proof Markup Language (PML), adding SVM rules as needed.
• Design abstraction strategies for presenting the justification to the user as a similarity-based explanation.
(Work on the PML representation and abstraction strategies is ongoing; details will be in the final report.)

Acknowledgements
We thank Melinda Gervasio, Pauline Berry, Neil Yorke-Smith, and Bart Peintner for access to the PLIANT and PTIME systems, for the architecture diagram summarized below, and for helpful collaborations, partnerships, and feedback on this work. We also thank Deborah McGuinness, Michael Wolverton, and Paulo Pinheiro da Silva for the IW and PML systems, and for related discussions and previous work that helped lay the foundation for this effort. We thank Mark Gondek for access to the CALO CLP data, and Karen Myers for related discussions, support, and ideas.

Select References
• PLIANT: Yorke-Smith, N., Peintner, B., Gervasio, M., and Berry, P. M. Balancing the Needs of Personalization and Reasoning in a User-Centric Scheduling Assistant. Conference on Intelligent User Interfaces (IUI-07), 2007 (to appear).
• PTIME: Berry, P., Gervasio, M., Uribe, T., Pollack, M., and Moffitt, M. A Personalized Time Management Assistant. AAAI Spring Symposium Series, Stanford, CA, March.
• Partial preference updates: Joachims, T. Optimizing Search Engines using Clickthrough Data. Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD), ACM, 2002.
• User study on explaining statistical machine learning methods: Stumpf, S., Rajaram, V., Li, L., Burnett, M., Dietterich, T., Sullivan, E., Drummond, R., and Herlocker, J. Towards Harnessing User Feedback for Machine Learning. Conference on Intelligent User Interfaces (IUI-07), 2007 (to appear).

Active Preference Learning in PLIANT (Yorke-Smith et al. 2007)

System Workflow
1. Elicit initial preferences from the user (the A vector used in the evaluation function below)
2. User specifies new meeting parameters
3. Constraint solver generates candidate schedules (the Z's)
4. Candidate schedules are ranked using the evaluation function F'(Z)
5. Candidate schedules are presented to the user in (roughly) the calculated preference order, with explanations for each one
6. User can ask questions, then chooses a schedule Z
7. Preferences (the a_i and a_ij weights) are updated based on the choice
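The sketch below shows one way the SVM meta-information listed in the transparency bullets above could be extracted from a trained SVM. It is not the PLIANT implementation: Python, scikit-learn, NumPy, the function name svm_meta_information, the query point z_query, and the use of a plain binary linear SVC (rather than PLIANT's actual preference-learning formulation) are all assumptions made purely for illustration.

```python
# Illustrative sketch only (assumed libraries: scikit-learn, NumPy); not the PLIANT code.
import numpy as np
from sklearn.svm import SVC

def svm_meta_information(X, y, z_query, k=3):
    """Gather the meta-information listed above from a trained linear SVM."""
    X = np.asarray(X, dtype=float)
    z_query = np.asarray(z_query, dtype=float)
    clf = SVC(kernel="linear").fit(X, y)
    w_norm = np.linalg.norm(clf.coef_)                                # ||w||, defined for the linear kernel

    sv_idx = clf.support_                                             # support vectors identified by the SVM
    margin_query = clf.decision_function([z_query])[0] / w_norm      # (signed) margin to the query point
    margin_avg = np.mean(np.abs(clf.decision_function(X))) / w_norm  # average margin over all data points

    # Support vectors nearest to the query point
    d_sv = np.linalg.norm(clf.support_vectors_ - z_query, axis=1)
    nearest_sv = clf.support_vectors_[np.argsort(d_sv)[:k]]

    # Non-support vectors nearest to the query point
    non_sv_idx = np.setdiff1d(np.arange(len(X)), sv_idx)
    d_non = np.linalg.norm(X[non_sv_idx] - z_query, axis=1)
    nearest_non_sv = X[non_sv_idx][np.argsort(d_non)[:k]]

    return {
        "kernel": clf.kernel,                                         # kernel transform used, if any
        "support_vector_indices": sv_idx,
        "margin_to_query": margin_query,
        "average_margin": margin_avg,
        "nearest_support_vectors": nearest_sv,
        "nearest_non_support_vectors": nearest_non_sv,
    }
```

In the proposed system these quantities would then be serialized into a PML justification; as noted above, that representation is still being designed, so it is not sketched here.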
Features:
1. Scheduling windows for the requested meeting
2. Duration of the meeting
3. Overlaps and conflicts
4. Location of the meeting
5. Participants in the meeting
6. Preferences of other meeting participants

Model of preferences: an aggregation function, a 2-order Choquet integral over partial utility functions based on the features above, learning 21 coefficient weights (6 singleton weights a_i plus 15 pairwise weights a_ij):
    F(z_1, …, z_n) = Σ_i a_i·z_i + Σ_{i,j} a_ij·(z_i ∧ z_j)
where each z_i = u_i(x_i), the utility for criterion i based on value x_i.

Evaluation function: combine the learned weights with the initially elicited preferences:
    F'(Z) = α·(A·Z) + (1 - α)·(W·Z)

Each schedule chosen by the user provides information about a partial preference ordering, as in (Joachims 2002). (A small numerical sketch of these two scoring functions appears at the end of this transcript.)

Usability and Active Learning
Several user studies show that transparency is key to trusting learning systems:
• Our trust study
  - Lack of understanding of the preference update gives the appearance that preferences are being ignored, which seems untrustworthy.
  - Typical user reaction: "I trust [the system's] accuracy, but not its judgment."
• PTIME user study (Yorke-Smith et al. 2007)
  - "The preference model must be explainable to the user … in terms of familiar, domain-relevant concepts."
• Explaining statistical ML methods (Stumpf et al. 2007)
  - Examined explanations of a naïve Bayes learner and a rule-learning system (classification), not SVMs.
  - Rule-based explanations were most easily understood, but similarity-based explanations were found to be more natural and more easily trusted.

Our approach: extend similarity-based explanations to SVM learning.

Architecture
[Architecture diagram: components include the Calendar Manager, the Constraint Reasoner, PLIANT, and the SVM Explainer; labeled data flows include the scheduling request, solution set, presentation set, selected schedule, current preference profile, SVM meta-information, and explanation.]
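To make the two scoring formulas above concrete, here is a minimal numerical sketch, again in Python with NumPy (an assumption, not the PLIANT code). The interaction term is written as min(z_i, z_j), matching the z_i ∧ z_j reading of the 2-order Choquet form, and F'(Z) is implemented literally as the stated blend of dot products; all weights and utility values below are invented toy numbers.

```python
# Toy sketch of the poster's scoring functions; values and weights are illustrative only.
import numpy as np

def choquet_score(z, a_i, a_ij):
    """2-order Choquet aggregation: sum_i a_i*z_i + sum_{i,j} a_ij*min(z_i, z_j)."""
    z = np.asarray(z, dtype=float)
    score = float(np.dot(a_i, z))
    for (i, j), w in a_ij.items():          # pairwise interaction weights a_ij
        score += w * min(z[i], z[j])
    return score

def evaluation(Z, A, W, alpha=0.5):
    """Blended evaluation F'(Z) = alpha*(A.Z) + (1 - alpha)*(W.Z)."""
    Z = np.asarray(Z, dtype=float)
    return alpha * float(np.dot(A, Z)) + (1.0 - alpha) * float(np.dot(W, Z))

# Six partial utilities z_i = u_i(x_i), one per feature listed above.
z = [0.9, 0.4, 1.0, 0.7, 0.6, 0.8]
a_i = [0.20, 0.10, 0.30, 0.10, 0.20, 0.10]      # 6 singleton weights
a_ij = {(0, 2): 0.05, (3, 4): -0.02}            # a few of the 15 pairwise weights
A = [1.0 / 6] * 6                               # initially elicited preference vector (toy)

print(choquet_score(z, a_i, a_ij))              # learned Choquet score for this schedule
print(evaluation(z, A, W=a_i, alpha=0.3))       # blend of elicited and learned rankings
```

Here alpha simply trades off the initially elicited vector A against the learned weights W when ranking candidate schedules.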