1 Preference Reasoning in Logic Programming
José Júlio Alferes and Luís Moniz Pereira, Centro de Inteligência Artificial - CENTRIA, Universidade Nova de Lisboa, Portugal
Pierangelo Dell'Acqua and Aida Vitória, Dept. of Science and Technology - ITN, Linköping University, Sweden

2 Outline
1. Combining updates and preferences
2. User preference information in query answering
3. Preferring alternative explanations
4. Preferring and updating in multi-agent systems

3 References
[JELIA00] J. J. Alferes and L. M. Pereira. Updates plus Preferences. Proc. 7th European Conf. on Logics in Artificial Intelligence (JELIA'00), LNAI 1919, 2000.
[INAP01] P. Dell'Acqua and L. M. Pereira. Preferring and Updating in Logic-Based Agents. Selected Papers from the 14th Int. Conf. on Applications of Prolog (INAP'01), LNAI 2543, 2003.
[JELIA02] J. J. Alferes, P. Dell'Acqua and L. M. Pereira. A Compilation of Updates plus Preferences. Proc. 8th European Conf. on Logics in Artificial Intelligence (JELIA'02), LNAI 2424, 2002.
[FQAS02] P. Dell'Acqua, L. M. Pereira and A. Vitória. User Preference Information in Query Answering. 5th Int. Conf. on Flexible Query Answering Systems (FQAS'02), LNAI 2522, 2002.

4 1. Update reasoning
Updates model dynamically evolving worlds:
- knowledge, whether complete or incomplete, can be updated to reflect changes in the world;
- new knowledge may contradict and override older knowledge;
- updates differ from revisions, which revise an incomplete model of a static world.

5 Preference reasoning
Preferences are employed with incomplete knowledge, when several models are possible:
- preferences act by choosing some of the possible models;
- this is achieved via a partial order among rules: a rule only fires if it is not defeated by a more preferred rule.
Our preference approach is based on that of [KR98].
[KR98] G. Brewka and T. Eiter. Preferred Answer Sets for Extended Logic Programs. KR'98, 1998.

6 Preference and updates combined
Despite their differences, preferences and updates display similarities. Both can be seen as wiping out rules:
- preferences wipe out the less preferred rules, so as to remove undesired models;
- updates wipe out the older rules, also so as to obtain models of otherwise inconsistent theories.
This view helps put them together into a single uniform framework. In this framework, preferences can themselves be updated.

7 LP framework
Atomic formulae:
A          objective atom
not A      default atom
Formulae:
generalized rule    L0 ← L1, ..., Ln    where every Li is an objective or default atom
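One possible concrete representation of generalized rules, as a minimal Python sketch of our own (an illustration, not part of the framework): a rule carries its unique name from N, its head, and its body split into positive and default-negated atoms.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str                      # unique rule name from N, e.g. "r1"
    head: str                      # an atom A, or a default atom "not A"
    pos: frozenset = frozenset()   # atoms occurring positively in the body
    neg: frozenset = frozenset()   # atoms A occurring as "not A" in the body

# the rule "f <- not t, not n" from the TV example further below:
r1 = Rule("r1", "f", neg=frozenset({"t", "n"}))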

8 Let N be a set of constants containing a unique name for each generalized rule.
Def. Prioritized logic program
Let P be a set of generalized rules and R a set of priority rules. Then Π = (P, R) is a prioritized logic program.
priority rule    Z ← L1, ..., Ln    where Z is a literal r1 < r2 or not r1 < r2
r1 < r2 means that rule r1 is preferred to rule r2.

9 Dynamic prioritized programs
Let S = {1, ..., s, ...} be a set of states (natural numbers).
Def. Dynamic prioritized program
Let (Pi, Ri) be a prioritized logic program for every i ∈ S. Then P = ⊕{(Pi, Ri) : i ∈ S} is a dynamic prioritized program.
The meaning of such a sequence results from updating (P1, R1) with the rules from (P2, R2), and then updating the result with ... the rules from (Pn, Rn).

10 Example
Suppose a scenario where Stefano watches programmes on football, tennis, or the news.
(1) In the initial situation, being a typical Italian, Stefano prefers both football and tennis to the news and, in case of international competitions, he prefers tennis over football.
P1:  f ← not t, not n   (r1)
     t ← not f, not n   (r2)
     n ← not f, not t   (r3)
R1:  r1 < r3
     r2 < r3
     r2 < r1 ← us
     x < y ← x < z, z < y
In this situation, Stefano has two alternative TV programmes, equally preferable: football and tennis.
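As a quick, self-contained sanity check (our own Python illustration, independent of the framework's machinery), the stable models of P1 can be enumerated by brute force via the Gelfond-Lifschitz reduct; the priority rules r1 < r3 and r2 < r3 then discard {n}, and since us does not hold, r2 < r1 stays inactive, leaving {f} and {t} equally preferred.

from itertools import combinations

# rules of P1 as (head, positive_body, negated_atoms)
P1 = [("f", (), ("t", "n")),   # r1
      ("t", (), ("f", "n")),   # r2
      ("n", (), ("f", "t"))]   # r3
atoms = {"f", "t", "n"}

def least_model(positive_rules):
    """Least model of a definite program, by naive iteration."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if set(pos) <= m and head not in m:
                m.add(head); changed = True
    return m

def is_stable(m):
    """m is stable iff it equals the least model of the reduct of P1 by m."""
    reduct = [(h, p) for (h, p, n) in P1 if not (set(n) & m)]
    return least_model(reduct) == m

stable = [set(c) for k in range(len(atoms) + 1)
                 for c in combinations(sorted(atoms), k)
                 if is_stable(set(c))]
print(stable)   # [{'f'}, {'n'}, {'t'}]; the preferences then discard {'n'}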

11 (2) Next, suppose that a US Open tennis competition takes place:
P2:  us ←   (r4)        R2:  {}
Now Stefano's favourite programme is tennis.
(3) Finally, suppose that Stefano's preferences change and he becomes interested in international news. Then, in case of breaking news, he will prefer news over both football and tennis.
P3:  bn ←   (r5)
R3:  not (r1 < r3) ← bn
     not (r2 < r3) ← bn
     r3 < r1 ← bn
     r3 < r2 ← bn

12 Preferred stable models
Let P = ⊕{(Pi, Ri) : i ∈ S} be a dynamic prioritized program, Q = ∪{Pi ∪ Ri : i ∈ S}, PR = ∪i (Pi ∪ Ri), and M an interpretation of P.
Def. Default and rejected rules
Default(PR, M) = {not A : there is no rule (A ← L1, ..., Ln) in PR with M ⊨ L1, ..., Ln}
Reject(s, M, Q) = {r ∈ Pi ∪ Ri : ∃r' ∈ Pj ∪ Rj, head(r) = not head(r'), i < j ≤ s and M ⊨ body(r')}
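Read operationally, both definitions are plain set comprehensions. A sketch under the Rule representation introduced earlier (our own code; a dynamic program is a list of (Pi, Ri) pairs of rule sets, indexed from state 1, and head(r) = not head(r') is read with the usual convention not not A = A):

def holds(M, pos, neg):
    """M |= body: all positive atoms are in M, no negated atom is in M."""
    return set(pos) <= M and not (set(neg) & M)

def default(PR, M):
    """Default(PR, M): not A for each atom A with no rule in PR whose
    head is A and whose body holds in M."""
    supported = {r.head for r in PR if holds(M, r.pos, r.neg)}
    atoms = {a.removeprefix("not ")
             for r in PR for a in (r.head, *r.pos, *r.neg)}
    return {"not " + a for a in atoms if a not in supported}

def reject(s, M, program):
    """Reject(s, M, Q): rules of Pi u Ri rejected by a conflicting rule
    in some later Pj u Rj (i < j <= s) whose body holds in M."""
    conflict = lambda h1, h2: h1 == "not " + h2 or h2 == "not " + h1
    return {r
            for i, (Pi, Ri) in enumerate(program, start=1)
            for r in Pi | Ri
            for j in range(i + 1, s + 1)
            for rp in program[j - 1][0] | program[j - 1][1]
            if conflict(r.head, rp.head) and holds(M, rp.pos, rp.neg)}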

13 Def. Unsupported and unpreferred rules
Unsup(PR, M) = {r ∈ PR : M ⊨ head(r) and M ⊭ body-(r)}
Unpref(PR, M) is the least set including Unsup(PR, M) and every rule r such that:
∃r' ∈ (PR - Unpref(PR, M)) such that M ⊨ r' < r, M ⊨ body+(r') and
[not head(r') ∈ body-(r), or (not head(r) ∈ body-(r') and M ⊨ body(r))]
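The least set in the definition of Unpref can be computed as an iterated fixed point. A sketch continuing the code above (pref stands for the set of priority pairs (r', r) with M ⊨ r' < r; for brevity, rule heads are assumed objective):

def unsup(PR, M):
    """Unsup(PR, M): head true in M, but the negative body falsified by M
    (some 'not A' in the body has A in M)."""
    return {r for r in PR if r.head in M and (set(r.neg) & M)}

def unpref(PR, M, pref):
    """Unpref(PR, M): grow from Unsup(PR, M) by adding any rule r defeated
    by a not-yet-unpreferred rule r' with M |= r' < r and M |= body+(r')."""
    U = set(unsup(PR, M))
    changed = True
    while changed:
        changed = False
        for r in PR - U:
            if any((rp.name, r.name) in pref and set(rp.pos) <= M and
                   (rp.head in r.neg or                  # not head(r') in body-(r)
                    (r.head in rp.neg and                # not head(r) in body-(r')
                     holds(M, r.pos, r.neg)))            # ... and M |= body(r)
                   for rp in PR - U):
                U.add(r); changed = True
    return U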

14 Def. Preferred stable models
Let s be a state, P = ⊕{(Pi, Ri) : i ∈ S} a dynamic prioritized program, and M a stable model of P. M is a preferred stable model of P at state s iff
M = least( [X - Unpref(X, M)] ∪ Default(PR, M) )
where:  PR = ∪i≤s (Pi ∪ Ri)
        Q = ∪{Pi ∪ Ri : i ∈ S}
        X = PR - Reject(s, M, Q)
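Gluing the helper functions above together gives a direct, if naive, test of this definition (again our own sketch; the priority pairs true in M are passed in as pref, and the least operator reads the default literals as extra facts):

def preferred_stable(M, program, s, pref):
    """Check M = least([X - Unpref(X, M)] u Default(PR, M)) at state s."""
    PR = {r for (Pi, Ri) in program[:s] for r in Pi | Ri}
    X = PR - reject(s, M, program)
    kept = X - unpref(X, M, pref)
    # least model of the kept rules, with the default literals "not A"
    # of Default(PR, M) taken as facts
    m = set(default(PR, M))
    changed = True
    while changed:
        changed = False
        for r in kept:
            body = set(r.pos) | {"not " + a for a in r.neg}
            if body <= m and r.head not in m:
                m.add(r.head); changed = True
    return {a for a in m if not a.startswith("not ")} == M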

15 The transformation τ(s, P)
Let s be a state and P = ⊕{(Pi, Ri) : i ∈ S} a dynamic prioritized program. In [JELIA02] we gave a transformation τ(s, P) that compiles dynamic prioritized programs into normal logic programs.
The preference part of the transformation is modular, i.e. incremental, wrt. its update part.
In the worst case, the size of the transformed program τ(s, P) is quadratic in the size of the original dynamic prioritized program P.
An implementation of the transformation is available at:

16 Thm. Correctness of τ(s, P)
An interpretation M is a stable model of τ(s, P) iff M, restricted to the language of P, is a preferred stable model of P at state s.

17 2. User preference information in query answering
Query answering systems are often difficult to use because they do not attempt to cooperate with their users. The use of additional information about the user can enhance the cooperative behaviour of query answering systems [FQAS02].

18 Consider a system whose knowledge is formalized by a prioritized logic program Π = (P, R).
An extra level of flexibility is obtained if the user can provide preference information at query time:
?- (G, Pref)
Given Π = (P, R), the system has to derive G from P, taking into account the preferences in R updated by the preferences in Pref.
Finally, it is desirable to make the background knowledge (P, R) of the system updatable, so that it can be modified to reflect changes in the world (including changes of preferences).

19 The ability to take user information into account makes the system able to target its answers to the user's goals and interests.
Def. Queries with preferences
Let G be a goal, Π a prioritized logic program and P = ⊕{(Pi, Ri) : i ∈ S} a dynamic prioritized program. Then ?- (G, Π) is a query wrt. P.

20 Joinability function
S+ = S ∪ {max(S) + 1}
Def. Joinability at state s
Let s ∈ S+ be a state, P = ⊕{(Pi, Ri) : i ∈ S} a dynamic prioritized program and Π = (PX, RX) a prioritized logic program. The joinability function ⋈s at state s is:
P ⋈s Π = ⊕{(P'i, R'i) : i ∈ S+}, where
(P'i, R'i) = (Pi, Ri)         if 1 ≤ i < s
           = (PX, RX)         if i = s
           = (Pi-1, Ri-1)     if s < i ≤ max(S+)
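Operationally, joinability is just list surgery on the sequence of pairs: the user's program (PX, RX) is spliced in at position s, and the remaining pairs shift up by one state. A minimal sketch under our list-of-pairs representation:

def join(program, user_pair, s):
    """program ⋈s user_pair, where program = [(P1, R1), ..., (Pn, Rn)]
    and 1 <= s <= len(program) + 1 (i.e. s ranges over S+)."""
    assert 1 <= s <= len(program) + 1
    return program[:s - 1] + [user_pair] + program[s - 1:]

# join(program, (PX, RX), 1) places the user's pair first, so all later
# system states override it (the ⋈1 of the car-dealer discussion below);
# join(program, (PX, RX), len(program) + 1) appends it, giving the user's
# preferences the last word (⋈max(S+)).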

21 Preferred conclusions
Def. Preferred conclusions
Let s ∈ S+ be a state and P = ⊕{(Pi, Ri) : i ∈ S} a dynamic prioritized program. The preferred conclusions of P with joinability function ⋈s are:
{(G, Π) : G is included in every preferred stable model of P ⋈s Π at state max(S+)}

22 Example: car dealer
Consider the following program, which formalizes the process of quoting prices for second-hand cars.
price(Car,200) ← stock(Car,Col,T), not price(Car,250), not offer   (r1)
price(Car,250) ← stock(Car,Col,T), not price(Car,200), not offer   (r2)
prefer(orange) ← not prefer(black)   (r3)
prefer(black) ← not prefer(orange)   (r4)
stock(Car,Col,T) ← bought(Car,Col,Date), T = today - Date   (r5)

23 When the company sells a car, the car must be removed from the stock via an update:
not bought(volvo,black,d2)
When the company buys a car, the information about the car must be added to the stock via an update:
bought(fiat,orange,d1)

24 The selling strategy of the company can be formalized by rules r1-r5 above, together with the priority rules:
r2 < r1 ← stock(Car,Col,T), T < 10
r1 < r2 ← stock(Car,Col,T), T ≥ 10, not prefer(Col)
r2 < r1 ← stock(Car,Col,T), T ≥ 10, prefer(Col)
r4 < r3

25 Suppose that the company adopts the policy of offering a special price for cars at certain times of the year:
price(Car,100) ← stock(Car,Col,T), offer   (r6)
not offer
Suppose an orange fiat bought on date d1 is in stock and offer does not hold. Independently of the joinability function used:
?- ( price(fiat,P), ({},{}) )
P = 250 if today - d1 < 10
P = 200 if today - d1 ≥ 10

26 ?- ( price(fiat,P), ({}, {not (r4 < r3), r3 < r4}) )
P = 250
For this query it is relevant which joinability function is used:
- if we use ⋈1, we do not get the intended answer, since the user preferences are overwritten by the default preferences of the company;
- on the other hand, it is not appropriate to use ⋈max(S+) either, since a customer could then ask:
?- ( price(fiat,P), ({offer},{}) )

27 Selecting a joinability function
In some applications the user preferences in Π must have priority over the preferences in P. In this case, the joinability function ⋈max(S+) must be used.
Example: a web-site application of a travel agency whose database P maintains information about holiday resorts and preferences among tourist locations. When a user asks a query ?- (G, Π), the system must give priority to Π.
Other applications need the joinability function ⋈1 instead, to give priority to the preferences in P.

28 Open issues
- Detecting inconsistent preference specifications.
- Incorporating abduction in our framework: abductive preferences would lead to conditional answers, holding on condition of accepting a preference.
- Tackling the problems that arise when several users query the system together.

29 3. Preferring abducibles
The evaluation of alternative explanations is one of the central problems of abduction. An abductive problem of reasonable size may face a combinatorial explosion of possible explanations. It is important to generate only the explanations that are relevant.
Some proposals involve a global criterion against which each explanation as a whole can be evaluated. A general drawback of such approaches is that global criteria are typically domain independent and computationally expensive.
An alternative to global criteria is to allow the theory to contain rules encoding domain-specific information about the likelihood that a particular assumption is true.

30 In our approach we can express preferences among abducibles to discard the unwanted assumptions. Preferences over alternative abducibles can be coded into cycles over negation, and preferring a rule will break the cycle in favour of one abducible or another.

31 Example
Consider a situation where an agent, Peter, drinks either tea or coffee (but not both), and prefers coffee to tea when sleepy. This situation can be represented by the following set Q of generalized rules, with set of abducibles A_Q = {tea, coffee}:
Q:  drink ← tea
    drink ← coffee
    coffee ◁ tea ← sleepy
where a ◁ b means that abducible a is preferred to abducible b.

32 In our framework, Q can be coded into the following set P of generalized rules, with set of abducibles A_P = {abduce}:
P:  drink ← tea
    drink ← coffee
    coffee ← abduce, not tea, confirm(coffee)   (r1)
    tea ← abduce, not coffee, confirm(tea)   (r2)
    confirm(tea) ← expect(tea), not expect_not(tea)
    confirm(coffee) ← expect(coffee), not expect_not(coffee)
    expect(tea)
    expect(coffee)
    r1 < r2 ← sleepy, confirm(coffee)

33 The notion of expectation allows one to express the preconditions for an expectation:
expect(tea) ← have_tea
expect(coffee) ← have_coffee
By means of expect_not one can express situations where something is not expected:
expect_not(coffee) ← blood_pressure_high

34 4. Preferring and updating in multi-agent systems
In [INAP01] we proposed a logic-based approach to agents that can:
- reason and react to other agents
- prefer among possible choices
- intend to reason and to act
- update their own knowledge, reactions and goals
- interact by updating the theory of another agent
- decide whether to accept an update, depending on the requesting agent

35 Updating agents
Updating agent: a rational, reactive agent that can dynamically change its own knowledge and goals:
- makes observations
- reciprocally updates other agents with goals and rules
- thinks a bit (rational)
- selects and executes an action (reactive)

36 Preferring agents
Preferring agent: an agent that is able to prefer among beliefs and reactions when several alternatives are possible:
- agents can express preferences about their own rules;
- preferences are expressed via priority rules;
- preferences can be updated, possibly on advice from others.

37 Agent's language
Atomic formulae:
A          objective atoms
not A      default atoms
i:C        projects
i÷C        updates
Formulae (each Li is an atom, an update or a negated update; each Zj is a project):
generalized/priority rules    A ← L1, ..., Ln        not A ← L1, ..., Ln
integrity constraint          false ← L1, ..., Ln, Z1, ..., Zm
active rule                   L1, ..., Ln ⇒ Z

38 Agent's knowledge states
Knowledge states represent the dynamically evolving states of an agent's knowledge. They undergo change due to updates.
Given the current knowledge state Ps, its successor knowledge state Ps+1 is produced as a result of the occurrence of a set of parallel updates.
Update actions do not modify the current or any of the previous knowledge states. They only affect the successor state: the precondition of an action is evaluated in the current state, and its postcondition updates the successor state.
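This two-step regime (read the current state, write only the successor) can be pictured with a deliberately simplified, atom-level Python toy of our own; in the framework proper, updates install or override rules rather than add or delete atoms:

def step(state, active_rules):
    """One update cycle. Preconditions of active rules are evaluated on
    the CURRENT state only; the fired projects touch only the SUCCESSOR.
    state: frozenset of atoms; active_rules: list of (condition_atoms,
    project) pairs with project = ("add" | "del", atom). A toy sketch."""
    fired = [proj for cond, proj in active_rules if cond <= state]
    succ = set(state)
    for op, atom in fired:
        (succ.add if op == "add" else succ.discard)(atom)
    return frozenset(succ)   # previous states are left untouched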

39 Projects and updates
A project j:C denotes the intention of some agent i of proposing to update the theory of agent j with C.
i÷C denotes an update, proposed by agent i, of the current theory of some agent j with C.
Examples: wilma:C is a project concerning agent wilma; Fred÷C is an update proposed by Fred.

40 Representation of conflicting information and preferences
Preferences may resolve conflicting information. This example models a situation where an agent, Fabio, receives conflicting advice from two reliable authorities. Let (P, R) be the initial theory of Fabio, where R = {} and:
P:  dont(A) ← fa(noA), not do(A)   (r1)
    do(A) ← ma(A), not dont(A)   (r2)
    false ← do(A), fa(noA)
    false ← dont(A), ma(A)
    r1 < r2 ← fr
    r2 < r1 ← mr
(fa = father advises, ma = mother advises, fr = father responsibility, mr = mother responsibility)

41 Suppose that Fabio wants to live alone, represented as LA. His mother advises him to do so, but his father advises him not to:
U1 = { mother÷ma(LA), father÷fa(noLA) }
Fabio accepts both updates, and is therefore still unable to choose either do(LA) or dont(LA); as a result, he does not perform any action whatsoever.

42 Afterwards, Fabio's parents separate and the judge assigns responsibility over Fabio to the mother:
U2 = { judge÷mr }
Now the situation changes, since the second priority rule gives preference to the mother's wishes, and therefore Fabio can happily conclude "do live alone".

43 Updating preferences
Within the theory of an agent, both rules and preferences can be updated. The updating process is triggered by means of external or internal projects. Here, internal projects of an agent are used to update its own priority rules.

44 Let the theory of Stan be characterized by:
P:  workLate ← not party   (r1)
    party ← not workLate   (r2)
    money ← workLate   (r3)
    r2 < r1   % partying is preferred to working until late
R:  beautifulWoman ⇒ stan:wishGoOut
    wishGoOut, not money ⇒ stan:getMoney
    wishGoOut, money ⇒ beautifulWoman:inviteOut
    getMoney ⇒ stan:(r1 < r2)
    getMoney ⇒ stan:(not r2 < r1)
% to get money, Stan must update his priority rules