
1 Preference Reasoning in Logic Programming
José Júlio Alferes, Luís Moniz Pereira
Centro de Inteligência Artificial - CENTRIA, Universidade Nova de Lisboa, Portugal
{jja, lmp}@di.fct.unl.pt
Pierangelo Dell'Acqua, Aida Vitória
Dept. of Science and Technology - ITN, Linköping University, Sweden
{pier, aidvi}@itn.liu.se

2 Outline
1. Combining updates and preferences
2. User preference information in query answering
3. Preferring alternative explanations
4. Preferring and updating in multi-agent systems

3 References
[JELIA00] J. J. Alferes and L. M. Pereira. Updates plus Preferences. Proc. 7th European Conf. on Logics in Artificial Intelligence (JELIA00), LNAI 1919, 2000.
[INAP01] P. Dell'Acqua and L. M. Pereira. Preferring and Updating in Logic-Based Agents. Selected Papers from the 14th Int. Conf. on Applications of Prolog (INAP01), LNAI 2543, 2003.
[JELIA02] J. J. Alferes, P. Dell'Acqua and L. M. Pereira. A Compilation of Updates plus Preferences. Proc. 8th European Conf. on Logics in Artificial Intelligence (JELIA02), LNAI 2424, 2002.
[FQAS02] P. Dell'Acqua, L. M. Pereira and A. Vitória. User Preference Information in Query Answering. 5th Int. Conf. on Flexible Query Answering Systems (FQAS02), LNAI 2522, 2002.

4 1. Update reasoning
Updates model dynamically evolving worlds:
knowledge, whether complete or incomplete, can be updated to reflect changes in the world.
new knowledge may contradict and override older knowledge.
updates differ from revisions, which concern an incomplete model of a static world.

5 Preference reasoning
Preferences are employed with incomplete knowledge, when several models are possible:
preferences act by choosing some of the possible models.
this is achieved via a partial order among rules: rules only fire if they are not defeated by more preferred rules.
Our preference approach is based on that of [KR98].
[KR98] G. Brewka and T. Eiter. Preferred answer sets for extended logic programs. KR'98, 1998.

6 Preferences and updates combined
Despite their differences, preferences and updates display similarities.
Both can be seen as wiping out rules:
in preferences, the less preferred rules, so as to remove undesired models.
in updates, the older rules, including so as to obtain models of otherwise inconsistent theories.
This view helps to put them together into a single uniform framework.
In this framework, preferences can themselves be updated.

7 LP framework
Atomic formulae:
A (objective atom)
not A (default atom)
Formulae:
generalized rule: L0 ← L1 ∧ … ∧ Ln
where every Li is an objective or default atom.

8 Let N be a set of constants containing a unique name for each generalized rule.
priority rule: Z ← L1 ∧ … ∧ Ln, where Z is a literal r1 < r2 or not r1 < r2.
r1 < r2 means that rule r1 is preferred to rule r2.
Def. Prioritized logic program
Let P be a set of generalized rules and R a set of priority rules. Then Π = (P, R) is a prioritized logic program.

9 Dynamic prioritized programs
Let S = {1, …, s, …} be a set of states (natural numbers).
Def. Dynamic prioritized program
Let (Pi, Ri) be a prioritized logic program for every i ∈ S; then P = ⊕{(Pi, Ri) : i ∈ S} is a dynamic prioritized program.
The meaning of such a sequence results from updating (P1, R1) with the rules from (P2, R2), and then updating the result with … the rules from (Pn, Rn).

10 Example
Suppose a scenario where Stefano watches programs on football, tennis, or the news.
(1) In the initial situation, being a typical Italian, Stefano prefers both football and tennis to the news and, in case of international competitions, he prefers tennis over football. In this situation, Stefano has two alternative TV programmes, equally preferable: football and tennis.
P1:
f ← not t, not n (r1)
t ← not f, not n (r2)
n ← not f, not t (r3)
R1:
r1 < r3
r2 < r3
r2 < r1 ← us
x < y ← x < z, z < y
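Before any priority rule is taken into account, the three programme rules alone already have three stable models. A minimal sketch in clingo-style syntax (the preference layer is not plain answer set programming and is only indicated in comments):

% P1 without the priority rules: three mutually exclusive choices
f :- not t, not n.
t :- not f, not n.
n :- not f, not t.
% an ASP solver reports three answer sets: {f}, {t} and {n}.
% The priority rules r1 < r3 and r2 < r3 then discard {n},
% leaving football and tennis as the two equally preferred alternatives.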

11 (2) Next, suppose that a US-open tennis competition takes place. Now, Stefano's favourite programme is tennis.
P2:
us ← (r4)
R2: {}
(3) Finally, suppose that Stefano's preferences change and he becomes interested in international news. Then, in case of breaking news, he will prefer news over both football and tennis.
P3:
bn ← (r5)
R3:
not (r1 < r3) ← bn
not (r2 < r3) ← bn
r3 < r1 ← bn
r3 < r2 ← bn

12 Preferred stable models
Let P = ⊕{(Pi, Ri) : i ∈ S} be a dynamic prioritized program, Q = ⋃{Pi ∪ Ri : i ∈ S}, PR = ⋃i (Pi ∪ Ri), and M an interpretation of P.
Def. Default and rejected rules
Default(PR, M) = {not A : there is no rule (A ← L1, …, Ln) in PR such that M ⊨ L1, …, Ln}
Reject(s, M, Q) = {r ∈ Pi ∪ Ri : ∃ r' ∈ Pj ∪ Rj, head(r) = not head(r'), i < j ≤ s and M ⊨ body(r')}

13 Def. Unsupported and unpreferred rules
Unsup(PR, M) = {r ∈ PR : M ⊨ head(r) and M ⊭ body⁻(r)}
Unpref(PR, M) is the least set including Unsup(PR, M) and every rule r such that:
∃ r' ∈ (PR − Unpref(PR, M)) : M ⊨ r' < r, M ⊨ body⁺(r') and
[not head(r') ∈ body⁻(r) or (not head(r) ∉ body⁻(r') and M ⊨ body(r))]

14 Def. Preferred stable models
Let s be a state, P = ⊕{(Pi, Ri) : i ∈ S} a dynamic prioritized program, and M a stable model of P.
M is a preferred stable model of P at state s iff
M = least([X − Unpref(X, M)] ∪ Default(PR, M))
where:
PR = ⋃i≤s (Pi ∪ Ri)
Q = ⋃{Pi ∪ Ri : i ∈ S}
X = PR − Reject(s, M, Q)
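To see these definitions at work, consider the Stefano example at state s = 2, after the update with us (a sketch, with the priority atoms left implicit). Take the stable model M containing t and us. Nothing is rejected, since no later rule has a head conflicting with an earlier one. Because us holds, r2 < r1 is true in M; r2 is not itself unpreferred, its empty positive body is satisfied, and not t occurs in the negative body of r1, so r1 is in Unpref. The same argument with r2 < r3 puts r3 in Unpref. The remaining rules derive exactly t and us, and Default(PR, M) contributes not f and not n, so the least-model condition holds and M is preferred. The symmetric stable model containing f instead of t fails the condition, because there too r1 is defeated by the more preferred r2 and f is no longer derivable: at state 2, tennis is Stefano's only preferred programme, as intended.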

15 The τ(s, P) transformation
Let s be a state and P = ⊕{(Pi, Ri) : i ∈ S} a dynamic prioritized program.
In [JELIA02] we gave a transformation τ(s, P) that compiles dynamic prioritized programs into normal logic programs.
The preference part of the transformation is modular and incremental wrt. the update part.
The size of the transformed program τ(s, P) is, in the worst case, quadratic in the size of the original dynamic prioritized program P.
An implementation of the transformation is available at: http://centria.di.fct.unl.pt/~jja/updates

16 Thm. Correctness of τ(s, P)
An interpretation M is a stable model of τ(s, P) iff M, restricted to the language of P, is a preferred stable model of P at state s.

17 2. User preference information in query answering
Query answering systems are often difficult to use because they do not attempt to cooperate with their users.
The use of additional information about the user can enhance the cooperative behaviour of query answering systems [FQAS02].

18 Consider a system whose knowledge is formalized by a prioritized logic program Π = (P, R).
An extra level of flexibility is gained if the user can provide preference information at query time:
?- (G, Pref)
Given Π = (P, R), the system has to derive G from P by taking into account the preferences in R, updated by the preferences in Pref.
Finally, it is desirable to make the background knowledge (P, R) of the system updatable, so that it can be modified to reflect changes in the world (including changes of preferences).

19 The ability to take user information into account makes the system able to target its answers to the user's goals and interests.
Def. Queries with preferences
Let G be a goal, Π a prioritized logic program and P = ⊕{(Pi, Ri) : i ∈ S} a dynamic prioritized program. Then ?- (G, Π) is a query wrt. P.

20 Joinability function
S⁺ = S ∪ {max(S) + 1}
Def. Joinability at state s
Let s ∈ S⁺ be a state, P = ⊕{(Pi, Ri) : i ∈ S} a dynamic prioritized program and Π = (PX, RX) a prioritized logic program. The joinability function ⊗s at state s is:
P ⊗s Π = ⊕{(P'i, R'i) : i ∈ S⁺} where
(P'i, R'i) = (Pi, Ri) if 1 ≤ i < s
(P'i, R'i) = (PX, RX) if i = s
(P'i, R'i) = (Pi-1, Ri-1) if s < i ≤ max(S⁺)
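For instance (a sketch), if S = {1, 2} then S⁺ = {1, 2, 3}. With s = 3 = max(S) + 1, P ⊗3 Π is the sequence (P1, R1), (P2, R2), (PX, RX): the user's program Π arrives as the most recent update and its preferences may override those in P. With s = 1 the order is (PX, RX), (P1, R1), (P2, R2), so Π plays the role of initial theory and each later (Pi, Ri) may override it.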

21 Preferred conclusions
Def. Preferred conclusions
Let s ∈ S⁺ be a state and P = ⊕{(Pi, Ri) : i ∈ S} a dynamic prioritized program. The preferred conclusions of P with joinability function ⊗s are:
{(G, Π) : G is included in every preferred stable model of P ⊗s Π at state max(S⁺)}

22 Example: car dealer
Consider the following program, which exemplifies the process of quoting prices for second-hand cars.
price(Car, 200) ← stock(Car, Col, T), not price(Car, 250), not offer (r1)
price(Car, 250) ← stock(Car, Col, T), not price(Car, 200), not offer (r2)
prefer(orange) ← not prefer(black) (r3)
prefer(black) ← not prefer(orange) (r4)
stock(Car, Col, T) ← bought(Car, Col, Date), T = today - Date (r5)

23 When the company sells a car, the company must remove the car from the stock:
not bought(volvo, black, d2)
When the company buys a car, the information about the car must be added to the stock via an update:
bought(fiat, orange, d1)

24 The selling strategy of the company can be formalized as:
price(Car, 200) ← stock(Car, Col, T), not price(Car, 250), not offer (r1)
price(Car, 250) ← stock(Car, Col, T), not price(Car, 200), not offer (r2)
prefer(orange) ← not prefer(black) (r3)
prefer(black) ← not prefer(orange) (r4)
stock(Car, Col, T) ← bought(Car, Col, Date), T = today - Date (r5)
r2 < r1 ← stock(Car, Col, T), T < 10
r1 < r2 ← stock(Car, Col, T), T ≥ 10, not prefer(Col)
r2 < r1 ← stock(Car, Col, T), T ≥ 10, prefer(Col)
r4 < r3

25 Suppose that the company adopts the policy of offering a special price for cars at certain times of the year:
price(Car, 100) ← stock(Car, Col, T), offer (r6)
not offer
Suppose an orange fiat bought on date d1 is in stock and offer does not hold. Independently of the joinability function used:
?- (price(fiat, P), ({}, {}))
P = 250 if today - d1 < 10
P = 200 if today - d1 ≥ 10
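As a sanity check, the price rules alone already produce the two candidate quotes between which the priority rules arbitrate. A minimal clingo-style sketch, with made-up stand-in values for today and d1 (the preference layer is not plain ASP and is only indicated in comments):

% base rules of the car-dealer example; today = 20 and d1 = 5 are hypothetical
#const today = 20.
#const d1 = 5.
bought(fiat, orange, d1).
stock(Car, Col, T) :- bought(Car, Col, Date), T = today - Date.
price(Car, 200) :- stock(Car, Col, T), not price(Car, 250), not offer.
price(Car, 250) :- stock(Car, Col, T), not price(Car, 200), not offer.
% two answer sets, one with price(fiat,200) and one with price(fiat,250);
% here T = 15 >= 10 and orange is not preferred by default (r4 < r3),
% so the priority rule r1 < r2 selects the model with price(fiat,200),
% matching the answer P = 200 above.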

26 ?- (price(fiat, P), ({}, {not (r4 < r3), r3 < r4}))
P = 250
For this query it is relevant which joinability function is used:
if we use ⊗1, we do not get the intended answer, since the user preferences come first in the sequence and are overwritten by the default preferences of the company;
on the other hand, it is not so appropriate to use ⊗max(S⁺), since a customer could then also ask, obtaining the special price:
?- (price(fiat, P), ({offer}, {}))

27 Selecting a joinability function
In some applications the user preferences in Π must have priority over the preferences in P. In this case, the joinability function ⊗max(S⁺) must be used.
Example: a web-site application of a travel agency whose database P maintains information about holiday resorts and preferences among tourist locations. When a user asks a query ?- (G, Π), the system must give priority to Π.
Other applications instead need the joinability function ⊗1, which gives priority to the preferences in P.

28 Open issues
Detecting inconsistent preference specifications.
How to incorporate abduction in our framework: abductive preferences leading to conditional answers, depending on accepting a preference.
How to tackle the problems arising when several users query the system simultaneously.

29 3. Preferring abducibles
The evaluation of alternative explanations is one of the central problems of abduction.
An abductive problem of reasonable size may suffer a combinatorial explosion of possible explanations to handle. It is important to generate only the explanations that are relevant.
Some proposals involve a global criterion against which each explanation, as a whole, can be evaluated. A general drawback of such approaches is that global criteria are usually domain independent and computationally expensive.
An alternative to global criteria is to allow the theory to contain rules encoding domain-specific information about the likelihood that a particular assumption is true.

30 In our approach we can express preferences among abducibles in order to discard unwanted assumptions.
Preferences over alternative abducibles can be coded into cycles over negation; preferring a rule then breaks the cycle in favour of one abducible or the other.

31 Example
Consider a situation where an agent, Peter, drinks either tea or coffee (but not both). Suppose that Peter prefers coffee to tea when sleepy. This situation can be represented by a set Q of generalized rules with set of abducibles AQ = {tea, coffee}:
Q:
drink ← tea
drink ← coffee
coffee ◁ tea ← sleepy
where a ◁ b means that abducible a is preferred to abducible b.

32 In our framework, Q can be coded into the following set P of generalized rules with set of abducibles AP = {abduce}:
P:
drink ← tea
drink ← coffee
coffee ← abduce, not tea, confirm(coffee) (r1)
tea ← abduce, not coffee, confirm(tea) (r2)
confirm(tea) ← expect(tea), not expect_not(tea)
confirm(coffee) ← expect(coffee), not expect_not(coffee)
expect(tea)
expect(coffee)
r1 < r2 ← sleepy, confirm(coffee)
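A minimal runnable sketch of this cycle-over-negation encoding in clingo-style syntax, with the single abducible abduce asserted as a fact purely for illustration (the priority rule belongs to the preference layer and is only indicated in a comment):

% tea/coffee encoding; 'abduce.' is assumed here just to exercise the cycle
abduce.
expect(tea).
expect(coffee).
confirm(tea) :- expect(tea), not expect_not(tea).
confirm(coffee) :- expect(coffee), not expect_not(coffee).
coffee :- abduce, not tea, confirm(coffee).   % r1
tea :- abduce, not coffee, confirm(tea).      % r2
drink :- tea.
drink :- coffee.
% two answer sets: one containing coffee, one containing tea.
% Under the preferred-stable-model semantics, r1 < r2 <- sleepy, confirm(coffee)
% discards the tea model whenever sleepy holds.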

33 Having the notion of expectation allows one to express the preconditions for an expectation:
expect(tea) ← have_tea
expect(coffee) ← have_coffee
By means of expect_not one can express situations in which something is not expected:
expect_not(coffee) ← blood_pressure_high

34 4. Preferring and updating in multi-agent systems
In [INAP01] we proposed a logic-based approach to agents that can:
Reason and react to other agents
Prefer among possible choices
Intend to reason and to act
Update their own knowledge, reactions and goals
Interact by updating the theory of another agent
Decide whether to accept an update depending on the requesting agent

35 Updating agents
Updating agent: a rational, reactive agent that can dynamically change its own knowledge and goals. It:
makes observations
reciprocally updates other agents with goals and rules
thinks a bit (rational)
selects and executes an action (reactive)

36 Preferring agents
Preferring agent: an agent that is able to prefer beliefs and reactions when several alternatives are possible.
Agents can express preferences about their own rules.
Preferences are expressed via priority rules.
Preferences can be updated, possibly on advice from others.

37 Agent's language
Atomic formulae:
A (objective atom)
not A (default atom)
i:C (project)
i÷C (update)
Formulae (every Li is an atom, an update or a negated update; every Zj is a project):
generalized/priority rules: A ← L1 ∧ … ∧ Ln and not A ← L1 ∧ … ∧ Ln
integrity constraint: false ← L1 ∧ … ∧ Ln ∧ Z1 ∧ … ∧ Zm
active rule: L1 ∧ … ∧ Ln ⇒ Z
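For instance, borrowing atoms from the examples below: dont(A) ← fa(noA), not do(A) is a generalized rule, false ← do(A), fa(noA) is an integrity constraint, and beautifulWoman ⇒ stan:wishGoOut is an active rule whose head is the project stan:wishGoOut.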

38 Agent's knowledge states
Knowledge states represent dynamically evolving states of an agent's knowledge. They undergo change due to updates.
Given the current knowledge state Ps, its successor knowledge state Ps+1 is produced as a result of the occurrence of a set of parallel updates.
Update actions do not modify the current or any previous knowledge state. They only affect the successor state: the precondition of the action is evaluated in the current state, and the postcondition updates the successor state.

39 Projects and updates
A project j:C denotes the intention of some agent i to propose updating the theory of agent j with C, e.g. wilma:C.
An update i÷C denotes an update, proposed by agent i, of the current theory of some agent j with C, e.g. fred÷C.

40 Representation of conflicting information and preferences
Preferences may resolve conflicting information. This example models a situation where an agent, Fabio, receives conflicting advice from two reliable authorities. Let (P, R) be the initial theory of Fabio, where R = {} and P is:
dont(A) ← fa(noA), not do(A) (r1)
do(A) ← ma(A), not dont(A) (r2)
false ← do(A), fa(noA)
false ← dont(A), ma(A)
r1 < r2 ← fr
r2 < r1 ← mr
where fa = father advises, ma = mother advises, fr = father responsibility, mr = mother responsibility.

41 Suppose that Fabio wants to live alone, represented as LA. His mother advises him to do so, but his father advises him not to:
U1 = { mother ÷ ma(LA), father ÷ fa(noLA) }
Fabio accepts both updates, and is therefore still unable to choose either do(LA) or dont(LA); as a result, he does not perform any action whatsoever.

42 Afterwards, Fabio's parents separate and the judge assigns responsibility over Fabio to the mother:
U2 = { judge ÷ mr }
Now the situation changes, since the second priority rule gives preference to the mother's wishes, and therefore Fabio can happily conclude do(LA), "do live alone".

43 Updating preferences
Within the theory of an agent, both rules and preferences can be updated. The updating process is triggered by means of external or internal projects. Here, internal projects of an agent are used to update its own priority rules.

44 Let the theory of Stan be characterized by:
P:
workLate ← not party (r1)
party ← not workLate (r2)
money ← workLate (r3)
r2 < r1 % partying is preferred to working until late
R:
beautifulWoman ⇒ stan:wishGoOut
wishGoOut, not money ⇒ stan:getMoney
wishGoOut, money ⇒ beautifulWoman:inviteOut
getMoney ⇒ stan:(r1 < r2)
getMoney ⇒ stan:not (r2 < r1) % to get money, Stan must update his priority rules
Reading the active rules: seeing a beautifulWoman makes Stan adopt the goal wishGoOut; without money this triggers the internal project getMoney, by which Stan updates his own priorities so that working late becomes preferred to partying, money is then derived, and he can finally invite the woman out.

