1 CS 2710, ISSP 2160 The Situation Calculus KR and Planning Some final topics in KR

Situation Calculus Planning in propositional logic: Section 7.7 through Section Handouts 2

Other topics in KR Semantic Networks: [Description Logic: we didn’t cover this] [Satisfiability and WalkSAT: Intro to Section 7.6; 7.6.2: we didn’t cover this] 3

4 Actions, Situations, and Events: The Situation Calculus
The robot is in the kitchen. –in(robot,kitchen)
He walks into the living room. –in(robot,livingRoom)
With time stamps: in(robot,kitchen,2:02pm), in(robot,livingRoom,2:17pm)
But what if you are not sure when it was? We can do something simpler than rely on time stamps…
The Situation Calculus is a logical formalism for representing and reasoning about dynamic domains.

5 Situation Calculus Ontology
Actions: terms, such as “forward” and “turn(right)”
Situations: terms; an initial situation, say s0, and all situations that are generated by applying an action to a situation. result(a,s) names the situation resulting when action a is done in situation s.

6 Situation Calculus Ontology continued
Fluents: functions and predicates that vary from one situation to the next. By convention, the situation is the last argument of the fluent. Example: ~holding(robot,gold,s0)
Atemporal or eternal predicates and functions do not change from situation to situation. Examples: gold(g1), lastName(wumpus,smith), adjacent(livingRoom,kitchen).

7 Sequences of Actions
Also useful to reason about action sequences:
All S resultSeq([],S) = S
All A,Se,S resultSeq([A|Se],S) = resultSeq(Se,result(A,S))
resultSeq([a,b,c],s0) is result(c,result(b,result(a,s0)))
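A minimal Python sketch of this recursion (not from the slides), with situations represented as nested result terms; the names result and result_seq simply mirror the slide's notation and are not part of any library:

```python
# Sketch: situations as nested "result" terms; names are illustrative.
def result(action, situation):
    """The situation term obtained by doing `action` in `situation`."""
    return ("result", action, situation)

def result_seq(actions, situation):
    """resultSeq([],S) = S;  resultSeq([A|Rest],S) = resultSeq(Rest, result(A,S))."""
    for a in actions:
        situation = result(a, situation)
    return situation

# result_seq(["a", "b", "c"], "s0") == result("c", result("b", result("a", "s0")))
```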

8 Modified Wumpus World
Fluent predicates: at(O,X,S) and holding(O,S)
–In our simple world, only the agent can hold a piece of gold, so for simplicity, only the gold and the situation are arguments of holding
Initial situation: at(agent,[1,1],s0) ^ at(g1,[1,2],s0)
But we also want the initial-situation description to exclude other possibilities…

9 Initial KB
All O,X (at(O,X,s0) ↔ [(O=agent ^ X = [1,1]) v (O=g1 ^ X = [1,2])])
All O ~holding(O,s0)
Eternals: –gold(g1) ^ adjacent([1,1],[1,2]) ^ adjacent([1,2],[1,1]) etc.

10 Goal: g1 is in [1,1]
Planning by answering the query: Exists S at(g1,[1,1],resultSeq(S,s0))
Solution: at(g1,[1,1],resultSeq([go([1,1],[1,2]),grab(g1),go([1,2],[1,1])],s0))
Let’s look at what has to go in the KB for such queries to be answered...

11 Possibility and Effect Axioms
Possibility axioms: –Preconditions → poss(A,S)
Effect axioms: –poss(A,S) → changes that result from that action

12 Axioms for our Wumpus World
For brevity, we will omit universal quantifiers that range over the entire sentence. S ranges over situations, A ranges over actions, O over objects (including agents), G over gold, and X,Y,Z over locations.

13 Possibility Axioms
The possibility axioms state that an agent can
–go between adjacent locations,
–grab a piece of gold in the current location, and
–release gold it is holding

14 Effect Axioms If an action is possible, then certain fluents will hold in the situation that results from executing the action –Going from X to Y results in being at Y –Grabbing the gold results in holding the gold –Releasing the gold results in not holding it

15 Frame Problem We run into the frame problem Effect axioms say what changes, but don’t say what stays the same A real problem, because (in a non-toy domain), each action affects only a tiny fraction of all fluents

16 Frame Problem (continued)
One solution approach is writing explicit frame axioms, such as:
(at(O,X,S) ^ ~(O=agent) ^ ~holding(O,S)) → at(O,X,result(go(Y,Z),S))
If something is at X in S, and it is not the agent, and it is not something the agent holds, then it is still at X after the agent moves somewhere.
F fluents and A actions: O(AF) axioms needed
We can do something more efficient than this

17 Frame Problem What stays the same? A actions, F fluents, and E effects/action (worst case). Typically, E << F That is, the effects of an action are typically only a small set of all the things that could change Want O(AE) versus O(AF) solution

18 “Solving” the Frame Problem
For each fluent, have successor-state axioms:
Action is possible → (fluent is true in result state ↔ action’s effect made it true v it was true before and action left it alone)
Each of the E effects of each of the A actions is mentioned exactly once, so O(AE) axioms needed
Note: we will return to this point later, after going through the wumpus world example
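One common way to write this schema compactly is Reiter's formulation; the γ notation below is added here for illustration and does not appear on the slides:

```latex
\mathrm{Poss}(a,s) \;\Rightarrow\;
\bigl[\, F(\mathit{Result}(a,s)) \;\Leftrightarrow\;
  \gamma^{+}_{F}(a,s) \;\lor\; \bigl(F(s) \land \lnot\,\gamma^{-}_{F}(a,s)\bigr) \,\bigr]
```

Here γ⁺_F(a,s) describes the actions that make F true (add effects) and γ⁻_F(a,s) the actions that make it false (delete effects).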

19 Initial KB (reminder)
All O,X (at(O,X,s0) ↔ [(O=agent ^ X = [1,1]) v (O=g1 ^ X = [1,2])])
All O ~holding(O,s0)
Eternals: –gold(g1) ^ adjacent([1,1],[1,2]) ^ adjacent([1,2],[1,1]).
Trace through reasoning so far on board; state space handed out
At this point, we switch to variables being lowercase and constants uppercase, following the text

20 4-5 are Successor-State Axioms
1. At(Agent,x,s) ∧ Adjacent(x,y) ⇒ Poss(Go(x,y),s)
2. Gold(g) ∧ At(Agent,x,s) ∧ At(g,x,s) ⇒ Poss(Grab(g),s)
3. Holding(g,s) ⇒ Poss(Release(g),s)
4. Poss(a,s) ⇒ (Holding(g,Result(a,s)) ⇔ (a = Grab(g) v (Holding(g,s) ∧ a ≠ Release(g))))
5. Poss(a,s) ⇒ (At(o,y,Result(a,s)) ⇔ ((a = Go(x,y) ∧ (o = Agent v Holding(o,s))) v (At(o,y,s) ∧ ¬(∃z  y ≠ z ∧ a = Go(y,z) ∧ (o = Agent v Holding(o,s))))))

21 More explicit version; replaced existential with universal in 5
1. All x,y,s ((At(Agent,x,s) ∧ Adjacent(x,y)) ⇒ Poss(Go(x,y),s))
2. All g,x,s ((Gold(g) ∧ At(Agent,x,s) ∧ At(g,x,s)) ⇒ Poss(Grab(g),s))
3. All g,s (Holding(g,s) ⇒ Poss(Release(g),s))
4. All a,s,g (Poss(a,s) ⇒ (Holding(g,Result(a,s)) ⇔ (a = Grab(g) v (Holding(g,s) ∧ a ≠ Release(g)))))
5. All a,s,o,y,z (Poss(a,s) ⇒ (At(o,y,Result(a,s)) ⇔ ((a = Go(x,y) ∧ (o = Agent v Holding(o,s))) v (At(o,y,s) ∧ ¬(a = Go(y,z) ∧ y ≠ z ∧ (o = Agent v Holding(o,s)))))))
–Justification: the previous version of 5 has ¬(∃z …); the change is justified because this is equivalent to All z ¬…

22 Same as previous, but without comments
1. All x,y,s ((At(Agent,x,s) ∧ Adjacent(x,y)) ⇒ Poss(Go(x,y),s))
2. All g,x,s ((Gold(g) ∧ At(Agent,x,s) ∧ At(g,x,s)) ⇒ Poss(Grab(g),s))
3. All g,s (Holding(g,s) ⇒ Poss(Release(g),s))
4. All a,s,g (Poss(a,s) ⇒ (Holding(g,Result(a,s)) ⇔ (a = Grab(g) v (Holding(g,s) ∧ a ≠ Release(g)))))
5. All a,s,o,y,z (Poss(a,s) ⇒ (At(o,y,Result(a,s)) ⇔ ((a = Go(x,y) ∧ (o = Agent v Holding(o,s))) v (At(o,y,s) ∧ ¬(a = Go(y,z) ∧ y ≠ z ∧ (o = Agent v Holding(o,s)))))))
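To make the procedural reading of these axioms concrete, here is a small Python sketch (not from the slides) that encodes the possibility and successor-state axioms for the two-cell gold world and does planning as breadth-first search over action sequences. The state encoding and all function names (poss, result, plan) are illustrative assumptions, not a prescribed implementation:

```python
# Sketch only: a procedural reading of the axioms above for the two-cell gold world.
from collections import deque

ADJACENT = {((1, 1), (1, 2)), ((1, 2), (1, 1))}   # eternal facts
GOLD = {"g1"}

def initial_state():
    # A situation is summarized by the set of fluents that hold in it.
    return frozenset({("at", "agent", (1, 1)), ("at", "g1", (1, 2))})

def agent_location(state):
    return next(f[2] for f in state if f[0] == "at" and f[1] == "agent")

def poss(action, state):
    """Possibility axioms 1-3."""
    if action[0] == "go":
        _, x, y = action
        return ("at", "agent", x) in state and (x, y) in ADJACENT
    if action[0] == "grab":
        g = action[1]
        return g in GOLD and ("at", g, agent_location(state)) in state
    if action[0] == "release":
        return ("holding", action[1]) in state
    return False

def result(action, state):
    """Successor-state axioms 4-5, applied fluent by fluent."""
    new = set()
    # Axiom 4: holding(g) afterwards iff a = grab(g), or held before and a != release(g)
    for g in GOLD:
        if action == ("grab", g) or (("holding", g) in state and action != ("release", g)):
            new.add(("holding", g))
    # Axiom 5: at(o,y) afterwards iff a = go(x,y) and (o is the agent or is held),
    #          or at(o,y) held before and the action did not move o away
    for f in state:
        if f[0] != "at":
            continue
        o, loc = f[1], f[2]
        moved = action[0] == "go" and (o == "agent" or ("holding", o) in state)
        new.add(("at", o, action[2] if moved else loc))
    return frozenset(new)

def plan(goal_fluent, state, actions):
    """Planning as search: breadth-first search over applicable action sequences."""
    frontier, seen = deque([(state, [])]), {state}
    while frontier:
        s, seq = frontier.popleft()
        if goal_fluent in s:
            return seq
        for a in actions:
            if poss(a, s):
                s2 = result(a, s)
                if s2 not in seen:
                    seen.add(s2)
                    frontier.append((s2, seq + [a]))
    return None

if __name__ == "__main__":
    acts = [("go", x, y) for (x, y) in ADJACENT] + [("grab", "g1"), ("release", "g1")]
    print(plan(("at", "g1", (1, 1)), initial_state(), acts))
    # [('go', (1, 1), (1, 2)), ('grab', 'g1'), ('go', (1, 2), (1, 1))]
```

Running it returns the same three-step plan that the resultSeq query on the earlier goal slide describes.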

A return to complexity Each of the E effects of each of the A actions is mentioned exactly once, so O(AE) axioms needed Notes: –an effect may be to make a fluent true (add it) or to make it false (delete it) –Counting axioms is a bit arbitrary, since a single axiom may mention a disjunction of add effects and/or a disjunction of delete effects (see the holding axiom for the blocks world) –It is true that each of the add or delete effects of each action is mentioned once 23

24 A return to complexity
Each of the E effects of each of the A actions is mentioned exactly once, so O(AE) total action mentions are needed
Another schema for successor-state axioms:
Action is possible → (fluent is true in result state ↔ (action is one of the actions that makes it true (add effect) v (it was true before and action is not one of the actions that makes it false (delete effect))))
Thus, for each fluent (possible effect), we mention exactly once each action that has it as an effect (either making it true or making it false)
Or, for each action, it is mentioned exactly once in a successor-state axiom for each of its effects

Fall 2014 In class exercise – the blocks world [handout] 25

26 Qualification Problem Ensuring that all necessary conditions for an action’s success have been specified. No complete solution in logic. KR/planning designers have to decide how much detail to go into.

27 What did we see?
A sophisticated KR scheme
Important problems in planning:
–Addressed by successor-state axioms (Reiter 1991): the Frame Problem (what stays the same?) and the Ramification Problem (implicit effects, such as that the gold moves too if the agent moves while holding it)
–Not addressed completely in logic: the Qualification Problem
Concepts for planning, such as fluents and situations
Planning as search

Stopped here, Fall

29 Semantic Networks Graphical aids for visualizing the knowledge base Efficient algorithms for inferring properties based on category membership Often, correspond to a subset of first-order logic Many variants All distinguish among individual objects, categories of objects and relations among objects

30 Example See figure 12.5 (next slide) Specify what edges and nodes mean In Figure 12.5, indivs and categories look the same memberOf(indiv,category) sisterOf(indiv,indiv) subsetOf(category,category) hasMother(indiv,indiv)

31 [Figure 12.5 from the text: the semantic-network example with individuals, categories, and labeled links]

32 Semantic Networks
Is hasMother(persons,femalePersons) consistent with the representation? Nope: hasMother is a relation between individuals.
cat1 –label→ cat2 means: all X (X in cat1 → (all Y (label(X,Y) → Y in cat2)))
(Note: this does not say that each person has a mother)

33 Semantic Networks
cat –label→ value means: All X (X in cat → label(X,value))

34 Inheritance Inheritance is efficient and convenient Trace paths from individuals to categories, inheriting properties as you go In Figure 12.5, how many legs does John have? Most specific (nearest) information wins
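A minimal Python sketch of this inheritance walk; the toy network below (names, leg counts) is a hypothetical stand-in loosely modeled on Figure 12.5 rather than a transcription of it:

```python
# Sketch: inheritance by walking memberOf/subsetOf links upward.
SUBSET_OF = {"femalePersons": "persons", "malePersons": "persons", "persons": "mammals"}
MEMBER_OF = {"mary": "femalePersons", "john": "malePersons"}
# Properties attached to categories or individuals (hypothetical values).
PROPS = {"mammals": {"legs": 4}, "persons": {"legs": 2}, "john": {"legs": 1}}

def lookup(node, prop):
    """Trace the path from an individual up through its categories;
    the most specific (nearest) value for `prop` wins."""
    while node is not None:
        if prop in PROPS.get(node, {}):
            return PROPS[node][prop]
        node = MEMBER_OF.get(node) or SUBSET_OF.get(node)   # climb one level
    return None

print(lookup("john", "legs"))  # 1: John's own value overrides persons/mammals
print(lookup("mary", "legs"))  # 2: inherited from persons, not mammals
```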

35 Semantic Networks In this type of semantic network, only binary relations are possible A richer representation is possible by reifying propositions and events (example: SNePS) This forces creation of a rich ontology of reified concepts; many current ideas originated in semantic network systems

36 Description Logics This won’t be tested on the exam, but I want you to know what description logics are Subset of full first order logic; a family of logics of increasing expressiveness; well studied; most are decidable; good link between theory and practice.

We stopped here Fall

Proof methods
Proof methods divide into (roughly) two kinds:
Application of inference rules: legitimate (sound) generation of new sentences from old. –Resolution –Forward & backward chaining
Model checking: searching through truth assignments. Improved backtracking: Davis-Putnam-Logemann-Loveland (DPLL). Heuristic search in model space: WalkSAT.

Model Checking Two families of efficient algorithms: Complete backtracking search algorithms: DPLL algorithm. You read this on your own for the midterm. Incomplete local search algorithms –WalkSAT algorithm

The DPLL algorithm
Determine if an input propositional logic sentence (in CNF) is satisfiable. This is just backtracking search for a CSP. Improvements:
1. Early termination: a clause is true if any literal is true; a sentence is false if any clause is false.
2. Pure symbol heuristic: a pure symbol always appears with the same "sign" in all clauses. E.g., in the three clauses (A v ¬B), (¬B v ¬C), (C v A), A and B are pure, C is impure. Make a pure symbol literal true.
3. Unit clause heuristic: a unit clause has only one literal, and that literal must be true.
Note: literals can become a pure symbol or a unit clause when other literals obtain truth values. e.g.
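A compact Python sketch of DPLL with these three improvements; the clause encoding (signed integers) and the helper names are my own choices, not from the text:

```python
# Sketch: DPLL with early termination, pure-symbol, and unit-clause heuristics.
def clause_status(clause, model):
    """True if satisfied, False if falsified, None if undetermined."""
    undetermined = False
    for lit in clause:
        val = model.get(abs(lit))
        if val is None:
            undetermined = True
        elif val == (lit > 0):
            return True                      # early termination: a literal is true
    return None if undetermined else False

def find_pure_symbol(symbols, clauses, model):
    signs = {}
    for clause in clauses:
        if clause_status(clause, model) is True:
            continue                          # ignore already-satisfied clauses
        for lit in clause:
            if abs(lit) in symbols:
                signs.setdefault(abs(lit), set()).add(lit > 0)
    for s, sg in signs.items():
        if len(sg) == 1:
            return s, sg.pop()
    return None, None

def find_unit_clause(clauses, model):
    for clause in clauses:
        unassigned = [l for l in clause if abs(l) not in model]
        if len(unassigned) == 1 and clause_status(clause, model) is None:
            return abs(unassigned[0]), unassigned[0] > 0
    return None, None

def dpll(clauses, symbols, model):
    statuses = [clause_status(c, model) for c in clauses]
    if all(s is True for s in statuses):
        return model                          # every clause satisfied
    if any(s is False for s in statuses):
        return None                           # some clause falsified: backtrack
    p, value = find_pure_symbol(symbols, clauses, model)
    if p is None:
        p, value = find_unit_clause(clauses, model)
    if p is None:
        p = next(iter(symbols))               # plain branching on some symbol
        for value in (True, False):
            m = dpll(clauses, symbols - {p}, {**model, p: value})
            if m is not None:
                return m
        return None
    return dpll(clauses, symbols - {p}, {**model, p: value})

def dpll_satisfiable(clauses):
    """clauses: list of sets of ints; +i means symbol i, -i its negation."""
    symbols = {abs(l) for c in clauses for l in c}
    return dpll(clauses, symbols, {})

# The clauses from the slide: (A v ~B), (~B v ~C), (C v A) with A=1, B=2, C=3
print(dpll_satisfiable([{1, -2}, {-2, -3}, {3, 1}]))
```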

The WalkSAT algorithm Incomplete, local search algorithm Evaluation function: The min-conflict heuristic of minimizing the number of unsatisfied clauses Balance between greediness and randomness See figure 7.18 (on your own)
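A matching Python sketch of WalkSAT based on the description above, using the same clause encoding as the DPLL sketch; the p and max_flips defaults are arbitrary illustrative values:

```python
# Sketch: WalkSAT as incomplete local search over complete assignments.
import random

def walksat(clauses, p=0.5, max_flips=10_000):
    symbols = {abs(l) for c in clauses for l in c}
    model = {s: random.choice([True, False]) for s in symbols}

    def satisfied(clause):
        return any(model[abs(l)] == (l > 0) for l in clause)

    def num_unsat():
        return sum(not satisfied(c) for c in clauses)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return model                                  # a satisfying model
        clause = random.choice(unsat)                     # pick an unsatisfied clause
        if random.random() < p:
            sym = abs(random.choice(sorted(clause)))      # random-walk flip
        else:
            # Greedy (min-conflict) flip: the symbol whose flip leaves
            # the fewest unsatisfied clauses.
            def score(s):
                model[s] = not model[s]
                n = num_unsat()
                model[s] = not model[s]
                return n
            sym = min({abs(l) for l in clause}, key=score)
        model[sym] = not model[sym]
    return None                                           # failure (may still be satisfiable)

# Example with the clauses from the DPLL slide:
# print(walksat([{1, -2}, {-2, -3}, {3, 1}]))
```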

WrapUp: Situation Calculus Planning in propositional logic: Section 7.7 through Section Handouts 42

WrapUp: Other topics in KR Semantic Networks: [not covered Fall 2014] 43