
1
First-order logic (FOL): a more expressive KR language that can handle facts, objects, relations between objects, and object properties. Consider the statement "Squares neighboring the wumpus are smelly". In PL, representing this statement requires 16 sentences (for a 4x4 grid) of the form S[i,j] => W[i-1,j] v W[i,j-1] v W[i+1,j] v W[i,j+1], because the only entities PL can represent are facts like "There is a wumpus in [1,3]". In FOL, we can represent facts, objects, object properties, and relations between objects. The statement above refers to two objects, wumpus and square, where the square has the property of being smelly, and the relationship between the wumpus and the square is that of being next to each other (neighboring). In FOL, this statement is represented by means of the following formula: ∀square, neighboring(square, wumpus) => smelly(square)

2
FOL syntax Given
–for each integer N > 0, a set of predicates of arity N (N is the number of arguments of a predicate),
–for each integer N > 0, a set of functions of arity N,
–a set of constants, and
–a set of variables,
we define
–the set of all terms to include (i) all constants and variables, and (ii) all applications f(t1,..., tn) of a function f of arity N to terms t1,..., tn;
–the set of all atomic sentences to be of the form p(t1,..., tn), where p is a predicate of arity N, and t1,..., tn are terms.
A sentence (or wff) in FOL is:
–each atomic sentence;
–A v B, A & B, not A, A => B, A <=> B, where A and B are sentences;
–∀x1... ∀xn A and ∃x1... ∃xn A, where A is a sentence;
–True and False are sentences.
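The grammar above can be mirrored directly in code. Below is a minimal, illustrative sketch (the class names are invented, not from the slides) representing terms and atomic sentences as Python data structures, with a check for ground terms as defined on the next slides:

```python
# Illustrative sketch of FOL syntax as Python data structures:
# terms are constants, variables, or function applications f(t1,...,tn);
# atomic sentences apply a predicate to a tuple of terms.
from dataclasses import dataclass

@dataclass(frozen=True)
class Const:
    name: str

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Func:          # f(t1, ..., tn)
    name: str
    args: tuple

@dataclass(frozen=True)
class Atom:          # p(t1, ..., tn)
    pred: str
    args: tuple

def is_ground(term):
    """A term is ground if it contains no variables."""
    if isinstance(term, Var):
        return False
    if isinstance(term, Func):
        return all(is_ground(a) for a in term.args)
    return True   # constants are always ground

# mother-of(Bob) is a ground term; mother-of(X) contains a variable
print(is_ground(Func("mother-of", (Const("Bob"),))))   # True
print(is_ground(Func("mother-of", (Var("X"),))))       # False
```

This recursive structure is exactly the inductive definition of terms given above: constants and variables are the base cases, and function applications recurse on their arguments.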

3
Examples of FOL formulas The statement "Evil King John ruled England in 1200" consists of:
objects: John, England, 1200
properties: evil, king
relation: ruled
Its FOL representation is: king(John) & evil(John) & ruled(John, England, 1200)
Sentences in NL can be represented by means of different formulas. Example: "Everyone loves his/her mother"
objects: X ("everyone"), Y ("mother")
function: mother-of
relations: loves, mother-of
representation 1: ∀X, ∀Y mother-of(X, Y) => loves(X, Y)
representation 2: ∀X loves(X, mother-of(X))

4
More examples In the following statements, X and Y are variables ranging over objects such as mushroom1 and mushroom2, and mushroom(X), purple(X), and poisonous(X) are predicates.
–"All purple mushrooms are poisonous" ∀X, (mushroom(X) & purple(X)) => poisonous(X)
–"No purple mushroom is poisonous" ∀X, (mushroom(X) & purple(X)) => ¬poisonous(X)
–"All mushrooms are either purple or poisonous" ∀X, (mushroom(X) => purple(X) v poisonous(X))
–"All mushrooms are either purple or poisonous, but not both" ∀X, (mushroom(X) => ((purple(X) & ¬poisonous(X)) v (¬purple(X) & poisonous(X))))
–"All purple mushrooms, except for one, are poisonous" ∃X, (mushroom(X) & purple(X) & ¬poisonous(X)) & (∀Y, (mushroom(Y) & purple(Y) & ¬equal(X, Y)) => poisonous(Y))
–"There are exactly two purple mushrooms" ∃X, ∃Y (mushroom(X) & purple(X) & mushroom(Y) & purple(Y) & ¬equal(X, Y) & (∀Z, (mushroom(Z) & purple(Z)) => (equal(X, Z) v equal(Y, Z))))
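Over a finite domain, quantified formulas like these can be checked mechanically: ∀ becomes `all()` and ∃ becomes `any()`. A hedged sketch (the toy domain and object names are invented for illustration):

```python
# Finite-domain model checking of two of the mushroom formulas.
# The domain and the extensions of the predicates are invented examples.
domain = ["m1", "m2", "stone1"]
mushroom = {"m1", "m2"}
purple = {"m1", "m2"}
poisonous = {"m1"}

# "All purple mushrooms are poisonous":
# ∀X, (mushroom(X) & purple(X)) => poisonous(X)
all_purple_poisonous = all(
    not (x in mushroom and x in purple) or x in poisonous
    for x in domain)

# "There are exactly two purple mushrooms":
# ∃X, ∃Y (... & ¬equal(X,Y) & ∀Z (... => equal(X,Z) v equal(Y,Z)))
exactly_two = any(
    x != y and x in mushroom and x in purple
    and y in mushroom and y in purple
    and all(z in (x, y) for z in domain if z in mushroom and z in purple)
    for x in domain for y in domain)

print(all_purple_poisonous)  # False: m2 is purple but not poisonous
print(exactly_two)           # True: m1 and m2
```

This only works because the domain is finite and fully known; general FOL inference, covered later, cannot simply enumerate the domain.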

5
More about terms Terms are logical expressions that refer to objects. If a term contains no variables, it is called a ground term. If a ground term contains a function symbol, it is called a complex term. Examples: mushroom1 and mushroom2 are ground terms; mother-of(Bob) is a complex term; mother-of(X) is a term, but not a ground term. Semantics of terms: an interpretation maps each function symbol to a function over objects in the domain, so a complex term refers to the object obtained by applying that function to the objects its argument terms refer to.

6
More about atomic sentences Atomic sentences state facts in the domain. Examples: mushroom(x), brother(bob, anna), married(father-of(bob), mother-of(bob)). Semantics of atomic sentences: an atomic sentence is true if, under the given interpretation, the relation referred to by the predicate symbol holds between the objects referred to by its arguments.

7
More about complex sentences Complex sentences represent connected facts in the domain. Examples: –¬brother(john, anna) –purple(X) & poisonous(X) –smelly(X) => next(X, wumpus) The semantics of complex sentences is the same as in PL (except for sentences containing quantifiers).

8
More about quantifiers Quantifiers allow reference to sets of objects.
The universal quantifier, ∀, is like a conjunction with a possibly infinite number of conjuncts. It allows us to represent sentences like:
–"All children have parents" ∀X, (child(X) => has-parent(X))
–"In all squares adjacent to the pit, the agent feels breeze" ∀X,Y (next(X, Y) & pit(X) & in(agent, Y) => breeze(Y))
The existential quantifier, ∃, allows us to refer to a particular element (object) from the domain. Example:
–"There exists a child who has a cat named Sally" ∃X (child(X) & has-a-cat(X, name(Sally)))
Quantification in FOL is always over objects in the domain, and not over functions or predicates.

9
Equivalences involving quantifiers Formulas containing an existential quantifier can be converted to formulas involving universal quantifiers, and vice versa, as follows:
(∀X, A) <=> ¬(∃X, ¬A)
(∃X, A) <=> ¬(∀X, ¬A)
Examples:
"Everybody likes ice-cream" = "There is no such person who does not like ice-cream": ∀X (likes(X, ice-cream)) <=> ¬∃X (¬likes(X, ice-cream))
"Someone likes beer" = "Not everybody does not like beer": ∃X (likes(X, beer)) <=> ¬∀X (¬likes(X, beer))
We can nest quantifiers to represent sentences like:
"Everybody loves some sport" ===> ∀X ∃Y (loves(X, Y))
"There is some food which everybody likes" ===> ∃Y ∀X (likes(X, Y))
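These duality laws can be spot-checked on a finite domain, where ∀ is `all()` and ∃ is `any()`. A small illustrative sketch (the people and their preferences are invented data):

```python
# Finite-domain check that ∀X A  is equivalent to  ¬∃X ¬A,
# and ∃X A to ¬∀X ¬A. The domain and predicates are toy examples.
people = ["ana", "bob", "carl"]
likes_ice_cream = {"ana": True, "bob": True, "carl": True}
likes_beer = {"ana": False, "bob": True, "carl": False}

# ∀X likes(X, ice-cream)  vs  ¬∃X ¬likes(X, ice-cream)
forall = all(likes_ice_cream[x] for x in people)
not_exists_not = not any(not likes_ice_cream[x] for x in people)
print(forall == not_exists_not)   # True

# ∃X likes(X, beer)  vs  ¬∀X ¬likes(X, beer)
exists = any(likes_beer[x] for x in people)
not_forall_not = not all(not likes_beer[x] for x in people)
print(exists == not_forall_not)   # True
```

The two comparisons print True for any assignment of the predicates, since the equivalences are logical laws, not facts about this particular data.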

10
Equality in first-order logic In FOL, atomic sentences can be built by using "=" to indicate that two terms refer to the same object. For example: president(USA) = Barack_Obama. Equality can be viewed as a predicate symbol with a predefined meaning of the identity relation, i.e. the set of all pairs of objects in which both elements of each pair are the same object, e.g. {..., (John, John), (England, England), ...}.

11
The kinship domain example (AIMA, p. 254) Consider the "kinship" domain ("domain" is typically used to refer to a limited world that we want to formally represent).
–Objects in this domain are people.
–A property of people is gender.
–Relationships between people are parenthood, brotherhood, etc.
–To represent objects, object properties and relationships between objects, we will use the following predicates and functions: unary predicates: male, female; binary predicates: parent, sibling, brother, sister, etc.; functions: mother, father.
"One's mother is one's female parent" ∀m, c mother(c) = m <=> female(m) & parent(m, c)
"One's husband is one's male spouse" ∀w, h husband(h, w) <=> male(h) & spouse(h, w)
"Male and female are disjoint categories" ∀x male(x) <=> ¬female(x)

12
Example (cont.)
"Parent and child are inverse relationships" ∀p, c parent(p, c) <=> child(c, p)
"A grandparent is a parent of one's parent" ∀g, c grandparent(g, c) <=> ∃p (parent(g, p) & parent(p, c))
"A sibling is another child of one's parents" ∀x, y sibling(x, y) <=> ¬(x = y) & ∃p (parent(p, x) & parent(p, y))
The representation of any domain starts with deciding on the set of basic predicates in terms of which all other predicates can be defined. In the kinship domain, the set of basic predicates includes child, spouse, male, female, and parent. The ultimate goal is to have the set of basic predicates describing the domain fully specified.
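The idea of defining derived predicates from the basic predicate parent can be sketched on a small invented family (the names are illustrative, not from the slides):

```python
# Derived kinship predicates defined from the basic parent relation,
# over an invented family: anna -> bob -> {carl, dora}.
parent = {("anna", "bob"), ("bob", "carl"), ("bob", "dora")}
people = {p for pair in parent for p in pair}

def grandparent(g, c):
    # grandparent(g, c) <=> ∃p (parent(g, p) & parent(p, c))
    return any((g, p) in parent and (p, c) in parent for p in people)

def sibling(x, y):
    # sibling(x, y) <=> ¬(x = y) & ∃p (parent(p, x) & parent(p, y))
    return x != y and any((p, x) in parent and (p, y) in parent
                          for p in people)

print(grandparent("anna", "carl"))  # True: anna -> bob -> carl
print(sibling("carl", "dora"))      # True: both children of bob
```

The existential quantifier in each definition becomes a search over candidate intermediate people, which is exactly what `any()` does here.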

13
Answering queries in FOL Consider a knowledge base representing the "kinship" domain. All of the above formulas are axioms. More axioms are, for example, parent(Anna, Bob) and parent(Bob, Carl). We want our KB to answer queries by deriving other sentences from the initial set of axioms, such as "Is Anna a grandparent of Carl?" The set of axioms can be divided into:
–independent axioms, i.e. axioms that cannot be derived from other axioms, and
–redundant axioms, i.e. axioms that can be derived from other axioms, but having them in the KB makes the inference process more efficient.
Some of the axioms provide predicate definitions by showing for what objects a given predicate holds. For example, assuming that there are only 3 red objects in the domain, the definition of the predicate red is the following: ∀x, red(x) <=> (x = apple-1) v (x = pen-4) v (x = ball-2)

14
Answering queries (cont.) Not all predicates can be fully characterized; we may not have enough information to do this. For example, person(x) may be very hard to define if we do not have a complete specification for each person in the domain. This problem can be solved by providing partial definitions for such predicates. Example: instead of defining person(x) by means of ∀x person(x) <=> ..., we can enumerate partial specifications of the form ∀x ... => person(x) and ∀x person(x) => ... To construct proofs in FOL, we need a formal system consisting of axioms and inference rules. Consider all PL rules, plus the following one, called the universal elimination rule: from ∀x, A infer A', where all occurrences of x in A are substituted with a ground term t. Example: from ∀x likes(x, ice-cream), with the substitution {x / Ana}, infer likes(Ana, ice-cream).
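The substitution step at the heart of universal elimination can be sketched in a few lines. This is an illustrative toy (the tuple representation of atoms is invented), not a full inference engine:

```python
# Universal elimination as substitution: apply a {variable: term}
# binding to an atom represented as a tuple ('pred', arg1, arg2, ...).
def substitute(atom, binding):
    """Replace every argument that appears in the binding with its term."""
    pred, *args = atom
    return (pred, *[binding.get(a, a) for a in args])

# From ∀x likes(x, ice-cream), with {x / Ana}, infer likes(Ana, ice-cream)
print(substitute(("likes", "x", "ice-cream"), {"x": "Ana"}))
# ('likes', 'Ana', 'ice-cream')
```

Arguments not mentioned in the binding (here the constant ice-cream) pass through unchanged, which is what `binding.get(a, a)` expresses.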

15
Example Consider a robot cleaning a lobby, which is unable to do its job if someone is in the lobby. Assume the robot knows the following:
axiom 1: person(Fred)
axiom 2: location(Fred, lobby)
axiom 3: ∀x,y (person(x) & location(x, y)) => occupied(y)
Can the robot do its job? If occupied(lobby) can be inferred from axioms 1 - 3, the answer must be "no". The inference process is carried out as follows:
From 1, 3 and the universal elimination rule with {x / Fred}, we get
axiom 4: ∀y (person(Fred) & location(Fred, y)) => occupied(y)
From 2, 4 and the universal elimination rule with {y / lobby}, we get
axiom 5: (person(Fred) & location(Fred, lobby)) => occupied(lobby)
From 1, 2, 5, the And-introduction and the MP rules, we get
axiom 6: occupied(lobby)
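The derivation above can be sketched as code. The sketch below hard-codes axiom 3 as a Python function rather than implementing a general prover; the fact representation is an invented convention:

```python
# Hedged sketch of the lobby inference: find every binding {x/.., y/..}
# that satisfies both premises of axiom 3, and add the conclusion.
facts = {("person", "Fred"), ("location", "Fred", "lobby")}

def apply_rule(facts):
    """Axiom 3: ∀x,y (person(x) & location(x,y)) => occupied(y)."""
    derived = set(facts)
    for (_, x) in (f for f in facts if f[0] == "person"):
        for (_, x2, y) in (f for f in facts if f[0] == "location"):
            if x == x2:   # both premises hold under this binding
                derived.add(("occupied", y))
    return derived

# occupied(lobby) is derivable, so the robot cannot clean yet
print(("occupied", "lobby") in apply_rule(facts))   # True
```

The nested loops play the role of the two universal-elimination steps (binding x to Fred, then y to lobby), and adding the conclusion plays the role of And-introduction plus modus ponens.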

16
FOL-based simple reflex agent for the wumpus world A reflex agent has rules directly connecting its percepts to actions. Assuming that time is taken into account, the following is an example percept: Percept([Stench, Breeze, Glitter, None, None], 5). If this percept is received, then the agent must grab the gold. The following formula captures the rule connecting the percept to the action Grab: ∀s,b,u,c,t Percept([s, b, Glitter, u, c], t) => Action(Grab, t). An equivalent way to represent this rule is to use an intermediate rule to interpret the percept first, and then to search for an appropriate action, i.e. ∀s,b,u,c,t Percept([s, b, Glitter, u, c], t) => At_Gold(t) and ∀t At_Gold(t) => Action(Grab, t). There are 2 problems with this representation: (1) infinite loops are possible, and (2) certain actions may never be performed if the percept sequence does not include related percepts.

17
FOL-based agents with an internal state Consider the following ways to represent internal models:
1. By enumerating declarative knowledge (facts), such as "my car is parked in front of the building", "I am carrying the gold", etc.
2. By explaining how the world evolves. This can be done by means of so-called diachronic rules, such as "it was sunny in the morning, but now it is raining", "an hour ago I was driving to the University, but now I am at the University". Note that the declarative representation of such diachronic rules requires knowledge revision, i.e. weather(sunny, morning) must be replaced by weather(rainy, noon), and at(agent, car) must be replaced by at(agent, University).
Problems with these representations: the agent can answer queries only about its present state of mind, because all knowledge about previous states is forgotten. Alternative situations cannot be examined simultaneously.

18
FOL-based agents with an internal state (cont.) For the agent to "remember" previous states or to explore alternative situations, it must keep many KBs as part of its internal model, where each KB describes a unique state of the world. The agent can explore alternative situations by switching between the KBs. Assumption-Based Truth Maintenance Systems (ATMS) provide a practical way to implement this idea. An ATMS, however, cannot explore alternative situations simultaneously, because it assumes that alternative worlds are internally consistent. The Contradiction-Tolerant Truth Maintenance System (CTMS) expands this idea to allow contradictory worlds to be explored at the same time. Another solution, which also makes it possible to keep track of previous states, is provided by the so-called situation calculus. The basic idea is to view the changing world as a sequence of situations, where each next situation results from the previous one after applying some action to it (see AIMA, page 329, fig. 10.2).

19
Example Assume that the gold is present in a given situation, and the agent grabs it. We want to state that in any further situation, the agent will have the gold:
Portable(Gold)
∀s At_Gold(s) => Present(Gold, s)
∀s, x Present(x, s) & Portable(x) => Holding(x, Result(Grab, s))
To say that the agent is not holding anything after the Release action:
∀x, s ¬Holding(x, Result(Release, s))
These are called effect axioms; they describe how the world changes. Effect axioms alone are not sufficient to keep track of whether the agent is holding the gold. In addition, we need the so-called frame axioms, which state how the world stays the same.

20
Example (cont.) The frame axioms stating that (1) if the agent is holding something, and he does not or cannot release it, then the agent will be holding it in the next state, and (2) if the agent is not holding something, and he does not or cannot grab it, then the agent will not be holding it in the next state, are:
∀a,x,s Holding(x, s) & (a != Release) => Holding(x, Result(a, s))
∀a,x,s ¬Holding(x, s) & (a != Grab v ¬(Present(x, s) & Portable(x))) => ¬Holding(x, Result(a, s))
We can combine the effect and frame axioms to completely describe the Holding predicate by means of the following successor-state axiom:
∀a,x,s Holding(x, Result(a, s)) <=> [(a = Grab & Present(x, s) & Portable(x)) v (Holding(x, s) & a != Release)]
For each predicate that changes over time, there must be a corresponding successor-state axiom.
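Because the successor-state axiom is a biconditional, it determines the next value of Holding as a function of the current state and action, which makes it easy to simulate. An illustrative sketch (the function and parameter names are invented; the logic follows the axiom on this slide):

```python
# Simulation of the Holding successor-state axiom:
# Holding(x, Result(a,s)) <=> (a = Grab & Present(x,s) & Portable(x))
#                           v (Holding(x,s) & a != Release)
def holding_next(holding, action, present, portable):
    return (action == "Grab" and present and portable) or \
           (holding and action != "Release")

h = False
h = holding_next(h, "Grab", present=True, portable=True)
print(h)    # True: the agent picked up the gold
h = holding_next(h, "Forward", present=False, portable=True)
print(h)    # True: frame case - still holding after an unrelated action
h = holding_next(h, "Release", present=False, portable=True)
print(h)    # False: released
```

The first disjunct is the effect axiom (Grab succeeds) and the second is the frame axiom (anything but Release preserves Holding); the biconditional packs both into one update rule.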

21
The WW continued: How the agent knows his location without being able to perceive it directly In the Wumpus world, the agent knows the following:
His initial location: At(Agent, [1,1], S0)
The direction he is facing: Orientation(Agent, S0) = 0
How the world is arranged, i.e. given the location and orientation, what the agent's new position will be:
∀x,y LocationToward([x,y], 0) = [x+1, y]
∀x,y LocationToward([x,y], 90) = [x, y+1]
∀x,y LocationToward([x,y], 180) = [x-1, y]
∀x,y LocationToward([x,y], 270) = [x, y-1]
From this location/orientation description, the agent can compute which square is directly ahead of him:
∀l,s At(Agent, l, s) => LocationAhead(Agent, s) = LocationToward(l, Orientation(Agent, s))
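The four LocationToward axioms are just a lookup table from orientation to coordinate offset, and LocationAhead composes that table with the agent's current location and orientation. A direct sketch (function names follow the slide; locations are tuples rather than the slide's `[x, y]` lists):

```python
# The LocationToward axioms as a lookup from orientation to next square.
def location_toward(loc, orientation):
    x, y = loc
    return {0:   (x + 1, y),    # facing east
            90:  (x, y + 1),    # facing north
            180: (x - 1, y),    # facing west
            270: (x, y - 1)}[orientation]  # facing south

def location_ahead(agent_loc, agent_orientation):
    # LocationAhead(Agent, s) = LocationToward(l, Orientation(Agent, s))
    return location_toward(agent_loc, agent_orientation)

print(location_ahead((1, 1), 0))    # (2, 1): one square east of [1,1]
print(location_ahead((1, 1), 90))   # (1, 2): one square north of [1,1]
```

Writing the axioms as a table makes it obvious that they are exhaustive and mutually exclusive over the four legal orientations.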

22
Keeping track of location (cont.) To define if two locations are adjacent to each other, we can say:
∀l1, l2 Adjacent(l1, l2) <=> ∃d (l1 = LocationToward(l2, d))
To define if the agent is at the boundaries of his world, assuming a 4 x 4 cave:
∀x, y Wall([x,y]) <=> (x = 0 v x = 5 v y = 0 v y = 5)
Now we can define the successor-state axioms for location and orientation.
Successor-state axiom for location (note that location can only be affected by the action Forward):
∀l, a, s At(Agent, l, Result(a, s)) <=> [(a = Forward & l = LocationAhead(Agent, s) & ¬Wall(l)) v (a != Forward & At(Agent, l, s))]
Successor-state axiom for orientation (note that orientation is affected by the actions Turn(Left) and Turn(Right)):
∀a, d, s Orientation(Agent, Result(a, s)) = d <=> [(a = Turn(Right) & d = Mod(Orientation(Agent, s) - 90, 360)) v (a = Turn(Left) & d = Mod(Orientation(Agent, s) + 90, 360)) v (Orientation(Agent, s) = d & ¬(a = Turn(Right) v a = Turn(Left)))]
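The two successor-state axioms together give a deterministic state-update function. A hedged sketch combining them (the `step` function is an invented name; the 4x4 cave with walls at 0 and 5 follows the slide):

```python
# Successor-state axioms for location and orientation as one update step.
def wall(loc):
    # Wall([x,y]) <=> x = 0 v x = 5 v y = 0 v y = 5 (4x4 cave)
    x, y = loc
    return x in (0, 5) or y in (0, 5)

def location_toward(loc, d):
    x, y = loc
    return {0: (x + 1, y), 90: (x, y + 1),
            180: (x - 1, y), 270: (x, y - 1)}[d]

def step(loc, orientation, action):
    """Return (location, orientation) in Result(action, s)."""
    if action == "Forward":
        ahead = location_toward(loc, orientation)
        # Forward succeeds only if the square ahead is not a wall
        return (loc if wall(ahead) else ahead), orientation
    if action == "Turn(Right)":
        return loc, (orientation - 90) % 360   # Mod(d - 90, 360)
    if action == "Turn(Left)":
        return loc, (orientation + 90) % 360   # Mod(d + 90, 360)
    return loc, orientation   # frame case: other actions change neither

loc, d = (1, 1), 0
loc, d = step(loc, d, "Forward")       # move east to (2, 1)
loc, d = step(loc, d, "Turn(Left)")    # now facing north
print(loc, d)                          # (2, 1) 90
```

Note how each branch of `step` mirrors one disjunct of the corresponding axiom, with the final `return` playing the role of the frame disjuncts.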

23
Definition of predicate Alive We must also state the successor-state axiom for the predicate Alive:
∀a, s Alive(Wumpus, Result(a, s)) <=> [Alive(Wumpus, s) & ¬(a = Shoot & Has(Agent, Arrow, s) & Facing(Agent, Wumpus, s))]
More definitions are needed:
∀a, s Has(Agent, Arrow, Result(a, s)) <=> [Has(Agent, Arrow, s) & (a != Shoot)]
Facing(Agent, Wumpus, s) can be defined in terms of the locations of the agent and the wumpus, and the orientation of the agent.

24
Deducing hidden properties of the world Assume that the agent knows where he is. Then, he can deduce properties associated with specific locations by means of the following diagnostic rules:
∀l, s At(Agent, l, s) & Breeze(s) => Breezy(l)
∀l, s At(Agent, l, s) & Stench(s) => Smelly(l)
Knowing which places are breezy and smelly, the agent can reason about safe (unsafe) locations:
∀l1, s Smelly(l1) => [∃l2 At(Wumpus, l2, s) & (l1 = l2 v Adjacent(l1, l2))]
The same reasoning can be performed with the following causal rules:
∀l1, l2, s At(Wumpus, l1, s) & Adjacent(l1, l2) => Smelly(l2)
∀l1, l2, s At(Pit, l1, s) & Adjacent(l1, l2) => Breezy(l2)
Both causal and diagnostic rules are synchronic rules, because they represent relations that hold in the same world state.
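The diagnostic rule for Smelly can be turned into a search for candidate wumpus locations: the wumpus must be at or adjacent to every known smelly square. An illustrative sketch (the helper names and the example smelly squares are invented):

```python
# From the set of smelly squares, compute where the wumpus could be:
# every smelly l1 must satisfy l1 = l2 v Adjacent(l1, l2).
def adjacent(l1, l2):
    (x1, y1), (x2, y2) = l1, l2
    return abs(x1 - x2) + abs(y1 - y2) == 1

def possible_wumpus(smelly, all_squares):
    return {l2 for l2 in all_squares
            if all(l1 == l2 or adjacent(l1, l2) for l1 in smelly)}

squares = [(x, y) for x in range(1, 5) for y in range(1, 5)]
print(possible_wumpus({(1, 2), (2, 1)}, squares))
```

With smells detected at (1, 2) and (2, 1), only (1, 1) and (2, 2) are adjacent to both, so the candidate set shrinks to those two squares; each new smelly square intersects away more candidates.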

25
Model-based reasoning KBSs that use causal rules to solve problems are called model-based reasoning systems. Causal rules are stronger than diagnostic rules. Compare the following two rules inferring OK(y):
Diagnostic rule: ∀x, y, g, u, c, s Percept([None, None, g, u, c], s) & At(Agent, x, s) & Adjacent(x, y) => OK(y)
Causal rule (note that a location can be OK even if it is breezy or smelly): ∀y, s (¬At(Wumpus, y, s) & ¬Pit(y)) <=> OK(y)
The main difference between diagnostic and model-based reasoning is that the latter requires an explicit model of the domain, not just a specification of the relations in that domain.

26
FOL goal-based agents for the Wumpus world Goal-based agents have an internal state like agents with an internal state, but they also have an explicitly stated goal, such as:
∀s Holding(Gold, s) => GoalLocation([1,1], s)
To reach the goal, the agent must produce a sequence of actions, the last of which achieves the goal. This can be done in three possible ways:
1. By applying an inference procedure.
2. By applying a search procedure (for example, best-first search with an appropriate heuristic function).
3. By applying a special-purpose planning procedure to identify the right sequence of actions to achieve the goal.
