
1 Intelligent Agents

2 Outline Agents and environments Rationality PEAS (Performance measure, Environment, Actuators, Sensors) Environment types Agent types

3 Agents An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators Human agent: – eyes, ears, and other organs for sensors; – hands, legs, mouth, and other body parts for actuators Robotic agent: – cameras and infrared range finders for sensors – various motors for actuators

4 Agents and environments The agent function maps from percept histories to actions: [f: P* → A] The agent program runs on the physical architecture to produce f. agent = architecture + program

5 Vacuum-cleaner world Percepts: location and contents, e.g., [A,Dirty] Actions: Left, Right, Suck, NoOp The agent's function can be given as a look-up table; for many agents this is a very large table. Demo: http://www.ai.sri.com/~oreilly/aima3ejava/aima3ejavademos.html
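To make the look-up-table idea concrete, here is a minimal Python sketch of a table-driven agent for the two-square vacuum world; the table entries, percept encoding, and function names are illustrative assumptions, not taken from the slides:

# Sketch of a table-driven agent: the table maps the entire percept
# sequence seen so far to an action, which is why such tables explode
# in size for any non-trivial environment (illustrative names).
TABLE = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    # ... one entry for every possible percept sequence
}

def make_table_driven_agent(table):
    percepts = []                                  # the percept history P*
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")  # unknown history -> NoOp
    return agent

agent = make_table_driven_agent(TABLE)
print(agent(("A", "Dirty")))                       # -> Suck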

6 Rational agents Rationality depends on: – the performance measure defining success – the agent's prior knowledge of the environment – the actions the agent can perform – the agent's percept sequence to date Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

7 Rationality Rationality is different from omniscience: percepts may not supply all relevant information (e.g., in a card game you don't know the other players' cards). Rationality is different from perfection: rationality maximizes expected outcome, while perfection maximizes actual outcome.

8 Autonomy in Agents Extremes: no autonomy – ignores environment/data; complete autonomy – must act randomly/no program. Example: baby learning to crawl. Ideal: design agents to have some autonomy, possibly becoming more autonomous with experience. The autonomy of an agent is the extent to which its behaviour is determined by its own experience, rather than by the knowledge built in by its designer.

9 PEAS PEAS: Performance measure, Environment, Actuators, Sensors Consider, e.g., the task of designing an automated taxi driver: – Performance measure: Safe, fast, legal, comfortable trip, maximize profits – Environment: Roads, other traffic, pedestrians, customers – Actuators: Steering wheel, accelerator, brake, signal, horn – Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard

10 PEAS Agent: Part-picking robot Performance measure: Percentage of parts in correct bins Environment: Conveyor belt with parts, bins Actuators: Jointed arm and hand Sensors: Camera, joint angle sensors

11 PEAS Agent: Interactive English tutor Performance measure: Maximize student's score on test Environment: Set of students Actuators: Screen display (exercises, suggestions, corrections) Sensors: Keyboard

12 Environment types Fully observable (vs. partially observable) Deterministic (vs. stochastic) Episodic (vs. sequential) Static (vs. dynamic) Discrete (vs. continuous) Single agent (vs. multiagent)

13 Fully observable (vs. partially observable) Is everything the agent requires to choose its actions available to it via its sensors (perfect or full information)? If so, the environment is fully accessible; if not, parts of the environment are inaccessible and the agent must make informed guesses about the world. In decision theory: perfect information vs. imperfect information. (Examples are classified in the summary table on slide 19.)

14 Deterministic (vs. stochastic) Does the next world state depend only on the current state and the agent's action? Non-deterministic environments have aspects beyond the control of the agent, so utility functions have to guess at changes in the world. (See the summary table on slide 19.)

15 Episodic (vs. sequential): Is the choice of the current action dependent on previous actions? If not, the environment is episodic. In non-episodic (sequential) environments the agent has to plan ahead: the current choice will affect future actions. (See the summary table on slide 19.)

16 Static (vs. dynamic): Static environments don't change while the agent is deliberating over what to do. Dynamic environments do change, so the agent should/could consult the world when choosing actions, anticipate the change during deliberation, or make decisions very fast. Semidynamic: the environment itself does not change with the passage of time, but the agent's performance score does. Another example: off-line route planning vs. an on-board navigation system. (See the summary table on slide 19.)

17 Discrete (vs. continuous) A limited number of distinct, clearly defined percepts and actions vs. a range of continuous values. (See the summary table on slide 19.)

18 Single agent (vs. multiagent): An agent operating by itself in an environment, vs. many agents acting in the same environment. (See the summary table on slide 19.)

19 Summary (environment types of the running examples):
Crossword: Fully observable, Deterministic, Static, Sequential, Single agent, Discrete
Backgammon: Fully observable, Stochastic, Static, Sequential, Multiagent, Discrete
Taxi driver: Partially observable, Stochastic, Dynamic, Sequential, Multiagent, Continuous
Part-picking robot: Partially observable, Stochastic, Dynamic, Episodic, Single agent, Continuous
Poker: Partially observable, Stochastic, Static, Sequential, Multiagent, Discrete
Image analysis: Fully observable, Deterministic, Semidynamic, Episodic, Single agent, Continuous

20 Choice under (Un)certainty: Fully observable? yes → Deterministic? yes → Certainty: use Search. If either answer is no → Uncertainty.

21 Agent types Four basic types in order of increasing generality: Simple reflex agents Reflex agents with state/model Goal-based agents Utility-based agents All these can be turned into learning agents. Demo: http://www.ai.sri.com/~oreilly/aima3ejava/aima3ejavademos.html

22 Simple reflex agents

23 Simple reflex agents Simple but very limited intelligence. The action does not depend on percept history, only on the current percept, so there are no memory requirements. Infinite loops: suppose the vacuum cleaner does not observe its location; what do you do given status = clean? Left on A or Right on B -> infinite loop (compare a fly buzzing around a window or light). Possible solution: randomize the action. Other examples: thermostat; chess openings and endings via a lookup table (not a good idea in general: about 35^100 entries would be required for the entire game).
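A minimal Python sketch of the two points above: a simple reflex vacuum agent that looks only at the current percept, and a randomized variant for the case where location is not observable (all names are illustrative):

import random

def reflex_vacuum_agent(percept):
    # Condition-action rules on the current percept [location, status] only.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def randomized_reflex_agent(percept):
    # If only the dirt status is observable, any fixed move rule can loop
    # forever between clean squares; a random move escapes the loop.
    (status,) = percept
    if status == "Dirty":
        return "Suck"
    return random.choice(["Left", "Right"])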

24 States: Beyond Reflexes Recall the agent function that maps from percept histories to actions: [f: P* → A] An agent program can implement an agent function by maintaining an internal state. The internal state can contain information about the state of the external environment. The state depends on the history of percepts and on the history of actions taken: [f: P* × A* → S → A] where S is the set of states. If each internal state includes all information relevant to decision making, the state space is Markovian.
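The mapping [f: P* × A* → S → A] can be sketched as an agent program that carries an internal state; update_state and rules are placeholders the designer would supply (an illustrative sketch, not the book's pseudocode):

def make_state_based_agent(update_state, rules, initial_state=None):
    # The internal state summarizes the percept and action histories (P*, A*),
    # so the agent never needs to store the full histories themselves.
    state, last_action = initial_state, None
    def agent(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)  # S built from P*, A*
        last_action = rules(state)                         # choose action from the state
        return last_action
    return agent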

25 States and Memory: Game Theory If each state includes the information about the percepts and actions that led to it, the state space has perfect recall. Perfect Information = Perfect Recall + Full Observability.

26 Goal-based agents Is knowing the state and environment enough? – The taxi can go left, right, or straight. Have a goal: a destination to get to. The agent uses knowledge about its goal to guide its actions, e.g., search, planning.

27 Goal-based agents A reflex agent brakes when it sees brake lights. A goal-based agent reasons: brake light -> car in front is stopping -> I should stop -> I should use the brake.

28 Model-based reflex agents Know how the world evolves (an overtaking car gets closer from behind) and how the agent's actions affect the world (a wheel turned clockwise takes you right). Model-based agents update their state.

29 Utility-based agents Goals are not always enough: many action sequences get the taxi to its destination, so other things must be considered, such as how fast and how safe. A utility function maps a state onto a real number which describes the associated degree of happiness, goodness, or success. Where does the utility measure come from? Economics: money. Biology: number of offspring. Your life?

30 Utility-based agents

31 Learning agents The performance element is what was previously the whole agent: input from sensors, output actions. The learning element modifies the performance element.

32 Learning agents Critic: tells how the agent is doing (input: e.g., checkmate?); it is fixed. Problem generator: tries to solve the problem differently instead of only optimizing; it suggests exploring new actions -> new problems.

33 Learning agents (Taxi driver) Performance element: how it currently drives. The taxi makes a quick left turn across 3 lanes; the critic observes the shocking language from passengers and other drivers and flags a bad action; the learning element tries to modify the performance element for the future; the problem generator suggests experimenting, e.g., with braking on different road conditions. Exploration vs. Exploitation: learning experience can be costly in the short run – shocking language from other drivers, less tip, fewer passengers.

34 Problems and Search Chapter 2

35 35 Outline State space search Search strategies Problem characteristics Design of search programs

36 36 State Space Search Problem solving = Searching for a goal state

37 37 State Space Search: Playing Chess Each position can be described by an 8-by-8 array. Initial position is the game opening position. Goal position is any position in which the opponent does not have a legal move and his or her king is under attack. Legal moves can be described by a set of rules:

38 38 State Space Search: Playing Chess State space is a set of legal positions. Starting at the initial state. Using the set of rules to move from one state to another. Attempting to end up in a goal state.

39 39 State Space Search: Summary 1.Define a state space that contains all the possible configurations of the relevant objects. 2.Specify the initial states. 3.Specify the goal states. 4.Specify a set of rules: What are unstated assumptions? How general should the rules be? How much knowledge for solutions should be in the rules?

40 40 Search Strategies Requirements of a good search strategy: 1. It causes motion Otherwise, it will never lead to a solution. 2. It is systematic Otherwise, it may use more steps than necessary. 3. It is efficient Find a good, but not necessarily the best, answer.

41 41 Search Strategies 1. Uninformed search (blind search) Having no information about the number of steps from the current state to the goal. 2. Informed search (heuristic search) More efficient than uninformed search.

42 42 Search Strategies: Blind Search Breadth-first search Expand all the nodes of one level first. Depth-first search Expand one of the nodes at the deepest level.
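The two strategies differ only in how the frontier is managed, which the following Python sketch makes explicit (the graph encoding and function names are illustrative assumptions):

from collections import deque

def blind_search(graph, start, goal, breadth_first=True):
    # Breadth-first uses a FIFO frontier; depth-first uses a LIFO frontier.
    # Returns a path from start to goal, or None if none is found.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft() if breadth_first else frontier.pop()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(blind_search(g, "A", "D"))                      # breadth-first path
print(blind_search(g, "A", "D", breadth_first=False)) # depth-first path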

43 Search Strategies: Blind Search Comparison criteria: time, space, optimality, completeness (values filled in on the next slide).

44 Search Strategies: Blind Search
Criterion: Breadth-First / Depth-First
Time: b^d / b^m
Space: b^d / b·m
Optimal?: Yes / No
Complete?: Yes / No
b: branching factor, d: solution depth, m: maximum depth

45 Search Strategies: Heuristic Search Heuristic: involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods. (Merriam-Webster's dictionary) A heuristic technique improves the efficiency of a search process, possibly by sacrificing claims of completeness or optimality.

46 Search Strategies: Heuristic Search The Travelling Salesman Problem: A salesman has a list of cities, each of which he must visit exactly once. There are direct roads between each pair of cities on the list. Find the route the salesman should follow for the shortest possible round trip that both starts and finishes at any one of the cities. (Figure: five cities A–E with pairwise road distances.)

47 47 Search Strategies: Heuristic Search Nearest neighbour heuristic: 1. Select a starting city. 2. Select the one closest to the current city. 3. Repeat step 2 until all cities have been visited.

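The nearest neighbour heuristic above translates directly into a few lines of Python; the distance table below is an illustrative placeholder, not the figure's numbers:

def nearest_neighbour_tour(dist, start):
    # Greedy tour: repeatedly visit the closest unvisited city,
    # then return to the start. Fast, but not guaranteed optimal.
    unvisited = set(dist) - {start}
    tour, current = [start], start
    while unvisited:
        current = min(unvisited, key=lambda city: dist[current][city])
        tour.append(current)
        unvisited.remove(current)
    tour.append(start)                      # close the round trip
    return tour

# Illustrative symmetric distance table (assumed values, not the slide's figure).
dist = {
    "A": {"B": 10, "C": 15, "D": 20},
    "B": {"A": 10, "C": 35, "D": 25},
    "C": {"A": 15, "B": 35, "D": 30},
    "D": {"A": 20, "B": 25, "C": 30},
}
print(nearest_neighbour_tour(dist, "A"))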

49 Search Strategies: Heuristic Search Heuristic function: maps state descriptions to measures of desirability.

50 50 Problem Characteristics To choose an appropriate method for a particular problem: Is the problem decomposable? Can solution steps be ignored or undone? Is the universe predictable? Is a good solution absolute or relative? Is the solution a state or a path? What is the role of knowledge? Does the task require human-interaction?

51 51 Is the problem decomposable? Can the problem be broken down to smaller problems to be solved independently? Decomposable problem can be solved easily.

52 Is the problem decomposable? ∫(x^2 + 3x + sin^2 x · cos^2 x) dx splits into ∫x^2 dx, ∫3x dx, and ∫sin^2 x · cos^2 x dx; the last becomes ∫(1 − cos^2 x) · cos^2 x dx = ∫cos^2 x dx − ∫cos^4 x dx.

53 53 Can solution steps be ignored or undone? Theorem Proving A lemma that has been proved can be ignored for next steps. Ignorable!

54 Can solution steps be ignored or undone? The 8-Puzzle: moves can be undone and backtracked. Recoverable! (Figure: a start configuration and the goal configuration of the 8-puzzle.)

55 55 Can solution steps be ignored or undone? Playing Chess Moves cannot be retracted. Irrecoverable!

56 56 Can solution steps be ignored or undone? Ignorable problems can be solved using a simple control structure that never backtracks. Recoverable problems can be solved using backtracking. Irrecoverable problems can be solved by recoverable style methods via planning.

57 57 Is the universe predictable? The 8-Puzzle Every time we make a move, we know exactly what will happen. Certain outcome!

58 58 Is the universe predictable? Playing Bridge We cannot know exactly where all the cards are or what the other players will do on their turns. Uncertain outcome!

59 59 Is the universe predictable? For certain-outcome problems, planning can used to generate a sequence of operators that is guaranteed to lead to a solution. For uncertain-outcome problems, a sequence of generated operators can only have a good probability of leading to a solution. Plan revision is made as the plan is carried out and the necessary feedback is provided.

60 60 Is a good solution absolute or relative? 1.Marcus was a man. 2.Marcus was a Pompeian. 3.Marcus was born in 40 A.D. 4.All men are mortal. 5.All Pompeians died when the volcano erupted in 79 A.D. 6.No mortal lives longer than 150 years. 7.It is now 2004 A.D.

61 61 Is a good solution absolute or relative? 1.Marcus was a man. 2.Marcus was a Pompeian. 3.Marcus was born in 40 A.D. 4.All men are mortal. 5.All Pompeians died when the volcano erupted in 79 A.D. 6.No mortal lives longer than 150 years. 7.It is now 2004 A.D. Is Marcus alive?

62 62 Is a good solution absolute or relative? 1.Marcus was a man. 2.Marcus was a Pompeian. 3.Marcus was born in 40 A.D. 4.All men are mortal. 5.All Pompeians died when the volcano erupted in 79 A.D. 6.No mortal lives longer than 150 years. 7.It is now 2004 A.D. Is Marcus alive? Different reasoning paths lead to the answer. It does not matter which path we follow.

63 63 Is a good solution absolute or relative? The Travelling Salesman Problem We have to try all paths to find the shortest one.

64 64 Is a good solution absolute or relative? Any-path problems can be solved using heuristics that suggest good paths to explore. For best-path problems, much more exhaustive search will be performed.

65 Is the solution a state or a path? Finding a consistent interpretation: "The bank president ate a dish of pasta salad with the fork." Does "bank" refer to a financial institution or to the side of a river? Was the dish or the pasta salad eaten? Does pasta salad contain pasta, as dog food does not contain dog? Which part of the sentence does "with the fork" modify? What if it were "with vegetables" instead? No record of the processing is necessary.

66 66 Is the solution a state or a path? A path-solution problem can be reformulated as a state-solution problem by describing a state as a partial path to a solution. The question is whether that is natural or not.

67 What is the role of knowledge? Playing chess: knowledge is important only to constrain the search for a solution. Reading a newspaper: knowledge is required even to be able to recognize a solution.

68 68 Does the task require human-interaction? Solitary problem, in which there is no intermediate communication and no demand for an explanation of the reasoning process. Conversational problem, in which intermediate communication is to provide either additional assistance to the computer or additional information to the user.

69 Constraint Satisfaction Problems

70 Outline Constraint Satisfaction Problems (CSP) Backtracking search for CSPs Local search for CSPs

71 Constraint satisfaction problems (CSPs) Standard search problem: the state is a "black box", any data structure that supports a successor function, heuristic function, and goal test. CSP: the state is defined by variables Xi with values from domain Di; the goal test is a set of constraints specifying allowable combinations of values for subsets of variables. A simple example of a formal representation language; allows useful general-purpose algorithms with more power than standard search algorithms.

72 Example: Map-Coloring Variables: WA, NT, Q, NSW, V, SA, T Domains: Di = {red, green, blue} Constraints: adjacent regions must have different colors, e.g., WA ≠ NT, or (WA,NT) in {(red,green),(red,blue),(green,red),(green,blue),(blue,red),(blue,green)}
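One minimal way to hold this CSP in code is as plain dictionaries of variables, domains, and neighbor (≠) constraints; the names below are an illustrative Python sketch that the later backtracking and propagation sketches reuse:

# Map-coloring CSP for Australia as plain data (illustrative encoding).
VARIABLES = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
DOMAINS = {v: {"red", "green", "blue"} for v in VARIABLES}
NEIGHBORS = {  # adjacency: each pair of neighbors must get different colors
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "T": [],
}

def consistent(var, value, assignment):
    # A value is allowed iff no already-assigned neighbor has the same color.
    return all(assignment.get(n) != value for n in NEIGHBORS[var])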

73 Example: Map-Coloring Solutions are complete and consistent assignments, e.g., WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = green

74 Constraint graph Binary CSP: each constraint relates two variables Constraint graph: nodes are variables, arcs are constraints

75 Varieties of CSPs Discrete variables: finite domains: n variables, domain size d, so O(d^n) complete assignments, e.g., Boolean CSPs, incl. Boolean satisfiability (NP-complete); infinite domains: integers, strings, etc., e.g., job scheduling, where variables are start/end days for each job; need a constraint language, e.g., StartJob1 + 5 ≤ StartJob3. Continuous variables: e.g., start/end times for Hubble Space Telescope observations; linear constraints are solvable in polynomial time by linear programming.

76 Varieties of constraints Unary constraints involve a single variable, e.g., SA ≠ green Binary constraints involve pairs of variables, e.g., SA ≠ WA Higher-order constraints involve 3 or more variables, e.g., cryptarithmetic column constraints

77 Example: Cryptarithmetic (TWO + TWO = FOUR) Variables: F, T, U, W, R, O, X1, X2, X3 Domains: {0,1,2,3,4,5,6,7,8,9} Constraints: Alldiff(F,T,U,W,R,O); O + O = R + 10·X1; X1 + W + W = U + 10·X2; X2 + T + T = O + 10·X3; X3 = F; T ≠ 0; F ≠ 0
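For this small puzzle the constraints can simply be checked by brute force; the Python sketch below enumerates digit assignments directly rather than using the column/carry variables X1–X3 (illustrative only):

from itertools import permutations

# Brute-force solutions of TWO + TWO = FOUR with all-different digits,
# T != 0 and F != 0 (a sketch; a CSP solver would exploit the column constraints).
for F, T, U, W, R, O in permutations(range(10), 6):
    if T == 0 or F == 0:
        continue
    two = 100 * T + 10 * W + O
    four = 1000 * F + 100 * O + 10 * U + R
    if 2 * two == four:
        print(f"{two} + {two} = {four}")   # e.g. 734 + 734 = 1468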

78 Real-world CSPs Assignment problems e.g., who teaches what class Timetabling problems e.g., which class is offered when and where? Transportation scheduling Factory scheduling Notice that many real-world problems involve real-valued variables

79 Standard search formulation (incremental) Let's start with the straightforward approach, then fix it. States are defined by the values assigned so far. Initial state: the empty assignment { }. Successor function: assign a value to an unassigned variable that does not conflict with the current assignment; fail if no legal assignments. Goal test: the current assignment is complete. 1. This is the same for all CSPs 2. Every solution appears at depth n with n variables, so use depth-first search 3. Path is irrelevant, so can also use complete-state formulation 4. b = (n - l)·d at depth l, hence n!·d^n leaves

80 Backtracking search Variable assignments are commutative, i.e., [WA = red then NT = green] is the same as [NT = green then WA = red]. Only need to consider assignments to a single variable at each node, so b = d and there are d^n leaves. Depth-first search for CSPs with single-variable assignments is called backtracking search. Backtracking search is the basic uninformed algorithm for CSPs. Can solve n-queens for n ≈ 25.

81 Backtracking search
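In the spirit of the algorithm referred to above (an illustrative reimplementation, not the figure's exact pseudocode), depth-first backtracking over single-variable assignments can be written as follows; it assumes the VARIABLES, DOMAINS, NEIGHBORS and consistent(...) definitions from the map-coloring sketch after slide 72:

def backtracking_search(assignment=None):
    # Assign one variable per level of the search tree; undo the assignment on failure.
    if assignment is None:
        assignment = {}
    if len(assignment) == len(VARIABLES):
        return assignment                                    # complete and consistent
    var = next(v for v in VARIABLES if v not in assignment)  # naive variable ordering
    for value in DOMAINS[var]:
        if consistent(var, value, assignment):
            assignment[var] = value
            result = backtracking_search(assignment)
            if result is not None:
                return result
            del assignment[var]                              # backtrack
    return None

print(backtracking_search())   # e.g. {'WA': 'red', 'NT': 'green', ...}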

82 Backtracking example (figures: the search assigns one variable per level and backtracks on conflicts, built up over slides 82–85)

86 Improving backtracking efficiency General-purpose methods can give huge gains in speed: Which variable should be assigned next? In what order should its values be tried? Can we detect inevitable failure early?

87 Most constrained variable Most constrained variable: choose the variable with the fewest legal values a.k.a. minimum remaining values (MRV) heuristic

88 Most constraining variable Tie-breaker among most constrained variables Most constraining variable: choose the variable with the most constraints on remaining variables

89 Least constraining value Given a variable, choose the least constraining value: the one that rules out the fewest values in the remaining variables Combining these heuristics makes 1000 queens feasible

90 Forward checking Idea: Keep track of remaining legal values for unassigned variables. Terminate search when any variable has no legal values. (The same idea is animated over slides 90–93.)

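The bookkeeping behind forward checking can be sketched as follows, again assuming the NEIGHBORS dictionary from the map-coloring sketch: after assigning var = value, that value is pruned from every unassigned neighbor's domain, and the search can fail early as soon as some domain becomes empty:

import copy

def forward_check(var, value, domains, assignment):
    # Returns reduced domains after assigning var = value,
    # or None if some unassigned neighbor is left with no legal value.
    new_domains = copy.deepcopy(domains)
    new_domains[var] = {value}
    for n in NEIGHBORS[var]:
        if n not in assignment:
            new_domains[n].discard(value)
            if not new_domains[n]:
                return None        # early failure detected
    return new_domains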

94 Constraint propagation Forward checking propagates information from assigned to unassigned variables, but doesn't provide early detection for all failures: NT and SA cannot both be blue! Constraint propagation repeatedly enforces constraints locally

95 Arc consistency Simplest form of propagation makes each arc consistent. X → Y is consistent iff for every value x of X there is some allowed y. If X loses a value, neighbors of X need to be rechecked. Arc consistency detects failure earlier than forward checking. Can be run as a preprocessor or after each assignment. (Built up incrementally over slides 95–98.)

99 Arc consistency algorithm AC-3 Time complexity: O(n^2 d^3)
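A Python sketch of AC-3 under the same assumptions (binary ≠ constraints between NEIGHBORS, set-valued domains); the queue holds arcs (X, Y), and whenever revising an arc removes values from X, all arcs into X are re-queued:

from collections import deque

def revise(domains, x, y):
    # For the constraint x != y: a value of x is supported iff y still has some
    # different value; drop unsupported values and report whether any were dropped.
    removed = {vx for vx in domains[x] if all(vx == vy for vy in domains[y])}
    domains[x] -= removed
    return bool(removed)

def ac3(domains):
    # Enforce arc consistency; returns False if some domain becomes empty.
    queue = deque((x, y) for x in NEIGHBORS for y in NEIGHBORS[x])
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y):
            if not domains[x]:
                return False
            for z in NEIGHBORS[x]:
                if z != y:
                    queue.append((z, x))   # recheck arcs pointing at x
    return True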

100 UNIT-2 LOGICAL REASONING Propositional and Predicate Logic

101 Motivation formal methods to perform reasoning are required when dealing with knowledge propositional logic is a simple mechanism for basic reasoning tasks it allows the description of the world via sentences simple sentences can be combined into more complex ones new sentences can be generated by inference rules applied to existing sentences predicate logic is more powerful, but also considerably more complex it is very general, and can be used to model or emulate many other methods

102 Knowledge Base Knowledge Base: a set of sentences represented in a knowledge representation language, representing assertions (statements) about the world. Inference rule: when one ASKs questions of the KB, the answer should follow from what has been TELLed to the KB previously.

103 Logical Inference also referred to as deduction validity a sentence is valid if it is true under all possible interpretations in all possible world states valid sentences are also called tautologies satisfiability a sentence is satisfiable if there is some interpretation (understanding) in some world state (a model) such that the sentence is true a sentence is satisfiable iff its negation is not valid a sentence is valid iff its negation is not satisfiable

104 Computational Inference Computers cannot reason informally (common sense): they don't know the interpretation of the sentences, and they usually don't have access to the state of the real world to check the correspondence between sentences and facts. Computers can be used to check the validity of sentences: if the sentences in a knowledge base are true, then the sentence under consideration must be true, regardless of its possible interpretations. This can be applied to rather complex sentences.

105 Computational Approaches to Inference model checking based on truth tables generate all possible models and check them for validity or satisfiability exponential complexity, NP-complete all combinations of truth values need to be considered search use inference rules as successor functions for a search algorithm also exponential, but only worst-case in practice, many problems have shorter proofs

106 Propositional Logic a relatively simple framework for reasoning can be extended for more expressiveness at the cost of computational overhead important aspects syntax semantics validity and inference models inference rules complexity

107 Syntax Symbols: logical constants True, False; propositional symbols P, Q, …; logical connectives: conjunction ∧, disjunction ∨, negation ¬, implication ⇒, equivalence ⇔; parentheses. Sentences are constructed from simple sentences via conjunction, disjunction, implication, equivalence, and negation.

108 Propositional Logic (grammar)
Sentence → AtomicSentence | ComplexSentence
AtomicSentence → True | False | P | Q | R | ...
ComplexSentence → (Sentence) | Sentence Connective Sentence | ¬Sentence
Connective → ∧ | ∨ | ⇒ | ⇔
Ambiguities are resolved through precedence or parentheses, e.g. ¬P ∨ Q ∧ R ⇒ S is equivalent to ((¬P) ∨ (Q ∧ R)) ⇒ S

109 Semantics Interpretation of the propositional symbols and constants: symbols can stand for any arbitrary fact; sentences consisting of only a propositional symbol are satisfiable, but not valid; the value of the symbol can be True or False. The constants True and False have a fixed interpretation: True indicates that the world is as stated, False indicates that the world is not as stated. Specification of the logical connectives is frequently given explicitly via truth tables.

110 Truth Tables for Connectives
P, Q: ¬P, P ∧ Q, P ∨ Q, P ⇒ Q, P ⇔ Q
False, False: True, False, False, True, True
False, True: True, False, True, True, False
True, False: False, False, True, False, False
True, True: False, True, True, True, True

111 Validity and Inference Truth tables can be used to test sentences for validity: one row for each possible combination of truth values for the symbols in the sentence; the final value must be True in every row. This is a variation of the model checking approach. Not very practical for large sentences, but sometimes used with customized improvements in specific domains, such as VLSI design.
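The truth-table (model-checking) test can be sketched in a few lines of Python, with a sentence represented as an ordinary Boolean function of its symbols (an illustrative encoding, not a parser); the example sentence is the one worked through on slides 122–126:

from itertools import product

def is_valid(sentence, num_symbols):
    # Valid iff the sentence evaluates to True in every possible model.
    return all(sentence(*values)
               for values in product([True, False], repeat=num_symbols))

implies = lambda a, b: (not a) or b
# ((W13 or W22) and not W22) => W13
sentence = lambda w13, w22: implies((w13 or w22) and not w22, w13)
print(is_valid(sentence, 2))   # True: the sentence is valid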

112 Wumpus World PEAS description Performance measure gold +1000, death -1000 -1 per step, -10 for using the arrow Environment Squares adjacent to wumpus are smelly Squares adjacent to pit are breezy Glitter iff gold is in the same square Shooting kills wumpus if you are facing it Shooting uses up the only arrow Grabbing picks up gold if in same square Releasing drops the gold in same square

113 Sensors: Stench, Breeze, Glitter, Bump, Scream Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot

114 Exploring a wumpus world (figures: the agent's knowledge after successive percepts, slides 114–121)

122 Validity Example known facts about the Wumpus World there is a wumpus in [1,3] or in [2,2] there is no wumpus in [2,2] question (hypothesis) is there a wumpus in [1,3] task prove or disprove the validity of the question approach construct a sentence that combines the above statements in an appropriate manner so that it answers the questions construct a truth table that shows if the sentence is valid incremental approach with truth tables for sub-sentences

123 Validity Example Interpretation: W13 = "Wumpus in [1,3]", W22 = "Wumpus in [2,2]". Fact: there is a wumpus in [1,3] or in [2,2], i.e., W13 ∨ W22.

124 Validity Example Second fact: there is no wumpus in [2,2], i.e., ¬W22.

125 Validity Example Question: can we conclude that the wumpus is in [1,3]? Build the truth table for the sentence ((W13 ∨ W22) ∧ ¬W22) ⇒ W13 over the four combinations of truth values for W13 and W22.

126 Validity Example ((W13 ∨ W22) ∧ ¬W22) ⇒ W13 is a valid sentence: for all possible combinations, its value is True. Hence the wumpus must be in [1,3].

127 Validity and Computers The computer has no access to the real world and can't check the truth value of individual sentences (facts). Humans often can do that, which greatly decreases the complexity of reasoning; humans also have experience in considering only important aspects, neglecting others. If a conclusion can be drawn from premises, independent of their truth values, then the sentence is valid. Checking this is usually too tedious for humans and may exclude potentially interesting sentences for which some, but not all, interpretations are true.

128 Models if there is an interpretation for a sentence such that the sentence is true in a particular world, that world is called a model refers to specific interpretations models can also be thought of as mathematical objects a model then is a mapping from proposition symbols to True or False

129 Models and Entailment A sentence α is entailed by a knowledge base KB if the models of the knowledge base KB are also models of α: KB ⊨ α

130 Inference and Derivation Inference rules allow the construction of new sentences from existing sentences. Notation: KB ⊢ α (the sentence α can be derived from KB). An inference procedure generates new sentences on the basis of inference rules; if all the new sentences are entailed, the inference procedure is called sound or truth-preserving.

131 Inference Rules Modus ponens: from an implication α ⇒ β and its premise α, one can infer the conclusion β. And-elimination: from a conjunction α1 ∧ α2 ∧ ... ∧ αn, one can infer any of the conjuncts αi. And-introduction: from a list of sentences α1, α2, …, αn, one can infer their conjunction α1 ∧ α2 ∧ ... ∧ αn. Or-introduction: from a sentence αi, one can infer its disjunction with anything else, α1 ∨ α2 ∨ ... ∨ αn.

132 Inference Rules Double-negation elimination: from ¬¬α, infer the positive sentence α. Unit resolution: from α ∨ β and ¬β, infer α (if one of the disjuncts in a disjunction is false, the other one must be true). Resolution: from α ∨ β and ¬β ∨ γ, infer α ∨ γ (β cannot be true and false, so one of the other disjuncts must be true); this can also be restated as: implication is transitive.

133 Complexity The truth-table method of inference is complete, but enumerating the 2^n rows of a table involving n symbols takes exponential computation time. Satisfiability for a set of sentences is NP-complete, so most likely there is no polynomial-time algorithm; in many practical cases, proofs can be found with moderate effort. There is a class of sentences with polynomial inference procedures (Horn sentences or Horn clauses): P1 ∧ P2 ∧ ... ∧ Pn ⇒ Q

134 Wumpus Logic An agent can use propositional logic to reason about the Wumpus world. The knowledge base contains percepts, e.g. ¬S1,1, ¬S2,1, S1,2, ¬B1,1, B2,1, ¬B1,2, and rules, e.g. R1: ¬S1,1 ⇒ ¬W1,1 ∧ ¬W1,2 ∧ ¬W2,1 R2: ¬S2,1 ⇒ ¬W1,1 ∧ ¬W2,1 ∧ ¬W2,2 ∧ ¬W3,1 R3: ¬S1,2 ⇒ ¬W1,1 ∧ ¬W1,2 ∧ ¬W2,2 ∧ ¬W1,3 R4: S1,2 ⇒ W1,1 ∨ W1,2 ∨ W2,2 ∨ W1,3 ...

135 Finding the Wumpus Two options: construct a truth table to show that W1,3 is a valid sentence (rather tedious), or use inference rules: apply some inference rules to sentences already in the knowledge base.

136 Action in the Wumpus World Additional rules are required to determine actions for the agent, e.g. RM: A1,1 ∧ EastA ∧ W2,1 ⇒ ¬Forward RM+1: ... The agent also needs to Ask the knowledge base what to do, and must ask specific questions ("Can I go forward?"); general questions ("Where should I go?") are not possible in propositional logic.

137 Propositional Wumpus Agent the size of the knowledge base even for a small wumpus world becomes immense explicit statements about the state of each square additional statements for actions, time easily reaches thousands of sentences completely unmanageable for humans

138 Exercise: Wumpus World in Propositional Logic Express important knowledge about the Wumpus world through sentences in propositional logic: the status of the environment, the percepts of the agent in a specific situation, new insights obtained by reasoning (rules for the derivation of new sentences, new sentences), decisions made by the agent, actions performed by the agent, changes in the environment as a consequence of the actions, background (general properties of the Wumpus world), and learning from experience.

139 Limitations of Propositional Logic Number of propositions: since everything has to be spelled out explicitly, the number of rules is huge. Dealing with change (monotonicity): even in very simple worlds there is change, e.g., the agent's position changes; time-dependent propositions and rules can be used, but require even more propositions and rules. Propositional logic has only one representational device, the proposition, which makes it difficult to represent objects and relations, properties, functions, variables, ...

140 Bridge-In to Predicate Logic Limitations of propositional logic in the Wumpus World: a large list of statements, change, and the proposition as the only representational device. Usefulness of objects and relations between them, properties, functions.

141 Formal Languages and Commitments
Language: what it commits to / what it can say about a fact
Propositional Logic: facts / true, false, unknown
First-order Logic: facts, objects, relations / true, false, unknown
Temporal Logic: facts, objects, relations, times / true, false, unknown
Probability Theory: facts / degree of belief in [0,1]
Fuzzy Logic: facts with degree of truth in [0,1] / known interval value

142 Commitments in FOL facts same as in propositional logic objects corresponds to entities in the real world (physical objects, concepts) relations connects objects to each other

143 Predicate Logic new concepts complex objects terms relations predicates quantifiers syntax semantics inference rules

144 Examples of Objects, Relations The smelly wumpus occupies square [1,3] objects: wumpus, square 1,3 property: smelly relation: occupies Two plus two equals four objects: two, four relation: equals function: plus

145 Objects distinguishable things in the real world e.g. people, cars, computers, programs,... the set of objects determines the domain of a model frequently includes concepts in contrast to physical objects properties describe specific aspects of objects green, round, heavy, visible, can be used to distinguish between objects

146 Relations establish connections between objects unary relations refer to a single object e.g. mother-of(John), brother-of(Jill), spouse-of(Joe) often called functions binary relations relate two objects to each other e.g. twins(John,Jill), married(Joe, Jane) n-ary relations relate n objects to each other e.g. triplets(Jim, Tim, Wim), seven-dwarfs(D1,..., D7) relations can be defined by the designer or user neighbor, successor, next to, taller than, younger than, … functions are a special type of relation often distinguished from similar binary relations by appending -of e.g. brothers(John, Jim) vs. brother-of(John)

147 Syntax based on sentences more complex than propositional logic constants, predicates, terms, quantifiers constant symbols A, B, C, Franz, Square 1,3, … stand for unique objects ( in a specific context) predicate symbols Adjacent-To, Younger-Than,... describes relations between objects function symbols Father-Of, Square-Position, … the given object is related to exactly one other object

148 Semantics relates sentences to models in order to determine their truth values provided by interpretations for the basic constructs usually suggested by meaningful names (intended interpretations) constants the interpretation identifies the object in the real world predicate symbols the interpretation specifies the particular relation in a model function symbols identifies the object referred to by a tuple of object

149 Grammar of Predicate Logic
Sentence → AtomicSentence | (Sentence Connective Sentence) | Quantifier Variable, ... Sentence | ¬Sentence
AtomicSentence → Predicate(Term, …) | Term = Term
Term → Function(Term, …) | Constant | Variable
Connective → ∧ | ∨ | ⇒ | ⇔
Quantifier → ∀ | ∃
Constant → A, B, C, X1, X2, Jim, Jack
Variable → a, b, c, x1, x2, counter, position
Predicate → Adjacent-To, Younger-Than, ...
Function → Father-Of, Square-Position, Sqrt, Cosine
Ambiguities are resolved through precedence or parentheses

150 Terms logical expressions that specify objects constants and variables are terms more complex terms are constructed from function symbols and simpler terms, enclosed in parentheses basically a complicated name of an object

151 Atomic Sentences state facts about objects and their relations specified through predicates and terms the predicate identifies the relation, the terms identify the objects that have the relation an atomic sentence is true if the relation between the objects holds

152 Examples Atomic Sentences Father(Jack, John), Mother(Jill, John), Sister(Jane, John) Parents(Jack, Jill, John, Jane) Married(Jack, Jill) Married(Father-Of(John), Mother-Of(John)) Married(Father-Of(John), Mother-Of(Jane)) Married(Parents(Jack, Jill, John, Jane))

153 Complex Sentences logical connectives can be used to build more complex sentences semantics is specified as in propositional logic

154 Examples Complex Sentences Father(Jack, John) Mother(Jill, John) Sister(Jane, John) Sister(John, Jane) Parents(Jack, Jill, John, Jane) Married(Jack, Jill) Older-Than(Jane, John) Older-Than(John, Jane) Older(Father-Of(John), 30) Older (Mother- Of(John), 20)

155 Quantifiers Can be used to express properties of collections of objects, eliminating the need to explicitly enumerate all objects. Predicate logic uses two quantifiers: the universal quantifier ∀ and the existential quantifier ∃.

156 Universal Quantification States that a predicate P holds for all objects x in the universe under discourse: ∀x P(x). The sentence is true if and only if all the individual sentences, where the variable x is replaced by the individual objects it can stand for, are true.

157 Example Universal Quantification Assume that x denotes the squares in the wumpus world. ∀x Is-Empty(x) ∨ Contains-Agent(x) ∨ Contains-Wumpus(x) is true if and only if all of the following sentences are true: Is-Empty(S11) ∨ Contains-Agent(S11) ∨ Contains-Wumpus(S11), Is-Empty(S12) ∨ Contains-Agent(S12) ∨ Contains-Wumpus(S12), Is-Empty(S13) ∨ Contains-Agent(S13) ∨ Contains-Wumpus(S13), ..., Is-Empty(S21) ∨ Contains-Agent(S21) ∨ Contains-Wumpus(S21), ..., Is-Empty(S44) ∨ Contains-Agent(S44) ∨ Contains-Wumpus(S44)

158 Usage of Universal Quantification Universal quantification is frequently used to make statements like "All humans are mortal", "All cats are mammals", "All birds can fly", … This can be expressed through sentences like ∀x Human(x) ⇒ Mortal(x), ∀x Cat(x) ⇒ Mammal(x), ∀x Bird(x) ⇒ Can-Fly(x). These sentences are equivalent to the explicit sentences about individuals: (Human(John) ⇒ Mortal(John)) ∧ (Human(Jane) ⇒ Mortal(Jane)) ∧ (Human(Jill) ⇒ Mortal(Jill)) ∧ ...

159 Existential Quantification States that a predicate P holds for some object in the universe: ∃x P(x). The sentence is true if and only if there is at least one true individual sentence where the variable x is replaced by one of the individual objects it can stand for.

160 Example Existential Quantification Assume that x denotes the squares in the wumpus world. ∃x Glitter(x) is true if and only if at least one of the following sentences is true: Glitter(S11), Glitter(S12), Glitter(S13), ..., Glitter(S21), ..., Glitter(S44)

161 Usage of Existential Quantification Existential quantification is used to make statements like "Some humans are computer scientists", "John has a sister who is a computer scientist", "Some birds can't fly", … This can be expressed through sentences like ∃x Human(x) ∧ Computer-Scientist(x), ∃x Sister(x, John) ∧ Computer-Scientist(x), ∃x Bird(x) ∧ ¬Can-Fly(x). These sentences are equivalent to the explicit sentences about individuals: (Human(John) ∧ Computer-Scientist(John)) ∨ (Human(Jane) ∧ Computer-Scientist(Jane)) ∨ (Human(Jill) ∧ Computer-Scientist(Jill)) ∨ ...

162 Multiple Quantifiers More complex sentences can be formulated with multiple variables and by nesting quantifiers. The order of quantification is important. Variables must be introduced by quantifiers, and belong to the innermost quantifier that mentions them. Examples: ∀x,y Parent(x,y) ⇒ Child(y,x); ∀x Human(x) ⇒ ∃y Mother(y,x); ∀x Human(x) ⇒ ∃y Loves(x,y); ∃x Human(x) ∧ ∀y Loves(x,y); ∃x Human(x) ∧ ∀y Loves(y,x)

163 Connections between ∀ and ∃ All statements made with one quantifier can be converted into equivalent statements with the other quantifier by using negation: ∀ is a conjunction over all objects under discourse, ∃ is a disjunction over all objects under discourse, and De Morgan's rules apply to quantified sentences: ∀x P(x) ≡ ¬∃x ¬P(x); ∃x P(x) ≡ ¬∀x ¬P(x); ¬∀x P(x) ≡ ∃x ¬P(x); ¬∃x P(x) ≡ ∀x ¬P(x). Strictly speaking, only one quantifier is necessary; using both is more convenient.

164 Equality equality indicates that two terms refer to the same object the equality symbol = is an (in-fix) shorthand e.g. Father(Jane) = Jim

165 Domains a section of the world we want to reason about assertion a sentence added to the knowledge about the domain axiom a statement with basic, factual information about the domain often used as definitions to specify predicates in terms of already defined predicates theorem statement entailed by the axioms it follows logically from the axioms

166 Example: Family Relationships objects: people properties: gender, … expressed as unary predicates Male(x), Female(y) relations: parenthood, brotherhood, marriage expressed through binary predicates Parent(x,y), Brother(x,y), … functions: motherhood, fatherhood Mother(x), Father(y) because every person has exactly one mother and one father there may also be a relation Mother-of(x,y), Father-of(x,y)

167 Family Relationships ∀m,c Mother(c) = m ⇔ Female(m) ∧ Parent(m,c) ∀w,h Husband(h,w) ⇔ Male(h) ∧ Spouse(h,w) ∀x Male(x) ⇔ ¬Female(x) ∀g,c Grandparent(g,c) ⇔ ∃p Parent(g,p) ∧ Parent(p,c) ∀x,y Sibling(x,y) ⇔ ¬(x=y) ∧ ∃p Parent(p,x) ∧ Parent(p,y) ...

168 Inference in first-order logic

169 Universal instantiation (UI) Every instantiation of a universally quantified sentence is entailed by it: from ∀v α infer Subst({v/g}, α) for any variable v and ground term g (a term without variables). E.g., ∀x King(x) ∧ Greedy(x) ⇒ Evil(x) yields: King(John) ∧ Greedy(John) ⇒ Evil(John), King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard), King(Father(John)) ∧ Greedy(Father(John)) ⇒ Evil(Father(John)), ...

170 Existential instantiation (EI) For any sentence α, variable v, and constant symbol k that does not appear elsewhere in the knowledge base: from ∃v α infer Subst({v/k}, α). E.g., ∃x Crown(x) ∧ OnHead(x,John) yields: Crown(C1) ∧ OnHead(C1,John), provided C1 is a new constant symbol, called a Skolem constant.

171 Reduction to propositional inference Suppose the KB contains just the following: ∀x King(x) ∧ Greedy(x) ⇒ Evil(x), King(John), Greedy(John), Brother(Richard,John). Instantiating the universal sentence in all possible ways, we have: King(John) ∧ Greedy(John) ⇒ Evil(John), King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard), King(John), Greedy(John), Brother(Richard,John). The new KB is propositionalized: the proposition symbols are King(John), Greedy(John), Evil(John), King(Richard), etc.

172 Reduction contd. Every FOL KB can be propositionalized so as to preserve entailment A ground sentence is entailed by new KB iff entailed by original KB Idea: propositionalize KB and query, apply resolution, return result Problem: with function symbols, there are infinitely many ground terms, e.g., Father(Father(Father(John)))

173 Reduction contd. Theorem (Herbrand, 1930): If a sentence α is entailed by an FOL KB, it is entailed by a finite subset of the propositionalized KB. Idea: for n = 0 to ∞, create a propositional KB by instantiating with depth-n terms and see if α is entailed by this KB. Problem: this works if α is entailed, but loops if α is not entailed. Theorem (Turing 1936, Church 1936): Entailment for FOL is semidecidable: algorithms exist that say yes to every entailed sentence, but no algorithm exists that also says no to every nonentailed sentence.

174 Problems with propositionalization Propositionalization seems to generate lots of irrelevant sentences. Example: from ∀x King(x) ∧ Greedy(x) ⇒ Evil(x), King(John), ∀y Greedy(y), Brother(Richard,John) it seems obvious that Evil(John), but propositionalization produces lots of facts such as Greedy(Richard) that are irrelevant. With p k-ary predicates and n constants, there are p·n^k instantiations.

175 Unification We can get the inference immediately if we can find a substitution θ such that King(x) and Greedy(x) match King(John) and Greedy(y): θ = {x/John, y/John} works. Unify(α,β) = θ if αθ = βθ.
p, q: θ
Knows(John,x), Knows(John,Jane): {x/Jane}
Knows(John,x), Knows(y,OJ): {x/OJ, y/John}
Knows(John,x), Knows(y,Mother(y)): {y/John, x/Mother(John)}
Knows(John,x), Knows(x,OJ): no substitution possible yet
Standardizing apart eliminates overlap of variables, e.g., Knows(John,z27) and Knows(z17,OJ)

176 Unification We can get the inference immediately if we can find a substitution θ such that King(x) and Greedy(x) match King(John) and Greedy(y): θ = {x/John, y/John} works. Unification finds substitutions that make different logical expressions look identical. UNIFY takes two sentences and returns a unifier for them, if one exists: UNIFY(p,q) = θ where SUBST(θ,p) = SUBST(θ,q). Basically, find a θ that makes the two clauses look alike.

177 Unification Examples UNIFY(Knows(John,x), Knows(John,Jane)) = {x/Jane} UNIFY(Knows(John,x), Knows(y,Bill)) = {x/Bill, y/John} UNIFY(Knows(John,x), Knows(y,Mother(y))) = {y/John, x/Mother(John)} UNIFY(Knows(John,x), Knows(x,Elizabeth)) = fail The last example fails because x would have to be both John and Elizabeth. We can avoid this problem by standardizing apart: the two statements now read UNIFY(Knows(John,x), Knows(z,Elizabeth)), which is solvable: UNIFY(Knows(John,x), Knows(z,Elizabeth)) = {x/Elizabeth, z/John}

178 Unification To unify Knows(John,x) and Knows(y,z) Can use θ = {y/John, x/z } Or θ = {y/John, x/John, z/John} The first unifier is more general than the second. There is a single most general unifier (MGU) that is unique up to renaming of variables. MGU = { y/John, x/z }

179 Unification Unification algorithm: Recursively explore the two expressions side by side Build up a unifier along the way Fail if two corresponding points do not match

180 The unification algorithm (figure, continued on slide 181)
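The recursive unification algorithm can be sketched in Python as follows, representing compound terms as tuples like ("Knows", "John", "x") and, by a convention assumed only for this sketch, variables as lowercase strings; the occurs check is omitted for brevity:

def is_variable(t):
    return isinstance(t, str) and t[:1].islower()   # convention: 'x', 'y' are variables

def substitute(theta, t):
    # Apply the substitution theta to term t, recursively following bindings.
    if is_variable(t):
        return substitute(theta, theta[t]) if t in theta else t
    if isinstance(t, tuple):
        return tuple(substitute(theta, a) for a in t)
    return t

def unify(a, b, theta=None):
    # Return a most general unifier extending theta, or None on failure.
    if theta is None:
        theta = {}
    a, b = substitute(theta, a), substitute(theta, b)
    if a == b:
        return theta
    if is_variable(a):
        return {**theta, a: b}
    if is_variable(b):
        return {**theta, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for ai, bi in zip(a, b):
            theta = unify(ai, bi, theta)
            if theta is None:
                return None
        return theta
    return None   # constants or structures that do not match

# UNIFY(Knows(John,x), Knows(y,Mother(y))) -> {'y': 'John', 'x': ('Mother', 'John')}
print(unify(("Knows", "John", "x"), ("Knows", "y", ("Mother", "y"))))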

182 Simple Example Brother(x,John) Father(Henry,y) Mother(z,John) Brother(Richard,x) Father(y,Richard) Mother(Eleanore,x)

183 Generalized Modus Ponens (GMP) From p1', p2', …, pn' and (p1 ∧ p2 ∧ … ∧ pn ⇒ q), infer qθ, where pi'θ = piθ for all i. Example: p1' is King(John), p1 is King(x); p2' is Greedy(y), p2 is Greedy(x); θ is {x/John, y/John}; q is Evil(x); qθ is Evil(John). GMP is used with a KB of definite clauses (exactly one positive literal); all variables are assumed universally quantified.

184 Soundness of GMP Need to show that p1', …, pn', (p1 ∧ … ∧ pn ⇒ q) ⊨ qθ provided that pi'θ = piθ for all i. Lemma: for any sentence p, we have p ⊨ pθ by UI. 1. (p1 ∧ … ∧ pn ⇒ q) ⊨ (p1 ∧ … ∧ pn ⇒ q)θ = (p1θ ∧ … ∧ pnθ ⇒ qθ) 2. p1', …, pn' ⊨ p1' ∧ … ∧ pn' ⊨ p1'θ ∧ … ∧ pn'θ 3. From 1 and 2, qθ follows by ordinary Modus Ponens

185 Storage and Retrieval Use TELL and ASK to interact with the Inference Engine; implemented with STORE and FETCH. STORE(s) stores sentence s; FETCH(q) returns all unifiers that the query q unifies with. Example: q = Knows(John,x), and the KB is: Knows(John,Jane), Knows(y,Bill), Knows(y,Mother(y)). The result is θ1 = {x/Jane}, θ2 = {y/John, x/Bill}, θ3 = {y/John, x/Mother(John)}

186 Storage and Retrieval First approach: Create a long list of all propositions in Knowledge Base Attempt unification with all propositions in KB Works, but is inefficient Need to restrict unification attempts to sentences that have some chance of unifying Index facts in KB Predicate Indexing Index predicates: All Knows sentences in one bucket All Loves sentences in another Use Subsumption Lattice (see below)

187 Storage and Retrieval Subsumption Lattice: a child is obtained from its parent through a single substitution, and the lattice contains all possible queries that can be unified with it. Works well for small lattices; a predicate with n arguments has a lattice of size 2^n; the structure of the lattice depends on whether the base contains repeated variables. Example nodes: Knows(x,y); Knows(John,x), Knows(x,John), Knows(x,x); Knows(John,John).

188 Forward Chaining Idea: Start with atomic sentences in the KB Apply Modus Ponens Add new atomic sentences until no further inferences can be made Works well for a KB consisting of Situation Response clauses when processing newly arrived data

189 Forward Chaining First-order definite clauses: disjunctions of literals of which exactly one is positive. Examples: King(x) ∧ Greedy(x) ⇒ Evil(x), King(John), Greedy(y). First-order definite clauses can include variables, which are assumed to be universally quantified: Greedy(y) means ∀y Greedy(y). Not every KB can be converted into first-order definite clauses.

190 Example knowledge base The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American. Prove that Col. West is a criminal

191 Example knowledge base contd. ... it is a crime for an American to sell weapons to hostile nations: American(x) ∧ Weapon(y) ∧ Sells(x,y,z) ∧ Hostile(z) ⇒ Criminal(x) Nono … has some missiles, i.e., ∃x Owns(Nono,x) ∧ Missile(x): Owns(Nono,M1) and Missile(M1) … all of its missiles were sold to it by Colonel West: Missile(x) ∧ Owns(Nono,x) ⇒ Sells(West,x,Nono) Missiles are weapons: Missile(x) ⇒ Weapon(x) An enemy of America counts as "hostile": Enemy(x,America) ⇒ Hostile(x) West, who is American …: American(West) The country Nono, an enemy of America …: Enemy(Nono,America)

192 Forward chaining algorithm
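A naive forward-chaining loop over first-order definite clauses can be sketched as below; it reuses the unify and substitute helpers from the unification sketch after slide 181, represents a rule as a (premises, conclusion) pair, and is illustrated on the toy King/Greedy KB rather than the full crime example:

def match_premises(premises, facts, theta):
    # Yield every substitution extending theta under which all premises unify with known facts.
    if not premises:
        yield theta
        return
    first, rest = premises[0], premises[1:]
    for fact in facts:
        t = unify(first, fact, dict(theta))
        if t is not None:
            yield from match_premises(rest, facts, t)

def forward_chain(rules, facts, query):
    # Repeatedly fire every rule on the known facts until the query is derived
    # or no new facts appear (no indexing or incremental matching).
    facts = set(facts)
    while True:
        new = set()
        for premises, conclusion in rules:
            for theta in match_premises(premises, facts, {}):
                derived = substitute(theta, conclusion)
                if derived not in facts and derived not in new:
                    new.add(derived)
                    if unify(derived, query, {}) is not None:
                        return True
        if not new:
            return False
        facts |= new

rules = [([("King", "x"), ("Greedy", "x")], ("Evil", "x"))]
facts = [("King", "John"), ("Greedy", "John"), ("Greedy", "Richard")]
print(forward_chain(rules, facts, ("Evil", "John")))   # True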

193 Forward chaining proof (figures: the proof tree is built up in stages over slides 193–195)

196 Properties of forward chaining Sound and complete for first-order definite clauses Datalog = first-order definite clauses + no functions FC terminates for Datalog in finite number of iterations May not terminate in general if α is not entailed This is unavoidable: entailment with definite clauses is semidecidable

197 Efficiency of forward chaining Incremental forward chaining: no need to match a rule on iteration k if a premise wasn't added on iteration k-1 match each rule whose premise contains a newly added positive literal Matching itself can be expensive: Database indexing allows O(1) retrieval of known facts e.g., query Missile(x) retrieves Missile(M 1 ) Forward chaining is widely used in deductive databases

198 Hard matching example Diff(wa,nt) ∧ Diff(wa,sa) ∧ Diff(nt,q) ∧ Diff(nt,sa) ∧ Diff(q,nsw) ∧ Diff(q,sa) ∧ Diff(nsw,v) ∧ Diff(nsw,sa) ∧ Diff(v,sa) ⇒ Colorable() Diff(Red,Blue) Diff(Red,Green) Diff(Green,Red) Diff(Green,Blue) Diff(Blue,Red) Diff(Blue,Green) Colorable() is inferred iff the CSP has a solution. CSPs include 3SAT as a special case, hence matching is NP-hard.

199 Backward Chaining Improves on forward chaining by not making irrelevant conclusions. Alternatives to backward chaining: restrict forward chaining to a relevant set of forward rules, or rewrite the rules so that only relevant variable bindings are made, using a magic set. Example: rewrite the rule as Magic(x) ∧ American(x) ∧ Weapon(y) ∧ Sells(x,y,z) ∧ Hostile(z) ⇒ Criminal(x) and add the fact Magic(West).

200 Backward Chaining Idea: given a query, find all substitutions that satisfy the query. Algorithm: work on lists of goals, starting with the original query; the algorithm finds every clause in the KB whose positive literal (head) unifies with the first goal and adds the remainder (body) to the list of goals.

201 Backward chaining algorithm SUBST(COMPOSE(θ1, θ2), p) = SUBST(θ2, SUBST(θ1, p))
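Backward chaining can be sketched with the same helpers and rule representation as the forward-chaining sketch; the generator below proves a list of goals left to right, trying facts first and then rule heads (standardizing apart of rule variables is omitted for brevity):

def backward_chain(rules, facts, goals, theta):
    # Yield substitutions under which all goals follow from the facts and rules.
    if not goals:
        yield theta
        return
    first, rest = substitute(theta, goals[0]), goals[1:]
    for fact in facts:                          # try to match a known fact
        t = unify(first, fact, dict(theta))
        if t is not None:
            yield from backward_chain(rules, facts, rest, t)
    for premises, conclusion in rules:          # try rules whose head unifies with the goal
        t = unify(first, conclusion, dict(theta))
        if t is not None:
            yield from backward_chain(rules, facts, list(premises) + list(rest), t)

rules = [([("King", "x"), ("Greedy", "x")], ("Evil", "x"))]
facts = [("King", "John"), ("Greedy", "John")]
print(next(backward_chain(rules, facts, [("Evil", "John")], {}), None))   # {'x': 'John'}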

202 Backward chaining example (figures: the proof tree is expanded goal by goal over slides 202–209)

210 Properties of backward chaining Depth-first recursive proof search: space is linear in size of proof Incomplete due to infinite loops fix by checking current goal against every goal on stack Inefficient due to repeated subgoals (both success and failure) fix using caching of previous results (extra space) Widely used for logic programming

211 Logic programming: Prolog Algorithm = Logic + Control Basis: backward chaining with Horn clauses + bells & whistles Widely used in Europe, Japan (basis of 5th Generation project) Program = set of clauses: head :- literal 1, … literal n. criminal(X) :- american(X), weapon(Y), sells(X,Y,Z), hostile(Z). Depth-first, left-to-right backward chaining Built-in predicates for arithmetic etc., e.g., X is Y*Z+3 Built-in predicates that have side effects (e.g., input and output predicates, assert/retract predicates) Closed-world assumption ("negation as failure") e.g., given alive(X) :- not dead(X). alive(joe) succeeds if dead(joe) fails

212 Prolog Appending two lists to produce a third: append([],Y,Y). append([X|L],Y,[X|Z]) :- append(L,Y,Z). query: append(A,B,[1,2]) ? answers: A=[] B=[1,2] A=[1] B=[2] A=[1,2] B=[]

213 Prolog Has problems with repeated states and infinite paths. Example: path finding in graphs: path(X,Z) :- link(X,Z). path(X,Z) :- path(X,Y), link(Y,Z). (Figure: the search tree for the query path(a,c) on a three-node graph A-B-C.)

214 Prolog Has problems with repeated states and infinite paths. With the clause order path(X,Z) :- path(X,Y), link(Y,Z). path(X,Z) :- link(X,Z). the same query leads to infinite left recursion. (Figure: the corresponding search tree for path(a,c).)

215 Resolution Resolution for propositional logic is a complete inference procedure. The existence of complete proof procedures in mathematics would entail: all conjectures can be established mechanically, and all mathematics can be established as the logical consequence of a set of fundamental axioms. Gödel 1930 (Completeness Theorem for first-order logic): any entailed sentence has a finite proof; no algorithm was given until J. A. Robinson's resolution algorithm in 1965. Gödel 1931 (Incompleteness Theorem): any logical system with induction is necessarily incomplete; there are sentences that are entailed, but no proof can be given.

216 Resolution First-order logic resolution requires sentences in CNF (conjunctive normal form): each clause is a disjunction of literals, but literals can contain variables, which are assumed to be universally quantified. Example: convert ∀x American(x) ∧ Weapon(y) ∧ Sells(x,y,z) ∧ Hostile(z) ⇒ Criminal(x) into the clause ¬American(x) ∨ ¬Weapon(y) ∨ ¬Sells(x,y,z) ∨ ¬Hostile(z) ∨ Criminal(x).
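The conversion of a definite clause like the one above is mechanical: negate each premise and disjoin the conclusion. A tiny sketch, with literals kept as plain strings:

    def implication_to_clause(premises, conclusion):
        """A definite clause  p1 ^ ... ^ pn => q  becomes the CNF clause
        ~p1 v ... v ~pn v q (variables stay implicitly universally quantified)."""
        return [("not", p) for p in premises] + [conclusion]

    clause = implication_to_clause(
        ["American(x)", "Weapon(y)", "Sells(x,y,z)", "Hostile(z)"],
        "Criminal(x)")
    print(clause)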

217 Conversion to CNF Everyone who loves all animals is loved by someone: ∀x [∀y Animal(y) ⇒ Loves(x,y)] ⇒ [∃y Loves(y,x)]
1. Eliminate biconditionals and implications: ∀x [¬∀y (¬Animal(y) ∨ Loves(x,y))] ∨ [∃y Loves(y,x)]
2. Move ¬ inwards, using ¬∀x p ≡ ∃x ¬p and ¬∃x p ≡ ∀x ¬p:
∀x [∃y ¬(¬Animal(y) ∨ Loves(x,y))] ∨ [∃y Loves(y,x)]
∀x [∃y Animal(y) ∧ ¬Loves(x,y)] ∨ [∃y Loves(y,x)]
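Steps 1 and 2 are easy to mechanize. The sketch below works on formulas encoded as nested tuples (an ad-hoc representation chosen for this example, not part of the slides) and reproduces the transformation of the animals sentence:

    # Formulas: ('imp', p, q), ('and', p, q), ('or', p, q), ('not', p),
    # ('forall', 'x', p), ('exists', 'y', p); atoms are plain strings.

    def elim_imp(f):
        """Step 1: rewrite p => q as ~p v q, everywhere."""
        if isinstance(f, str):
            return f
        op = f[0]
        if op == 'imp':
            return ('or', ('not', elim_imp(f[1])), elim_imp(f[2]))
        if op in ('forall', 'exists'):
            return (op, f[1], elim_imp(f[2]))
        return (op,) + tuple(elim_imp(a) for a in f[1:])

    def push_not(f):
        """Step 2: move ~ inwards with De Morgan, ~~p = p,
        ~forall x p = exists x ~p, and ~exists x p = forall x ~p."""
        if isinstance(f, str):
            return f
        op = f[0]
        if op == 'not' and not isinstance(f[1], str):
            g = f[1]
            if g[0] == 'not':
                return push_not(g[1])
            if g[0] == 'and':
                return ('or',) + tuple(push_not(('not', a)) for a in g[1:])
            if g[0] == 'or':
                return ('and',) + tuple(push_not(('not', a)) for a in g[1:])
            if g[0] == 'forall':
                return ('exists', g[1], push_not(('not', g[2])))
            if g[0] == 'exists':
                return ('forall', g[1], push_not(('not', g[2])))
        if op in ('forall', 'exists'):
            return (op, f[1], push_not(f[2]))
        if op == 'not':
            return f
        return (op,) + tuple(push_not(a) for a in f[1:])

    # Everyone who loves all animals is loved by someone.
    f = ('forall', 'x',
         ('imp', ('forall', 'y', ('imp', 'Animal(y)', 'Loves(x,y)')),
                 ('exists', 'y', 'Loves(y,x)')))
    print(push_not(elim_imp(f)))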

218 Conversion to CNF
3. Standardize variables: each quantifier should use a different variable: ∀x [∃y Animal(y) ∧ ¬Loves(x,y)] ∨ [∃z Loves(z,x)]
4. Skolemize (a more general form of existential instantiation): each existential variable is replaced by a Skolem function of the enclosing universally quantified variables: ∀x [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x)
5. Drop universal quantifiers: [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x)
6. Distribute ∨ over ∧: [Animal(F(x)) ∨ Loves(G(x),x)] ∧ [¬Loves(x,F(x)) ∨ Loves(G(x),x)]

219 Resolution Inference Rule Full first-order version: from the clauses l1 ∨ ··· ∨ lk and m1 ∨ ··· ∨ mn, infer
Subst(θ, l1 ∨ ··· ∨ li-1 ∨ li+1 ∨ ··· ∨ lk ∨ m1 ∨ ··· ∨ mj-1 ∨ mj+1 ∨ ··· ∨ mn)
where Unify(li, ¬mj) = θ. The two clauses are assumed to be standardized apart so that they share no variables. For example, ¬Rich(x) ∨ Unhappy(x) and Rich(Ken) resolve to Unhappy(Ken) with θ = {x/Ken}. Apply resolution steps to CNF(KB ∧ ¬α); complete for FOL.
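A compact sketch of binary resolution with unification, under the assumptions stated in the comments (clauses already standardized apart, no occurs check); the term and literal encodings are choices made for this example:

    def is_var(t):
        return isinstance(t, str) and t.startswith('?')

    def subst(theta, t):
        if is_var(t):
            return subst(theta, theta[t]) if t in theta else t
        if isinstance(t, tuple):
            return (t[0],) + tuple(subst(theta, a) for a in t[1:])
        return t

    def unify(x, y, theta):
        """Return an extension of theta unifying x and y, or None (no occurs check)."""
        if theta is None:
            return None
        x, y = subst(theta, x), subst(theta, y)
        if x == y:
            return theta
        if is_var(x):
            return {**theta, x: y}
        if is_var(y):
            return {**theta, y: x}
        if isinstance(x, tuple) and isinstance(y, tuple) and x[0] == y[0] and len(x) == len(y):
            for a, b in zip(x[1:], y[1:]):
                theta = unify(a, b, theta)
            return theta
        return None

    def negate(lit):
        return lit[1] if lit[0] == 'not' else ('not', lit)

    def resolve(c1, c2):
        """All binary resolvents of two clauses (lists of literals),
        assumed standardized apart."""
        resolvents = []
        for li in c1:
            for mj in c2:
                theta = unify(li, negate(mj), {})
                if theta is not None:
                    rest = [l for l in c1 if l is not li] + [m for m in c2 if m is not mj]
                    resolvents.append([subst(theta, l) for l in rest])
        return resolvents

    #  ~Rich(x) v Unhappy(x)  and  Rich(Ken)  resolve to  Unhappy(Ken)
    c1 = [('not', ('Rich', '?x')), ('Unhappy', '?x')]
    c2 = [('Rich', 'Ken')]
    print(resolve(c1, c2))   # [[('Unhappy', 'Ken')]]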

220 Resolution Show KB ⊨ α by showing that KB ∧ ¬α is unsatisfiable.
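In the propositional case this refutation test can be tried directly with an off-the-shelf satisfiability routine, e.g. sympy's satisfiable; this is only a propositional analogue of the first-order procedure:

    from sympy import symbols
    from sympy.logic.boolalg import And, Implies, Not
    from sympy.logic.inference import satisfiable

    p, q = symbols('p q')
    KB = And(p, Implies(p, q))      # KB: p, and p => q
    alpha = q

    # False: no model of KB & ~alpha exists, so KB entails alpha.
    print(satisfiable(And(KB, Not(alpha))))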

221 Resolution Example Everyone who loves all animals is loved by someone. Anyone who kills an animal is loved by no one. Jack loves all animals. Either Jack or Curiosity killed the cat, who is named Tuna. All cats are animals. Every dog kills a cat. Rintintin is a dog. Question: Did Curiosity kill the cat?

222 Resolution Example Everyone who loves all animals is loved by someone.
Formulate in FOL: ∀x [∀y [Animal(y) ⇒ Loves(x,y)]] ⇒ [∃z Loves(z,x)]
Remove implications: ∀x ¬[∀y ¬Animal(y) ∨ Loves(x,y)] ∨ [∃z Loves(z,x)]
Move negation inward: ∀x [∃y Animal(y) ∧ ¬Loves(x,y)] ∨ [∃z Loves(z,x)]
Skolemize: ∀x [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ [Loves(G(x),x)]
N.B.: the arguments of each Skolem function are all the enclosing universally quantified variables.
Drop universal quantifiers: [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ [Loves(G(x),x)]
Use the distributive law (and get two clauses): Animal(F(x)) ∨ Loves(G(x),x); ¬Loves(x,F(x)) ∨ Loves(G(x),x)

223 Resolution Example Anyone who kills an animal is loved by no one.
Translate to FOL: ∀x [∃y Animal(y) ∧ Kills(x,y)] ⇒ [∀z ¬Loves(z,x)]
Remove implications: ∀x ¬[∃y Animal(y) ∧ Kills(x,y)] ∨ [∀z ¬Loves(z,x)]
Move negations inwards: ∀x [∀y ¬Animal(y) ∨ ¬Kills(x,y)] ∨ [∀z ¬Loves(z,x)]
Drop quantifiers: ¬Animal(y) ∨ ¬Kills(x,y) ∨ ¬Loves(z,x)

224 Resolution Example Jack loves all animals.
FOL form: ∀x [Animal(x) ⇒ Loves(Jack,x)]
Remove implications: ∀x [¬Animal(x) ∨ Loves(Jack,x)]
Drop quantifier: ¬Animal(x) ∨ Loves(Jack,x)

225 Resolution Example Either Jack or Curiosity killed the cat, who is named Tuna.
FOL form (already in clause form): Kills(Jack,Tuna) ∨ Kills(Curiosity,Tuna); Cat(Tuna)

226 Resolution Example All cats are animals.
FOL form: ∀x [Cat(x) ⇒ Animal(x)]
Remove implications: ∀x [¬Cat(x) ∨ Animal(x)]
Drop quantifier: ¬Cat(x) ∨ Animal(x)

227 Resolution Example Every dog kills a cat.
FOL form: ∀x [Dog(x) ⇒ ∃y [Cat(y) ∧ Kills(x,y)]]
Remove implications: ∀x [¬Dog(x) ∨ ∃y [Cat(y) ∧ Kills(x,y)]]
Skolemize: ∀x [¬Dog(x) ∨ [Cat(H(x)) ∧ Kills(x,H(x))]]
Drop universal quantifiers: ¬Dog(x) ∨ [Cat(H(x)) ∧ Kills(x,H(x))]
Distribute (and obtain two clauses): ¬Dog(x) ∨ Cat(H(x)); ¬Dog(x) ∨ Kills(x,H(x))

228 Resolution Example Rintintin is a dog FOL form Dog(Rintintin)

229 Resolution Example The complete knowledge base in CNF:
Animal(F(x)) ∨ Loves(G(x),x)
¬Loves(x,F(x)) ∨ Loves(G(x),x)
¬Animal(y) ∨ ¬Kills(x,y) ∨ ¬Loves(z,x)
¬Animal(x) ∨ Loves(Jack,x)
Kills(Jack,Tuna) ∨ Kills(Curiosity,Tuna)
Cat(Tuna)
¬Cat(x) ∨ Animal(x)
¬Dog(x) ∨ Cat(H(x))
¬Dog(x) ∨ Kills(x,H(x))
Dog(Rintintin)

230 Resolution Example The CNF clauses from the previous slide form the knowledge base. Question: Did Curiosity kill the cat? Add the negated goal: ¬Kills(Curiosity,Tuna).

231 Resolution Example (KB clauses as on slide 229, plus the negated goal ¬Kills(Curiosity,Tuna).) Resolve Cat(Tuna) with ¬Cat(x) ∨ Animal(x): Unify(Cat(Tuna), Cat(x)) = {x/Tuna}; the resolvent is Animal(Tuna).

232 Resolution Example Resolve Kills(Jack,Tuna) ∨ Kills(Curiosity,Tuna) with the negated goal ¬Kills(Curiosity,Tuna): the resolvent is Kills(Jack,Tuna). Derived so far: Animal(Tuna), Kills(Jack,Tuna).

233 Resolution Example Resolve ¬Animal(y) ∨ ¬Kills(x,y) ∨ ¬Loves(z,x) with Animal(Tuna): Unify(Animal(Tuna), Animal(y)) = {y/Tuna}; the resolvent is ¬Kills(x,Tuna) ∨ ¬Loves(z,x).

234 Resolution Example Resolve ¬Loves(x,F(x)) ∨ Loves(G(x),x) with ¬Animal(z) ∨ Loves(Jack,z) (standardized apart): Unify(Loves(x,F(x)), Loves(Jack,z)) = {x/Jack, z/F(Jack)}. Substituting into the remaining literals gives the resolvent Loves(G(Jack),Jack) ∨ ¬Animal(F(Jack)).

235 Resolution Example Resolve Animal(F(x)) ∨ Loves(G(x),x) with Loves(G(Jack),Jack) ∨ ¬Animal(F(Jack)): Unify(Animal(F(x)), Animal(F(Jack))) = {x/Jack}. The resolvent, after removing the duplicated literal, is Loves(G(Jack),Jack).

236 Resolution Example Resolve ¬Kills(x,Tuna) ∨ ¬Loves(z,x) with Loves(G(Jack),Jack): Unify(Loves(z,x), Loves(G(Jack),Jack)) = {x/Jack, z/G(Jack)}. Substituting into the remaining literal gives the resolvent ¬Kills(Jack,Tuna).

237 Resolution Example Finally, resolve ¬Kills(Jack,Tuna) with Kills(Jack,Tuna) (derived on slide 232): the resolvent is the empty clause. KB ∧ ¬Kills(Curiosity,Tuna) is therefore unsatisfiable, the proof succeeds, and Curiosity killed the cat.

238 Resolution Resolution is refutation-complete: if a set of sentences is unsatisfiable, then resolution will be able to produce a contradiction (derive the empty clause). Theorem provers use control in order to be more efficient (the focus of most research effort) and separate control from the knowledge base. Example: Otter (Organized Techniques for Theorem-proving and Effective Research). Its inputs: a set of clauses known as the SoS (set of support), the important facts about the problem; search is focused on resolving a member of the SoS with another axiom. A set of usable axioms: background knowledge about the problem field. Rewrites/demodulators: rules that transform expressions into a canonical form. A set of parameters and clauses defining the control strategy, allowing the user to steer the search, plus filtering functions to eliminate useless subgoals.

239 Theorem Prover Successes First formal proof of Gödel's Incompleteness Theorem (1986). Proof that Robbins algebra (a simple set of axioms) is Boolean algebra (1996). Software verification: the Remote Agent spacecraft control program (2000).

240 Resolution proof: definite clauses

