ITCS 3153 Artificial Intelligence Lecture 12: Logical Agents (Chapter 7)

Propositional Logic
We are still emphasizing propositional logic. The very important question with this method: does a knowledge base of propositional logic entail a particular proposition?
– Can we generate some sequence of resolutions that proves the proposition follows from the knowledge base?

Backtracking
Another way (representation) for searching. The problem to solve is in CNF.
– Is Marvin a Martian, i.e., is M == 1 (true), given:
  Marvin is green --- G = 1
  Marvin is little --- L = 1
  (little and green) implies Martian --- (L ^ G) => M, equivalently ~(L ^ G) V M, equivalently ~L V ~G V M = 1
– Proof by contradiction: are there true/false values for G, L, and M that are consistent with the knowledge base and with Marvin not being a Martian?
  G ^ L ^ (~L V ~G V M) ^ ~M == 0?
Make sure you understand this logic.
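The contradiction test can be run mechanically with resolution. Below is a minimal Python sketch (my own illustration, not the lecture's code; the clause representation is an assumption) of resolution by refutation on the Marvin clauses: add the negated query ~M to the knowledge base and keep resolving pairs of clauses until the empty clause appears.

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (each clause is a frozenset of literals)."""
    return [frozenset((c1 - {lit}) | (c2 - {negate(lit)}))
            for lit in c1 if negate(lit) in c2]

def resolution_refutation(clauses):
    """True if the empty clause is derivable, i.e., the clause set is unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:                 # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:                # nothing new: no contradiction derivable
            return False
        clauses |= new

# KB: G, L, (~L v ~G v M); negated query: ~M
kb_plus_negated_query = [frozenset({"G"}), frozenset({"L"}),
                         frozenset({"~L", "~G", "M"}), frozenset({"~M"})]
print(resolution_refutation(kb_plus_negated_query))  # True -> KB entails M
```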

Searching for variable values
Want to show that G ^ L ^ (~L V ~G V M) ^ ~M == 0 for every assignment (i.e., the sentence is unsatisfiable)
– Randomly consider true/false assignments to the variables until we exhaust them all or find one that satisfies the sentence
  (G, L, M) = (1, 0, 0)… no
            = (0, 1, 0)… no
            = (0, 0, 0)… no
            = (1, 1, 0)… no
– Alternatively…
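A brute-force sketch of the exhaustive search just described (the helper name is mine): enumerate every truth assignment to (G, L, M) and check whether any of them makes G ^ L ^ (~L V ~G V M) ^ ~M true. Since none does, the sentence is unsatisfiable and the knowledge base entails M.

```python
from itertools import product

def sentence(G, L, M):
    # G ^ L ^ (~L v ~G v M) ^ ~M
    return G and L and ((not L) or (not G) or M) and (not M)

models = [assignment for assignment in product([True, False], repeat=3)
          if sentence(*assignment)]
print(models)   # [] -> no satisfying assignment, so the contradiction holds
```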

Searching for variable values
Davis-Putnam-Logemann-Loveland (DPLL) algorithm
Search through possible assignments to (G, L, M) via depth-first search: (0, 0, 0) to (0, 0, 1) to (0, 1, 0)…
– Each clause of the CNF must be true
  Terminate consideration of a branch as soon as any clause evaluates to false
– Use heuristics to reduce repeated computation of propositions
  Early termination
  Pure symbol heuristic
  Unit clause heuristic
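A compact DPLL sketch under the same assumed clause representation (frozensets of string literals); this is an illustrative reconstruction, not the lecture's reference implementation. It shows early termination, the unit clause heuristic, and the pure symbol heuristic before falling back to depth-first branching.

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})

    # Simplify clauses under the current partial assignment.
    active = []
    for clause in clauses:
        remaining, satisfied = [], False
        for lit in clause:
            var, want = lit.lstrip("~"), not lit.startswith("~")
            if var in assignment:
                if assignment[var] == want:
                    satisfied = True          # clause already true: drop it
                    break
            else:
                remaining.append(lit)
        if satisfied:
            continue
        if not remaining:                     # early termination: a clause is false
            return None
        active.append(frozenset(remaining))

    if not active:                            # every clause satisfied
        return assignment

    # Unit clause heuristic: a one-literal clause forces its variable's value.
    for clause in active:
        if len(clause) == 1:
            lit = next(iter(clause))
            assignment[lit.lstrip("~")] = not lit.startswith("~")
            return dpll(active, assignment)

    # Pure symbol heuristic: a symbol appearing with only one polarity can be
    # set to make all of its clauses true.
    literals = {lit for clause in active for lit in clause}
    for lit in literals:
        if negate(lit) not in literals:
            assignment[lit.lstrip("~")] = not lit.startswith("~")
            return dpll(active, assignment)

    # Otherwise branch on an unassigned symbol (depth-first search).
    var = next(iter(literals)).lstrip("~")
    for value in (False, True):
        result = dpll(active, {**assignment, var: value})
        if result is not None:
            return result
    return None

clauses = [frozenset({"G"}), frozenset({"L"}),
           frozenset({"~L", "~G", "M"}), frozenset({"~M"})]
print(dpll(clauses))   # None -> unsatisfiable, exactly what the contradiction proof needs
```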

Searching for variable values
Other ways to search for a satisfying (G, L, M) assignment of G ^ L ^ (~L V ~G V M) ^ ~M
Simulated Annealing (WalkSAT)
– Start with an initial guess, e.g., (0, 1, 1)
– The evaluation metric is the number of clauses that evaluate to true
– Move "in the direction" of guesses that cause more clauses to be true
– Many local minima, so use lots of randomness
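A rough WalkSAT-style sketch (the clause format, noise parameter p, and max_flips budget are my assumptions, not values from the lecture): on each step pick an unsatisfied clause and flip one of its variables, sometimes at random and sometimes greedily to maximize the number of satisfied clauses.

```python
import random

def satisfied(clause, model):
    return any(model[lit.lstrip("~")] != lit.startswith("~") for lit in clause)

def walksat(clauses, p=0.5, max_flips=10_000):
    symbols = {lit.lstrip("~") for clause in clauses for lit in clause}
    model = {s: random.choice([True, False]) for s in symbols}   # initial guess
    for _ in range(max_flips):
        unsatisfied = [c for c in clauses if not satisfied(c, model)]
        if not unsatisfied:
            return model                       # all clauses true: found a model
        clause = random.choice(unsatisfied)
        if random.random() < p:                # random walk step (escape local optima)
            var = random.choice(list(clause)).lstrip("~")
        else:                                  # greedy step: maximize satisfied clauses
            def score(v):
                flipped = {**model, v: not model[v]}
                return sum(satisfied(c, flipped) for c in clauses)
            var = max((lit.lstrip("~") for lit in clause), key=score)
        model[var] = not model[var]
    return None   # flip budget exhausted: no model found

clauses = [frozenset({"G"}), frozenset({"L"}),
           frozenset({"~L", "~G", "M"}), frozenset({"~M"})]
print(walksat(clauses))   # None here, consistent with the sentence being unsatisfiable
```

Returning None does not prove unsatisfiability; as the next slide explains, it only means the flip budget ran out.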

WalkSAT termination
How do you know when simulated annealing is done?
There is no way to know with certainty that no answer is possible
– It could have been bad luck
– It could be that there really is no answer
– Establish a maximum number of iterations and go with the best answer found by that point

So how well do these work?
Think about it this way:
– 16 of the 32 possible assignments are models of (i.e., satisfy) this sentence
– Therefore a random guess succeeds half the time, so about 2 random guesses should find a solution
– WalkSAT and DPLL should work quickly

What about more clauses?
If the number of symbols (variables) stays the same but the number of clauses increases:
– There are more ways for an assignment to fail (on any one clause)
– More searching through possible assignments is needed
– Define the ratio m/n = #clauses / #symbols
– We expect a large m/n to make for slower solutions
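A small experiment sketch (my own illustration, not from the slides) that makes the m/n intuition concrete: generate random 3-CNF sentences at a few clause/symbol ratios and check, by brute force, how often they are satisfiable.

```python
import random
from itertools import product

def random_3cnf(n_symbols, m_clauses):
    symbols = [f"X{i}" for i in range(n_symbols)]
    clauses = []
    for _ in range(m_clauses):
        picked = random.sample(symbols, 3)                      # 3 distinct symbols
        clauses.append(frozenset(s if random.random() < 0.5 else "~" + s
                                 for s in picked))              # random polarity
    return symbols, clauses

def is_satisfiable(symbols, clauses):
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(any(model[l.lstrip("~")] != l.startswith("~") for l in c)
               for c in clauses):
            return True
    return False

n, trials = 10, 30
for ratio in (2, 4, 6):
    sat = sum(is_satisfiable(*random_3cnf(n, ratio * n)) for _ in range(trials))
    print(f"m/n = {ratio}: {sat}/{trials} random sentences satisfiable")
```

With these (assumed) settings you should see the fraction of satisfiable sentences drop sharply as m/n grows, which is the slide's point about larger m/n being harder.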

What about more clauses?
A higher m/n means fewer assignments will work
– If fewer assignments work, it is harder for both DPLL and WalkSAT

Combining it all: the 4x4 Wumpus World
The "physics" of the game:
– At least one wumpus on the board
– At most one wumpus on the board (for any two squares, at least one is wumpus-free)
  n(n-1)/2 rules of the form: ~W1,1 V ~W1,2
– Total of 155 sentences containing 64 distinct symbols
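A sketch of how those placement rules can be generated (my encoding of the symbols, not necessarily the lecture's exact one): one big at-least-one clause plus n(n-1)/2 pairwise at-most-one clauses over the 16 squares.

```python
from itertools import combinations

squares = [(x, y) for x in range(1, 5) for y in range(1, 5)]
W = {sq: f"W{sq[0]},{sq[1]}" for sq in squares}            # one symbol per square

# At least one wumpus: W1,1 v W1,2 v ... v W4,4
at_least_one = frozenset(W[sq] for sq in squares)

# At most one wumpus: ~Wa v ~Wb for every pair of squares
at_most_one = [frozenset({"~" + W[a], "~" + W[b]})
               for a, b in combinations(squares, 2)]

print(len(at_most_one))       # 16*15/2 = 120 pairwise clauses
print(1 + len(at_most_one))   # 121 wumpus-placement sentences
```

The remaining sentences, which relate percepts such as breezes and stenches to pits and wumpuses, presumably account for the rest of the 155 sentences over 64 symbols quoted on the slide.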

Wumpus World
Inefficiencies appear as the world becomes large:
– The knowledge base must expand if the size of the world expands
It would be preferable to have sentences that apply to all squares
– We brought this subject up last week
– The next chapter addresses this issue