Knowledge-based Agents (KBAs)

1 Knowledge-based Agents (KBAs)
KBAs keep track of the world by means of an internal state which is constantly updated by the agent itself. A KBA is an agent which: has knowledge about the world it is functioning in; given an incomplete specification of the current state, can derive unseen (hidden) properties of the world; knows how the world evolves over time; knows what it wants to achieve; can reason about its possible courses of action. To build KBAs, we must decide on: how to represent the agent’s knowledge, i.e. how to address the so-called knowledge representation problem; how to carry out the agent’s reasoning.

2 Knowledge-based agent vs conventional computer program from a design point of view
KBA: 1. Identify the knowledge needed to solve the problem. 2. Select a representation framework in which this knowledge can be expressed. 3. Represent the knowledge in the selected framework. 4. Run the problem, i.e. apply the reasoning mechanism of the selected logic to infer all possible consequences of the initial knowledge. Computer program: 1. Design an algorithm to solve the problem. 2. Decide on a programming language to encode the algorithm. 3. Write the program. 4. Run the program.

3 Basic architecture of a knowledge-based agent.
A knowledge-based agent consists of: a knowledge base (domain specific) holding Fact 1, Fact 2, ..., Fact n, represented as sentences in some KR language, and an inference engine (domain independent) holding Rule 1, Rule 2, ..., Rule k, which define what follows from the facts in the KB. New facts are added through the TELL function; actions are returned through the ASK function.
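The TELL/ASK interface can be sketched in Python. This is an illustrative skeleton, not code from the course: the class name, the representation of a rule (a function from the current fact set to newly derivable facts), and the forward-chaining loop inside ask() are all assumptions made here.

```python
# Minimal sketch of the architecture above: a domain-specific fact store
# plus a domain-independent set of rules, accessed through TELL and ASK.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()   # Fact 1 ... Fact n (sentences in some KR language)
        self.rules = []      # Rule 1 ... Rule k (define what follows from facts)

    def tell(self, sentence):
        """TELL: add a new fact to the knowledge base."""
        self.facts.add(sentence)

    def ask(self, query):
        """ASK: does the query follow from the KB?
        Each rule maps the current facts to newly derivable facts; we apply
        the rules until no new fact appears (simple forward chaining)."""
        changed = True
        while changed:
            changed = False
            for rule in self.rules:
                new = rule(self.facts) - self.facts
                if new:
                    self.facts |= new
                    changed = True
        return query in self.facts
```

For example, after `kb.rules.append(lambda facts: {'q'} if 'p' in facts else set())` and `kb.tell('p')`, the call `kb.ask('q')` returns True.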

4 Building the KB: declarative vs evolutionary approach
Declarative approach: building the initial knowledge base (“background knowledge”) is part of the design process. This knowledge reflects the designer’s knowledge about the agent’s world. The agent will add new facts as it perceives the world or reasons about the world by means of its own inference engine. Evolutionary approach: the initial knowledge base is empty, but the agent possesses a learning capability by means of which it will gradually build its background knowledge. Such an agent can be fully autonomous, but research suggests that this process will be very inefficient. The combination of declarative and evolutionary approaches may be a good compromise: the agent is provided with some background knowledge, which it gradually expands and refines by means of its learning capability.

5 KBAs can be described at three different levels depending on the particular aspect of their design that we want to concentrate on. Knowledge (epistemological) level: defines an agent in terms of the knowledge that it possesses. Logical level: defines how agent knowledge is encoded into formulas of the selected knowledge representation language. Implementation level: defines how agent knowledge is stated in the selected implementation language (for example, in LISP). Example: Wumpus World. This is a grid of squares surrounded by walls, where each square may contain gold that the agent is hunting for, or a pit which is deadly for the agent - thus, it wants to avoid it. A Wumpus lives in this world, and the agent does not want to encounter it, because this might turn deadly for the agent. The agent has some knowledge about this world that may allow it to get to the gold, grab it and safely escape.

6 Wumpus world characterization (review pages 41 – 46)
Partially observable – only current location is perceived. Deterministic – outcomes of agent’s actions are fully specified. Static – nothing affects the world except for the agent. Not episodic – sequential at the level of actions (no percepts are revised) Discrete – time is associated with agent’s actions. Single agent.

7 The Wumpus World (WW) represented at the knowledge level (as shown on Fig. 7.2 AIMA)
This is a static world with only one Wumpus, one gold, and three pits. Percepts: Stench, meaning that the Wumpus is either in the same square where the agent is, or in one of the directly adjacent squares; Breeze, meaning that there is a pit in a square directly adjacent to the one where the agent currently is; Glitter, meaning that the gold is in the square where the agent is; Scream, meaning that the Wumpus was killed; Bump, meaning that the agent bumped into a wall. Actions: Go forward to the next square; Turn left 90 degrees and Turn right 90 degrees; Grab the gold; Shoot the Wumpus, an action that can be performed only once; Climb out of the cave, which can happen only in square [1,1]. Goals: Get to the gold, grab it, return back to square [1,1] and get out of the cave. Stay alive.

8 The role of reasoning in the WW: why can't a search-based agent handle it?
Figures 7.3 and 7.4 represent an instance of the WW at the logical level. Initial state [1,1] can be characterized as follows: percept: [None, None, None, None, None]. Following the rules of the game, the agent can conclude that [1,2] and [2,1] are safe, i.e. the following new facts will be added to the KB: not W[1,2], not P[1,2], not W[2,1], not P[2,1]. Possible actions: go forward; turn left and go forward. Assume that the agent has (arbitrarily) decided to go forward, thus moving to Next state [2,1]: percept: [None, Breeze, None, None, None]. Following the rules of the game, the agent can conclude that either [2,2] or [3,1] contains a pit, i.e. the following new fact will be added to the KB: P[2,2] v P[3,1]. Also, not W[2,2] and not W[3,1]. Possible actions: return back to [1,1], because the agent remembers that there is another safe alternative there that it has not explored yet.

9 Next state [1,1]: percept: [None, None, None, None, None]. Possible actions: go to [1,2] or return to [2,1]. Assuming that the agent is smart enough to avoid loops, it will choose this time to go to [1,2]. Next state [1,2]: percept: [Stench, None, None, None, None]. The agent concludes that the wumpus is either in [1,3] or [2,2]. But since no stench was felt in [2,1], the wumpus cannot be in [2,2], and therefore it must be in [1,3] (W[1,3] is added to the KB). Also, the agent concludes that there is no pit in [2,2], and therefore the pit is at [3,1] (not P[2,2], P[3,1] are added to the KB). Possible actions: turn right and go forward. Next state [2,2]: percept: [None, None, None, None, None]. The agent concludes that [2,3] and [3,2] are safe (not P[3,2], not P[2,3], not W[3,2], not W[2,3] are added to the KB). Possible actions: turn left and go forward; go forward. Assume that the agent has (arbitrarily) decided to turn left and go forward. Next state [2,3]: percept: [Stench, Breeze, Glitter, None, None]. The agent concludes that it has reached one of its goals (finding the gold); now it must grab it and safely get back to [1,1], where the exit from the cave is. The actions that follow are trivial, because the agent remembers its path and simply retraces it backwards.

10 Knowledge representation: expressing knowledge in a form understandable by a computer.
Choosing an appropriate language to represent knowledge is the first and most important step in building an intelligent agent. Each language has 2 sides: Syntax: the alphabet and the rules for building sentences (formulas). Semantics: defines the meaning and the truth value of sentences by connecting them to the facts in the outside world. If the syntax and the semantics of a language are precisely defined, we call that language a logic, i.e. Logic = Syntax + Semantics

11 Connection between sentences in the KR language and facts in the outside world
Internal representation: sentences, from which new sentences are entailed. Outside world: facts, from which other facts follow. There must exist an exact correspondence between the sentences entailed by the agent’s logic and the facts that follow from other facts in the outside world. If this requirement does not hold, our agent will be unpredictable and irrational.

12 Entailment and inference
Entailment defines whether sentence A is true with respect to a given KB (written as KB |== A), while inference defines whether sentence A can be derived from the KB (written as KB |-- A). Assume that the agent’s KB contains only true sentences comprising its explicit knowledge about the world. To find out all consequences (“implicit” knowledge) that follow from what the agent already knows, it can “run” an inference procedure. If this inference procedure generates only entailed sentences, then it is sound; if the inference procedure generates all entailed sentences, it is complete. Ideally, we want to provide our agent with a sound and complete inference procedure.

13 Proof theory: defines sound reasoning, i.e. reasoning performed by a sound inference procedure
Assume that A is derivable from a given KB by means of inference procedure i (written as KB |--i A). If i is sound, the derivation process is called a proof of A. Note that we do not require inference procedure i to be complete; in some cases this is impossible (for example, if the KB is infinite). Therefore, we have no guarantee that a proof for A will be found, even if KB |== A.

14 Knowledge representation languages: why do we need something different from programming languages or natural language? Natural language: 1. Expressive enough, but too context-dependent. 2. Too ambiguous, because of the fuzzy semantics of connectives and modalities (or, some, many, etc.). 3. Because NL has a communication as well as a representation role, knowledge sharing is often done without explicit knowledge representation. 4. Too vague for expressing logical reasoning. Programming languages (Java, LISP, etc.): 1. Good only for representing certain and concrete information. For example, “there is a wumpus in some square” cannot be represented. 2. Require a complete description of the state of the computer. For example, “there is a pit in either [2,2] or [3,1]” cannot be represented. 3. It follows from (1) and (2) that they are not expressive enough. 4. Good for describing well-structured worlds where sequences of events can be algorithmically described.

15 To formally express knowledge we need a language which is expressive
and concise, unambiguous and context-independent, and computationally efficient. Among the languages that fulfill these requirements at least partially are: Propositional Logic (PL). It can represent only facts, which are true or false. First-Order Logic (FOL). It can represent objects, facts and relations between objects and facts, which are true or false. Temporal Logic. This is an extension of FOL which takes time into account. Probabilistic Logic. Limits the representation to facts only, but these facts can be uncertain as well as true or false. To express uncertainty, it attaches a degree of belief (0..1) to each fact. Truth Maintenance Logic. Represents facts only, but these can be unknown or uncertain, in addition to true and false. Fuzzy Logic. Represents facts which are not only true or false, but true to some degree (the degree of truth is represented as a degree of belief).

16 Introduction to logic: basic terminology
Interpretation establishes a connection between sentences of the selected KR language and facts from the outside world. Example: Assume that A, B and C are sentences of our logic. If we refer to the “Moon world”, A may have the following interpretation “The moon is green”, B -- “There are people on the moon”, and C -- “It is sunny and nice on the moon, and people there eat a lot of green cheese". Given an interpretation, a sentence can be assigned a truth value. In PL, for example, it can be true or false, where true sentences represent facts that hold in the outside world, and false sentences represent facts that do not hold. Sentences may have different interpretations depending on the meaning given to them.

17 Example: Consider English language
The word “Pope” is to be understood as “microfilm”, and the word “Denver” is to be understood as “pumpkin on the left side of the porch”. In this interpretation, the sentence “The Pope is in Denver” means “the microfilm is in the pumpkin”. Assume that we can enumerate all possible interpretations in all possible worlds that can be given to the sentences of our representation. Then we have the following three types of sentences: Valid sentences (or tautologies). These are true in all interpretations. Example: (A v not A) is always true, even if we refer to the “Moon world” (“There are people on the moon or there are no people on the moon”). Satisfiable sentences. These are true in some interpretations and false in others. Example: “The snow is red and the day is hot” is a satisfiable sentence if this is the case on Mars. Unsatisfiable sentences. These are false in all interpretations. Example: (A & not A).

18 Propositional logic To define any logic, we must address the following three questions: 1. How to make sentences (i.e. define the syntax). 2. How to relate sentences to facts (i.e. define the semantics). 3. How to generate implicit consequences (i.e. define the proof theory). From the syntactic point of view, sentences are finite sequences of primitive symbols. Therefore, we must first define the alphabet of PL. It consists of the following classes of symbols: propositional variables A, B, C, ...; logical constants true and false; parentheses (, ); logical connectives &, v, <=>, =>, not.

19 Well-formed formulas (wff)
Given the alphabet of PL, a wff (or sentence, or proposition) is inductively defined as: a propositional variable; A v B, where A, B are sentences; A & B, where A, B are sentences; A => B, where A, B are sentences; A <=> B, where A, B are sentences; not A, where A is a sentence; true is a sentence; false is a sentence. The following precedence hierarchy is imposed on the logical operators (highest first): not, &, v, =>, <=>. Composite statements are evaluated with respect to this hierarchy, unless parentheses are used to alter it. Example: ((A & B) => C) is equivalent to A & B => C, while (A & (B => C)) is a different sentence.

20 The semantics of PL is defined by specifying the interpretation of wffs and the meaning of logical connectives. If a sentence consists of a single propositional symbol, then it may have any possible interpretation. Depending on the interpretation, the sentence can be either true or false (i.e. it is satisfiable). If a sentence is a logical constant (true or false), then its interpretation is fixed: true has as its interpretation a true fact; false has as its interpretation a false fact. If a sentence is composite (complex), then its meaning is derived from the meaning of its parts as follows (such semantics is called compositional, and this is known as the truth table method):

P  Q  |  not P  |  P & Q  |  P v Q  |  P => Q  |  P <=> Q
F  F  |    T    |    F    |    F    |    T     |    T
F  T  |    T    |    F    |    T    |    T     |    F
T  F  |    F    |    F    |    T    |    F     |    F
T  T  |    F    |    T    |    T    |    T     |    T
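The compositional semantics can be mirrored directly in code, with each connective becoming a function from truth values to a truth value. A sketch (the function names are chosen here for illustration, they are not standard notation):

```python
# Each logical connective of PL as a Python function on truth values.
def NOT(p):        return not p
def AND(p, q):     return p and q
def OR(p, q):      return p or q
def IMPLIES(p, q): return (not p) or q   # P => Q is false only when P=T, Q=F
def IFF(p, q):     return p == q         # P <=> Q is true when P and Q agree

# Enumerating the four interpretations of P and Q reproduces the truth table:
for p in (False, True):
    for q in (False, True):
        print(p, q, NOT(p), AND(p, q), OR(p, q), IMPLIES(p, q), IFF(p, q))
```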

21 Example: using a truth table, define the validity of P & (Q & R) <=> (P & Q) & R
P  Q  R  |  Q & R  |  P & (Q & R)  |  (P & Q) & R  |  P & (Q & R) <=> (P & Q) & R
F  F  F  |    F    |       F       |       F       |       T
F  F  T  |    F    |       F       |       F       |       T
F  T  F  |    F    |       F       |       F       |       T
F  T  T  |    T    |       F       |       F       |       T
T  F  F  |    F    |       F       |       F       |       T
T  F  T  |    F    |       F       |       F       |       T
T  T  F  |    F    |       F       |       F       |       T
T  T  T  |    T    |       T       |       T       |       T

This formula is valid, because it is true in all possible interpretations of its propositional variables. It is known as the “associativity of conjunction” law.
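Validity checks of this kind can be automated by enumerating all interpretations: a formula is valid iff it evaluates to true under every assignment. A sketch (the helper name `is_valid` is an assumption for illustration):

```python
from itertools import product

def is_valid(formula, n_vars):
    """True iff the formula holds under all 2^n_vars interpretations."""
    return all(formula(*vals)
               for vals in product((False, True), repeat=n_vars))

# Associativity of conjunction: P & (Q & R) <=> (P & Q) & R
assoc = lambda p, q, r: (p and (q and r)) == ((p and q) and r)
print(is_valid(assoc, 3))                  # True: valid (true in all 8 rows)
print(is_valid(lambda p, q: p or q, 2))    # False: satisfiable but not valid
```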

22 Model and entailment Any world in which a sentence is true under a particular interpretation is called a model of that sentence under that interpretation. With the notion of a model, we can give the following more precise definition of entailment: Sentence A is entailed by a given KB (KB |== A) if the models of the KB are all models of A, i.e. whenever the KB is true, A is also true. Two sentences A and B are logically equivalent (A ≡ B) iff they are true in the same models, i.e. A |== B AND B |== A. See table 7.11 for standard logical equivalences in PL.

23 Models of complex sentences are defined in terms of the models of their components as follows:
The models of P v Q are all the models of P together with all the models of Q. The models of P & Q are all common models of P and Q. The models of P => Q (≡ not P v Q) are all interpretations that are not models of P, together with all models of Q. The models of P <=> Q (≡ (P => Q) & (Q => P) ≡ (not P v Q) & (not Q v P)) are all interpretations that are models of neither P nor Q (i.e. all common models of not P and not Q), together with all common models of P and Q.
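For two variables these set identities can be checked mechanically; in the sketch below a model is a (P, Q) assignment, and the names `interps` and `models` are chosen here for illustration:

```python
from itertools import product

interps = set(product((False, True), repeat=2))    # all (P, Q) assignments
models  = lambda f: {i for i in interps if f(*i)}  # models of a sentence

P = lambda p, q: p
Q = lambda p, q: q
# Models of P => Q: the non-models of P together with the models of Q.
assert models(lambda p, q: (not p) or q) == (interps - models(P)) | models(Q)
# Models of P & Q: the common models of P and Q.
assert models(lambda p, q: p and q) == models(P) & models(Q)
# Models of P v Q: the models of P together with the models of Q.
assert models(lambda p, q: p or q) == models(P) | models(Q)
print("all identities hold")
```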

24 Wumpus world and the Truth Tables Method (see figure 7.9)
Here is just a fraction of the KB in state [2,1]: KB = {not W[1,1], not P[1,1], not B[1,1], not G[1,1], not W[1,2], not P[1,2], not W[2,1], not P[2,1], B[2,1], not G[2,1], P[2,2] v P[3,1], not W[2,2], not W[3,1], ...} To decide entailment, we can use the “model-checking” algorithm, where: The entire truth table must be explicitly defined over all the propositional variables W, P, G, etc., where each variable is multiplied by the number of “time segments”. A recursive depth-first enumeration of all models is “run”, and for each model the given formula is checked. Note that a formula is entailed by the KB iff it is true in every model of the KB. This algorithm is sound, because it implements directly the notion of entailment, and complete, because it works on any KB and any formula. It always terminates, because there are only finitely many models of the KB. The algorithm is exponential in terms of time and linear in terms of memory (the latter is due to the depth-first strategy; otherwise it would be exponential in terms of memory as well).
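The model-checking idea can be sketched on a tiny fragment of the WW knowledge base. The symbol names and the biconditional “breeze iff adjacent pit” rule used below are illustrative assumptions, not the full KB:

```python
from itertools import product

def entails(kb, query, symbols):
    """KB |== query iff the query is true in every model of the KB."""
    for values in product((False, True), repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False   # a model of the KB that falsifies the query
    return True

# In state [1,1] there is no breeze, and B[1,1] <=> P[1,2] v P[2,1];
# the KB therefore entails that [1,2] contains no pit.
symbols = ["B11", "P12", "P21"]
kb = lambda m: (not m["B11"]) and (m["B11"] == (m["P12"] or m["P21"]))
print(entails(kb, lambda m: not m["P12"], symbols))   # True
print(entails(kb, lambda m: m["P21"], symbols))       # False
```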

25 Wang’s algorithm: a more efficient way to prove the validity of a formula
The truth tables method, also referred to as “proof by perfect induction”, works from variables to sub-formulas. Wang’s algorithm works in the opposite direction. It uses gradual simplification plus a divide-and-conquer strategy to simplify goals until they are either reduced to axioms or to obviously unprovable formulas. Let the KB be represented as a “sequent” of the form: Premise1, Premise2, ..., PremiseN ===>s Conclusion1, Conclusion2, ..., ConclusionM Wang’s algorithm transforms such sequents by means of seven rules, two of which are termination rules. The other five rules either eliminate a connective (thus shortening the sequent) or eliminate an implication.

26 Example Fact1: If Bob failed to enroll in CS462, then Bob will not graduate this Spring. Fact2: If Bob will not graduate this Spring, then Bob will miss a great job. Fact3: Bob will not miss a great job. Question: Did Bob fail to enroll in CS462? First, we must decide on the representation. Assume that: P means “Bob failed to enroll in CS462” , Q means “Bob will not graduate this Spring” , and R means “Bob will miss a great job”. The problem is now described by means of the following PL formulas: Fact 1: P --> Q Fact 2: Q --> R Fact 3: not R Question: P or not P Here fact1, fact2, and fact3 are called premises.

27 The question answered by using a truth table
Propositional variables | Premises | Possible conclusions
P  Q  R  |  P --> Q  |  Q --> R  |  not R  |  P  |  not P
F  F  F  |     T     |     T     |    T    |  F  |   T
F  F  T  |     T     |     T     |    F    |  F  |   T
F  T  F  |     T     |     F     |    T    |  F  |   T
F  T  T  |     T     |     T     |    F    |  F  |   T
T  F  F  |     F     |     T     |    T    |  T  |   F
T  F  T  |     F     |     T     |    F    |  T  |   F
T  T  F  |     T     |     F     |    T    |  T  |   F
T  T  T  |     T     |     T     |    F    |  T  |   F

The only interpretation in which all three premises are true is P = F, Q = F, R = F, and in that interpretation not P holds. Therefore, not P is entailed: Bob did not fail to enroll in CS462.

28 Wang’s algorithm: transformation rules
Wang’s algorithm takes a KB represented as a set of sequents and applies rules R1 through R5 to transform the sequents into simpler ones. Rules R6 and R7 are termination rules which check whether the proof has been found. Bob’s example. The KB initially contains the following sequent: P --> Q, Q --> R, not R ===>s not P Here P --> Q, Q --> R, not R, and not P are referred to as top-level formulas. Transformation rules: R1 (“not on the left / not on the right” rule). If one of the top-level formulas of the sequent has the form not X, then this rule says to drop the negation and move X to the other side of the sequent arrow. If not X is on the left of ===>s, the transformation is called “not on the left”; otherwise, it is called “not on the right”.

29 R2 (“& on the left / v on the right” rule)
R2 (“& on the left / v on the right” rule). If a top-level formula on the left of the arrow has the form X & Y, or a top-level formula on the right of the arrow has the form X v Y, then the connective (&, v) can be replaced by a comma. R3 (“v on the left” rule). If a top-level formula on the left has the form X v Y, then replace the sequent with two new sequents: one in which X v Y is replaced by X, and one in which it is replaced by Y. R4 (“& on the right” rule). If a top-level formula on the right has the form X & Y, then replace the sequent with two new sequents: one in which X & Y is replaced by X, and one in which it is replaced by Y. R5 (“implication elimination” rule). Any formula of the form X --> Y is replaced by not X v Y. R6 (“valid sequent” rule). If a top-level formula X occurs on both sides of the sequent, then the sequent is considered proved. Such a sequent is called an axiom. If the original sequent has been split into several sequents, then all of these sequents must be proved in order for the original sequent to be proved. R7 (“invalid sequent” rule). If all of the formulas in a given sequent are propositional symbols (i.e. no further transformations are possible) and the sequent is not an axiom, then the sequent is invalid. If an invalid sequent is found, the algorithm terminates, and the conclusion of the original sequent is proved not to follow logically from the premises.
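Rules R1 through R7 are mechanical enough to implement directly. Below is a compact sketch, not the original formulation: formulas are encoded as nested tuples (an atom is a string, plus ('not', X), ('and', X, Y), ('or', X, Y), ('imp', X, Y)), and `prove(left, right)` decides the sequent left ===>s right.

```python
def prove(left, right):
    """True iff the sequent  left ===>s right  is provable (rules R1-R7)."""
    tag = lambda f: f[0] if isinstance(f, tuple) else None
    # R5: rewrite X --> Y as not X v Y, on either side of the arrow.
    for side, other, flipped in ((left, right, False), (right, left, True)):
        for i, f in enumerate(side):
            if tag(f) == 'imp':
                new = side[:i] + [('or', ('not', f[1]), f[2])] + side[i+1:]
                return prove(other, new) if flipped else prove(new, other)
    # R1: drop a top-level negation and move the formula across the arrow.
    for i, f in enumerate(left):
        if tag(f) == 'not':
            return prove(left[:i] + left[i+1:], right + [f[1]])
    for i, f in enumerate(right):
        if tag(f) == 'not':
            return prove(left + [f[1]], right[:i] + right[i+1:])
    # R2: & on the left / v on the right becomes a comma.
    for i, f in enumerate(left):
        if tag(f) == 'and':
            return prove(left[:i] + left[i+1:] + [f[1], f[2]], right)
    for i, f in enumerate(right):
        if tag(f) == 'or':
            return prove(left, right[:i] + right[i+1:] + [f[1], f[2]])
    # R3 / R4: v on the left, & on the right -- split; both halves must hold.
    for i, f in enumerate(left):
        if tag(f) == 'or':
            rest = left[:i] + left[i+1:]
            return prove(rest + [f[1]], right) and prove(rest + [f[2]], right)
    for i, f in enumerate(right):
        if tag(f) == 'and':
            rest = right[:i] + right[i+1:]
            return prove(left, rest + [f[1]]) and prove(left, rest + [f[2]])
    # R6 / R7: only atoms remain -- axiom iff the two sides share a symbol.
    return any(f in right for f in left)

# Bob's example: P --> Q, Q --> R, not R ===>s not P
print(prove([('imp', 'P', 'Q'), ('imp', 'Q', 'R'), ('not', 'R')],
            [('not', 'P')]))   # True
```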

30 Bob’s example continued
Sequent: P --> Q, Q --> R, not R ===>s not P Apply R5 twice, to P --> Q and to Q --> R Sequent: (not P v Q), (not Q v R), not R ===>s not P Apply R1 to (not R) Sequent: (not P v Q), (not Q v R) ===>s not P, R Apply R3 to (not P v Q), resulting in splitting the current sequent into two new sequents. Sequents: not P, (not Q v R) ===>s not P, R Q, (not Q v R) ===>s not P, R Note that the first sequent is an axiom according to R6 (not P is on both sides of the sequent arrow). To prove that not P follows from the premises, however, we still need to prove the second sequent.

31 Bob’s example (cont.) Sequent: Q, (not Q v R) ===>s not P, R
Apply R3 to (not Q v R) Sequents: Q, not Q ===>s not P, R Q, R ===>s not P, R Here the second sequent is an axiom according to R6. Apply R1 to (not Q) to the first sequent Sequent: Q ===>s not P, R, Q According to R6, the resulting sequent is an axiom. All sequents resulting from different splits of the original sequent were proved, therefore not P follows from the premises P --> Q, Q --> R, not R .
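Independently of Wang's algorithm, the conclusion can be double-checked by brute force over the eight interpretations of P, Q, R (a sketch; the lambda encoding of the premises is an assumption made here):

```python
from itertools import product

# Premises: P --> Q, Q --> R, not R   (implication encoded as "not X or Y")
premises = lambda p, q, r: ((not p) or q) and ((not q) or r) and (not r)

# not P is entailed iff it holds in every model of the premises.
models = [(p, q, r) for p, q, r in product((False, True), repeat=3)
          if premises(p, q, r)]
print(models)                             # [(False, False, False)]
print(all(not p for p, q, r in models))   # True: not P is entailed
```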

32 Characterization of Wang’s algorithm
Wang’s algorithm is complete, i.e. it will always prove a formula if it is valid. The length of the proof depends on the order in which the rules are applied. Wang’s algorithm takes exponential time in the worst case (the underlying problem, propositional validity, is co-NP-complete), but on average its performance is better than that of the truth tables method. Wang’s algorithm is sound, i.e. it will prove only formulas entailed by the KB. Wang’s algorithm was originally published in Wang, H., “Toward mechanical mathematics”, IBM Journal of Research and Development, vol. 4, 1960.

33 The Wumpus example continued
The KB describing the WW contains the following types of sentences: Sentences representing the agent’s percepts. These are propositions of the form: not S [1,1] : There is no stench in square [1,1] B [2,1] : There is a breeze in square [2,1] not W [1,1] : There is no wumpus in square [1,1], etc. Sentences representing the background knowledge (i.e. the rules of the game). These are formulas of the form: R1: not S [1,1] --> not W [1,1] & not W [1,2] & not W [2,1] R16: not S [4,4] --> not W [4,4] & not W [3,4] & not W [4,3] There are 16 rules of this type (one for each square). Rj: S [1,2] --> W [1,3] v W[1,2] v W [2,2] v W [1,1] There are 16 rules of this type.

34 WW knowledge base (cont.)
Rk: not B [1,1] --> not P [2,1] & not P [1,2] 16 rules of this type Rn: B [1,1] --> P [2,1] v P [1,2] Rp: not G [1,1] --> not G [1,1] Rs: G[1,1] --> G[1,1] Also, 32 rules are needed to handle the percept “Bump”, and 32 more to handle the percept “Scream”. Total number of rules dealing with percepts: 160.

35 WW knowledge base: limitations of PL
Rules describing the agent’s actions and rules representing “common-sense knowledge”, such as “Don’t go forward, if the wumpus is in front of you”. This rule alone is represented by 64 sentences (16 squares * 4 orientations). If we want to incorporate time into the representation, then each group of 64 sentences must be provided for each time segment. Assume that the problem is expected to take 100 time segments. Then 6400 sentences are needed to represent only the rule “Don’t go forward, if the wumpus is in front of you”. Note: there are still many other rules of this type, which will require thousands of sentences to be formally described. The third group of sentences in the agent’s KB are the entailed sentences. Note: using truth tables for identifying the entailed sentences is infeasible not only because of the large number of propositional variables, but also because of the huge number of formulas that will eventually have to be tested for validity.

36 The WW: solving one fragment of the agent’s problem by means of Wang’s algorithm.
Consider the situation in Figure 7.4 and assume that we want to prove W [1,3] using Wang’s algorithm. Current sequent: not S [1,1], not B [1,1], S [1,2], not B [1,2], not S [2,1], B [2,1], not S [1,1] --> not W [1,1] & not W [1,2] & not W [2,1], not S [2,1] --> not W [1,1] & not W [2,1] & not W [2,2] & not W [3,1], S [1,2] --> W [1,3] v W [1,2] v W [2,2] v W [1,1] ===>s W [1,3] Applying R1 to all negated single propositions on the left results in S [1,2], B [2,1], not S [1,1] --> not W [1,1] & not W [1,2] & not W [2,1], S [1,2] --> W [1,3] v W [1,2] v W [2,2] v W [1,1] ===>s W [1,3], S [1,1], B [1,1], B [1,2], S [2,1]

37 Applying R5 to the first two implications on the left in combination with R2, and only R5 to the third implication results in Current sequent: S [1,2], B [2,1], S [1,1] v not W [1,1], S [1,1] v not W [1,2], S [1,1] v not W [2,1], S [2,1] v not W [1,1], S [2,1] v not W [2,1], S [2,1] v not W [2,2], S [2,1] v not W [3,1], not S [1,2] v W [1,3] v W [1,2] v W [2,2] v W [1,1] ===>s W [1,3], S [1,1], B [1,1], B [1,2], S [2,1] Applying R3 to the last disjunct will result in Current sequents: S [2,1] v not W [3,1], not S [1,2] ===>s W [1,3], S [1,1], B [1,1], B [1,2], S [2,1] Moving not S [1,2] on the right by using R1 will prove that this is an axiom. S [2,1] v not W [3,1], W [1,3] ===>s W [1,3], S [1,1], B [1,1], B [1,2], S [2,1] proves that this is an axiom

38 S [1,2], B [2,1], S [1,1] v not W [1,1], S [1,1] v not W [1,2], S [1,1] v not W [2,1], S [2,1] v not W [1,1], S [2,1] v not W [2,1], S [2,1] v not W [2,2], S [2,1] v not W [3,1], W [1,2] ===>s W [1,3], S [1,1], B [1,1], B [1,2], S [2,1] Applying R3 to the underlined disjunct will create two new sequents, the first of which is an axiom because S [1,1] will be on both sides of the arrow, and the second after applying R1 (moving not W [1,2] to the right) will be proven because W [1,2] will be on both sides. S [2,1] v not W [3,1], W [2,2] ===>s W [1,3], S [1,1], B [1,1], B [1,2], S [2,1] Same as above to prove that the two resulting sequents are axioms. S [2,1] v not W [3,1], W [1,1] ===>s W [1,3], S [1,1], B [1,1], B [1,2], S [2,1] All sequents proven, therefore W [1,3] follows from the initial premises.

