Chapter 3 Heuristic Search Techniques


Chapter 3 Heuristic Search Techniques
Artificial Intelligence. Dr. Wiphada Wettayaprasit, Department of Computer Science, Faculty of Science, Prince of Songkla University

Production System
Working memory
Production set = Rules (Figure 5.3)
Trace (Figure 5.4)
Data driven (Figure 5.9)
Goal driven (Figure 5.10)
Iteration # / Conflict sets / Rule fired
Artificial Intelligence Lecture 7-12 Page 2

And-Or Graph
Data-driven and goal-driven examples (graph with nodes a, b, c, d, e, f, g)

Generate-and-Test
Generate all possible solutions: DFS + backtracking, or generate randomly
Test function → yes/no
Algorithm page 64

Hill Climbing
Similar to generate-and-test
Test function + heuristic function
Stop when the goal state is met, or when there is no alternative state to move to

Simple Hill Climbing
Incorporates task-specific knowledge into the control process
Asks: is one state better than another?
Moves to the first successor state that is better than the current state
Algorithm page 66
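The loop described above (algorithm page 66) can be sketched as follows; `successors` and `value` are assumed problem-specific callbacks, not names from the text:

```python
def simple_hill_climbing(start, successors, value):
    """Move to the FIRST successor found that is better than the
    current state; stop when no successor improves on it."""
    current = start
    while True:
        for neighbor in successors(current):
            if value(neighbor) > value(current):
                current = neighbor   # take the first improvement, not the best
                break
        else:
            return current           # no better neighbor: stop

# toy usage: climb toward x = 0 on the surface value(x) = -x^2
best = simple_hill_climbing(7, lambda x: [x - 1, x + 1], lambda x: -x * x)
```

Note that the first improving successor is taken immediately, which is exactly what distinguishes this from the steepest-ascent variant on the next slide.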

Steepest-Ascent Hill Climbing
Consider all moves from the current state
Select the best one as the next state
Algorithm page 67
Searching time?
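The steepest-ascent variant differs only in examining every move before committing; a sketch under the same assumed callbacks as before:

```python
def steepest_ascent(start, successors, value):
    """Examine ALL moves from the current state and take the best one,
    stopping when no move improves on the current state."""
    current = start
    while True:
        neighbors = list(successors(current))
        if not neighbors:
            return current
        best_next = max(neighbors, key=value)   # evaluate every move
        if value(best_next) <= value(current):
            return current                      # no uphill move is left
        current = best_next

peak = steepest_ascent(-5, lambda x: [x - 2, x - 1, x + 1, x + 2],
                       lambda x: -abs(x))
```

The extra searching time per step comes from evaluating every successor instead of stopping at the first improvement.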

Hill Climbing Problems: no solution found
Local maximum: a state that is better than all its neighbors but not better than some other states farther away. Remedy: backtracking.
Plateau: a flat area of the search space in which a whole set of neighboring states have the same value, so it is not possible to determine the best direction by local comparison. Remedy: make a big jump.

Hill Climbing Problems
Ridge: an area of the search space that is higher than the surrounding areas and that itself has a slope; it cannot be climbed with a single move. Remedy: fire more rules so as to move in several directions at once.

Hill Climbing Characteristics
A local method: it decides what to do next by looking only at the immediate consequences of its choice (rather than by exhaustively exploring all of the consequences). It looks only one move ahead.

Local heuristic function
Blocks world, figure 3.1 p. 69. Local heuristic function:
1. Add one point for every block that is resting on the thing it is supposed to be resting on.
2. Subtract one point for every block that is sitting on the wrong thing.
Initial state score = 4 (6 - 2): C, D, E, F, G, H correct = +6; A, B wrong = -2
Goal state score = 8: A, B, C, D, E, F, G, H all correct
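The local scoring rule can be written down directly. Figure 3.1 is not reproduced in this transcript, so the concrete stack layout below is an assumption reconstructed from the scores quoted above:

```python
def local_score(on, goal_on):
    """+1 for every block resting on what it should rest on,
    -1 for every block sitting on the wrong thing."""
    return sum(1 if on[b] == goal_on[b] else -1 for b in on)

# goal: a single stack with A on the table and H on top
goal = {'A': 'table', 'B': 'A', 'C': 'B', 'D': 'C',
        'E': 'D', 'F': 'E', 'G': 'F', 'H': 'G'}
# assumed initial state: B on the table, C..H stacked on B, A on top of H
init = {'B': 'table', 'C': 'B', 'D': 'C', 'E': 'D',
        'F': 'E', 'G': 'F', 'H': 'G', 'A': 'H'}

print(local_score(init, goal))   # 4: C..H correct (+6), A and B wrong (-2)
print(local_score(goal, goal))   # 8: all eight blocks correct
```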

Local heuristic function
Current state: from figure 3.1, pick up A and place it on the table; B, C, D, E, F, G, H stay stacked as before. Score = 6.
Blocks world figure 3.2 p. 69: the next states in all 3 cases score = 4.
Stop: no next state scores better than the current state's 6.
Local minimum problem: the search is stuck at the local level and cannot see beyond the basin.

Global heuristic function
Blocks world figure 3.1 p. 69. Global heuristic function:
For each block that has a correct support structure, add one point for every block in the support structure (count all of them).
For each block that has an incorrect support structure, subtract one point for every block in the existing support structure.

Global heuristic function
initial state score = -28 C = -1, D = -2, E = -3, F = -4, G = -5, H = -6, A = -7 Goal state score = 28 B = 1, C = 2, D = 3, E = 4, F = 5, G = 6, H = 7 The End Artificial Intelligence Lecture 7-12 Page 14
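The global rule counts the whole support chain beneath each block. As before, the concrete stacks are an assumption reconstructed from the scores above, since the figure itself is not in the transcript:

```python
def support(on, b):
    """Chain of blocks beneath b, top-down, stopping at the table."""
    chain = []
    while on[b] != 'table':
        b = on[b]
        chain.append(b)
    return chain

def global_score(on, goal_on):
    """+1 per block in a correct support structure,
    -1 per block in an incorrect (existing) support structure."""
    total = 0
    for b in on:
        chain = support(on, b)
        if chain == support(goal_on, b):
            total += len(chain)     # correct support structure
        else:
            total -= len(chain)     # incorrect: penalize the existing support
    return total

goal = {'A': 'table', 'B': 'A', 'C': 'B', 'D': 'C',
        'E': 'D', 'F': 'E', 'G': 'F', 'H': 'G'}
init = {'B': 'table', 'C': 'B', 'D': 'C', 'E': 'D',
        'F': 'E', 'G': 'F', 'H': 'G', 'A': 'H'}

print(global_score(init, goal))   # -28, as computed above
print(global_score(goal, goal))   # 28
```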

Global heuristic function
Current state: from figure 3.1, pick up A and place it on the table; B, C, D, E, F, G, H stay stacked as before. Score = -21 (C = -1, D = -2, E = -3, F = -4, G = -5, H = -6).
Blocks world figure 3.2 p. 69. Next state: move to case (c).
Case (a) = -28, same as the initial state
Case (b) = -16 (C = -1, D = -2, E = -3, F = -4, G = -5, H = -1)
Case (c) = -15 (C = -1, D = -2, E = -3, F = -4, G = -5)
No local minimum problem: it works.

New heuristic function
1. Incorrect structures are bad and should be taken apart: subtract more points for them.
2. Correct structures are good and should be built up: add more points for them.
What we must consider: how do we find a perfect heuristic function? Entering a city we have never visited before, how do we avoid dead ends?

Simulated Annealing: a hill-climbing variation
At the beginning of the process some downhill moves may be made, to do enough exploration of the whole space early on that the final solution is relatively insensitive to the starting state. This guards against the local maximum, plateau, and ridge problems.
Uses an objective function (not a heuristic function) and minimizes its value.

Simulated Annealing: the annealing schedule
If we cool very quickly, we get a high-energy result and may land in a local minimum.
If we cool very slowly, we get a good result but waste a lot of time: at low temperatures much time may be wasted after the final structure has already formed.
Cool at a moderate rate, chosen empirically.

Simulated Annealing
Annealing: metals are melted, then cooled down to obtain a solid structure.
Objective function: energy level; try to reach low energy.
p = e^(-ΔE/kT)
p: probability of accepting the move
T: temperature, set by the annealing schedule
k: Boltzmann's constant, describing the correspondence between the units of temperature and the units of energy
ΔE = (value of current state) - (value of new state), the positive change in the energy
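Read as a minimization of energy, the acceptance rule can be sketched like this. Here `delta_e` is taken as the increase in energy caused by the move (positive = worse), and Boltzmann's constant is folded in as a parameter; both are conventions chosen for the sketch, not fixed by the slide:

```python
import math
import random

def accept_move(delta_e, temperature, k=1.0):
    """Accept with probability p = e^(-delta_e / kT);
    moves that do not raise the energy are always accepted."""
    if delta_e <= 0:
        return True
    p = math.exp(-delta_e / (k * temperature))
    return random.random() < p
```

At high T, e^(-ΔE/kT) is close to 1, so even large uphill moves are usually accepted; as T falls toward 0, the probability of any uphill move vanishes.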

Simulated Annealing
The probability of a large uphill move is lower than that of a small uphill move.
The probability of an uphill move decreases as the temperature decreases.
At the beginning of the annealing, large upward moves may occur.
Downhill moves are allowed at any time.
Eventually only relatively small upward moves are allowed, until the process converges to a local minimum configuration.

Simulated Annealing
p = e^(-ΔE/kT)

Simulated Annealing: algorithm
Suitable for problems with a very large number of moves.
Principles:
1. What is the initial temperature?
2. What are the criteria for decreasing T?
3. By how much should T be decreased?
4. When should we quit?
Observation: as T approaches 0, simulated annealing becomes identical to simple hill climbing.

Simulated Annealing: algorithm
Differences between the simulated annealing algorithm (p. 71) and hill climbing:
1. The annealing schedule must be maintained.
2. Moves to worse states may be accepted.
3. The best state found so far is maintained, so if the final state is worse than an earlier state, that earlier state is still available.
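Putting those three differences together, a minimal sketch of the loop, assuming a geometric cooling schedule (the text leaves the schedule open) and problem-specific `neighbor`/`energy` callbacks:

```python
import math
import random

def simulated_annealing(start, neighbor, energy,
                        t0=10.0, cooling=0.95, t_min=1e-3):
    """Anneal from t0 down to t_min, sometimes accepting worse states,
    while remembering the best state found so far."""
    current = best = start
    t = t0
    while t > t_min:
        candidate = neighbor(current)
        delta = energy(candidate) - energy(current)   # > 0 means worse
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate       # difference 2: worse states accepted
        if energy(current) < energy(best):
            best = current            # difference 3: keep the best so far
        t *= cooling                  # difference 1: the annealing schedule
    return best

random.seed(0)
result = simulated_annealing(8.0, lambda x: x + random.uniform(-1, 1),
                             lambda x: x * x)
```

Because the best state is retained, the returned state is never worse (in energy) than the starting state, even if the walk ends somewhere poor.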

Best First Search
OR graph: search in the graph.
Heuristic function: choose the node with the minimum value (page 74).

Best First Search
OR graph: each of its branches represents an alternative problem-solving path. We assume that we can evaluate multiple paths to the same node independently of each other, and we want to find a single path to the goal.
Like DFS: follow the single most promising path.
Like BFS: when the current path stops looking promising, switch to another branch with a better value.
The old branch is not forgotten, and a solution can be found without all branches having to be expanded.

Best First Search f’ = g + h’ g: cost from initial state to current state h’: estimate cost current state to goal state f’: estimate cost initial state to goal state Open node : most promising node Close node : keep in memory, already discover node. Artificial Intelligence Lecture 7-12 Page 26

Best First Search Algorithm
page 75-76

A* algorithm: f' = g + h'
h': count the nodes as we step down the path; each level down adds 1 point, except the root node.
Underestimate: we expand downward until f'(F) = 6 > f'(C) = 5, then we have to go back to C. (In the figure, one level down, f'(E) = f'(C) = 5.)

A* algorithm: f' = g + h'
Overestimate: suppose the solution lies under D; we will never expand D because f'(D) = 6 > f'(G) = 4.

A* Algorithm: page 76


Agenda
Agenda: a list of tasks a system could perform, each with a list of reasons (justifications) and a rating representing the overall weight of evidence suggesting that the task would be useful.
When a new task is created, insert it into the agenda at its proper place; when the evidence for a task changes, re-compute its rating and move it to the correct place in the list.
Alternative: simply put the task at the end of the agenda and find the better location later, which needs a lot more time to compute a new rating.
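A sketch of such an agenda, keeping entries sorted by rating; the class and method names are mine, not from the text:

```python
import bisect

class Agenda:
    """Tasks kept sorted so the highest-rated (most useful) one comes first."""
    def __init__(self):
        self._entries = []   # tuples (-rating, task, reasons), kept sorted

    def insert(self, task, reasons, rating):
        # negated rating so bisect keeps the best task at index 0
        bisect.insort(self._entries, (-rating, task, tuple(reasons)))

    def rerate(self, task, reasons, new_rating):
        # re-computing a rating = remove the task and re-insert it in place
        self._entries = [e for e in self._entries if e[1] != task]
        self.insert(task, reasons, new_rating)

    def next_task(self):
        return self._entries[0][1]   # the most promising task

agenda = Agenda()
agenda.insert("pursue topic: China", ["person mentioned it"], 7)
agenda.insert("pursue topic: Italy", ["person mentioned it"], 3)
agenda.rerate("pursue topic: China", ["conversation moved on"], 1)
print(agenda.next_task())   # pursue topic: Italy
```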

Not acceptable dialog
An agenda is not good when interacting with people (pages 81-82). In the example dialog, person and computer turns jump between China and Italy: something reasonable now may not continue to be so after the conversation has progressed for a while.

Agenda

And-Or Graph / Tree
Problems can be solved by decomposing them into a set of smaller problems.
AND arcs are indicated with a line connecting all of their components.

And-Or Graph / Tree
Each arc to a successor has a cost of 1.
Choose the lowest value: f'(B) = 5.
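The cost computation the figure illustrates can be sketched as below. The toy tree and h' values are invented for illustration, with every connecting arc costing 1 as in the text:

```python
def and_or_value(node, h, children):
    """f' of an AND-OR node: leaves use h'; an OR choice takes the
    cheapest alternative; an AND arc sums all of its components,
    each reached over an arc of cost 1.

    `children(n)` is assumed to return a list of alternatives, each a
    list of nodes joined by one AND arc (empty list for a leaf)."""
    alternatives = children(node)
    if not alternatives:
        return h[node]
    return min(sum(1 + and_or_value(c, h, children) for c in group)
               for group in alternatives)

# toy graph: A is solved either by B alone, or by C AND D together
h = {'B': 5, 'C': 3, 'D': 4}
kids = {'A': [['B'], ['C', 'D']], 'B': [], 'C': [], 'D': []}
print(and_or_value('A', h, kids.get))   # 6: via B (1 + 5) beats C AND D (4 + 5)
```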

And-Or Graph / Tree
Futility: a threshold value against which the result is compared.
If the estimated cost of a solution exceeds Futility, then abandon the search.

Problem Reduction

Problem Reduction
E's value comes from J, not from C.

Problem Reduction
This algorithm cannot find a solution because of C.

Problem Reduction : AO* Algorithm

Problem Reduction: AO* Algorithm
Uses a single structure, GRAPH, and does not store g.
The algorithm inserts all ancestor nodes into a set.
The path through C will always be better than the path through B.

Problem Reduction: AO* Algorithm
Change G's value from 5 to 10: without backward propagation the change goes unnoticed, so backward propagation is needed.