1
Heuristic Search (Russell and Norvig: Chapter 4)
2
Current Bag of Tricks: Search algorithm
3
Current Bag of Tricks: Search algorithm, Modified search algorithm
4
Current Bag of Tricks: Search algorithm, Modified search algorithm, Avoiding repeated states (checking for repeated states can itself take extra time when the search space is very large)
5
Best-First Search Define a function f: node N → real number f(N), called the evaluation function, whose value depends on the contents of the state associated with N. Order the nodes in the fringe in increasing order of f(N). How the fringe is kept ordered matters; a heap is one option (why? — see the sketch below). f(N) can be any function you want, but will it work?
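As a minimal illustration of the ordering question above (not code from the slides), the fringe can be kept in a binary heap so that the node with the smallest f(N) is extracted in O(log n) time instead of by a linear scan. The names successors, goal_test, and f below are assumptions made for this sketch.

```python
import heapq
import itertools

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: repeatedly expand the fringe node with the
    smallest evaluation f(N). Returns a path of states, or None."""
    counter = itertools.count()        # tie-breaker so states are never compared directly
    fringe = [(f(start), next(counter), start, [start])]
    expanded = set()
    while fringe:
        _, _, state, path = heapq.heappop(fringe)   # O(log n): the reason a heap helps
        if goal_test(state):
            return path
        if state in expanded:                       # optional: avoid repeated states
            continue
        expanded.add(state)
        for s in successors(state):
            heapq.heappush(fringe, (f(s), next(counter), s, path + [s]))
    return None
```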
6
Example of Evaluation Function
f(N) = (sum of distances of each tile to its goal) + 3 × (sum of score functions for each tile), where the score function for a non-central tile is 2 if it is not followed by the correct tile in clockwise order and 0 otherwise. For the state N shown in the figure (goal on the right), f(N) = … + 3×(…) = 49.
7
Heuristic Function A function h(N) that estimates the cost of the cheapest path from node N to a goal node. Example (8-puzzle): h(N) = number of misplaced tiles = 6 for the state N shown in the figure (goal on the right).
8
Heuristic Function A function h(N) that estimates the cost of the cheapest path from node N to a goal node. Example (8-puzzle): h(N) = sum of the distances of every tile to its goal position = 13 for the state N shown in the figure (goal on the right). Both 8-puzzle heuristics are sketched in code below.
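A possible encoding (an assumption made for this sketch: states and the goal as flat tuples of 9 numbers, 0 for the blank) makes both heuristics a few lines of Python:

```python
def misplaced_tiles(state, goal):
    """h1: number of tiles (blank excluded) that are not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal, width=3):
    """h2: sum over tiles of the row distance plus the column distance to the goal."""
    goal_pos = {tile: i for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal_pos[tile]
        total += abs(i // width - j // width) + abs(i % width - j % width)
    return total
```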
9
Robot Navigation h(N) = straight-line distance to the goal = [(xg – xN)² + (yg – yN)²]^1/2
10
Examples of Evaluation function
Let g(N) be the cost of the best path found so far between the initial node and N. Two choices of evaluation function: f(N) = h(N) (greedy best-first search) and f(N) = g(N) + h(N).
11
8-Puzzle, f(N) = h(N) = number of misplaced tiles. [Figure: greedy best-first search tree with the h-value of each node.]
12
8-Puzzle, f(N) = g(N) + h(N) with h(N) = number of misplaced tiles. [Figure: search tree with each node labeled g(N)+h(N).]
13
8-Puzzle, f(N) = h(N) = sum of distances of tiles to their goal positions. [Figure: greedy best-first search tree with the h-value of each node.]
14
Can we Prove Anything? If the state space is finite and we avoid repeated states, the search is complete, but in general not optimal. If the state space is finite and we do not avoid repeated states, the search is in general not complete. If the state space is infinite, the search is in general not complete.
15
Admissible heuristic Let h*(N) be the cost of the optimal path from N to a goal node. Heuristic h(N) is admissible if h(N) ≤ h*(N). An admissible heuristic is always optimistic.
16
8-Puzzle (state N and goal as in the figure): h1(N) = number of misplaced tiles = 6 is admissible; h2(N) = sum of distances of each tile to its goal position is admissible; h3(N) = (sum of distances of each tile to goal) × (sum of score functions for each tile) is not admissible.
17
Robot navigation Cost of one horizontal/vertical step = 1; cost of one diagonal step = √2. h(N) = straight-line distance to the goal is admissible.
18
A* Search Evaluation function: f(N) = g(N) + h(N), where g(N) is the cost of the best path found so far to N, h(N) is an admissible heuristic, and every arc cost satisfies c(N,N’) > 0. Then best-first search with the “modified search algorithm” is called A* search (a minimal sketch follows).
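A minimal A* sketch under the assumptions above (admissible h, positive arc costs). The interface — successors(state) yielding (next_state, cost) pairs — is an illustrative assumption, not the slides' "modified search algorithm" itself.

```python
import heapq
import itertools

def a_star(start, goal_test, successors, h):
    """A*: best-first search ordered by f(N) = g(N) + h(N).
    Returns (path, cost) for the first goal popped from the fringe, or None."""
    counter = itertools.count()
    fringe = [(h(start), next(counter), 0, start, [start])]
    best_g = {start: 0}
    while fringe:
        f, _, g, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return path, g
        if g > best_g.get(state, float("inf")):
            continue                       # a cheaper path to this state is already recorded
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(fringe, (g2 + h(nxt), next(counter), g2, nxt, path + [nxt]))
    return None
```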
19
Completeness & Optimality of A*
Claim 1: If there is a path from the initial node to a goal node, A* (with no removal of repeated states) terminates by finding the best path; hence it is complete and optimal.
20
8-Puzzle, f(N) = g(N) + h(N) with h(N) = number of misplaced tiles. [Figure: A* search tree with each node labeled g(N)+h(N).]
21
Robot Navigation
22
Robot Navigation, f(N) = h(N) with h(N) = Manhattan distance to the goal. [Figure: grid with the h-values of a few cells.]
23
Robot Navigation, f(N) = h(N) with h(N) = Manhattan distance to the goal. [Figure: grid annotated with the h-values of the cells expanded by greedy best-first search.]
24
Robot Navigation, f(N) = g(N) + h(N) with h(N) = Manhattan distance to the goal. [Figure: grid annotated with the g+h values of the cells expanded by A*.]
25
Robot navigation, f(N) = g(N) + h(N) with h(N) = straight-line distance from N to the goal. Cost of one horizontal/vertical step = 1; cost of one diagonal step = √2.
26
About Repeated States [Figure: nodes N and N1 on the fringe with states S and S1; N2 is a successor of N1 whose state is also S.] With f(N) = g(N) + h(N) and f(N1) = g(N1) + h(N1), it can happen that g(N1) < g(N) and h(N) < h(N1), hence f(N) < f(N1): N is expanded before N1 even though the path through N1 leads to a node N2 with the same state as N and g(N2) < g(N), f(N2) = g(N2) + h(N). So A* may expand a node before the optimal path to its state has been found.
27
Consistent Heuristic The admissible heuristic h is consistent (or satisfies the monotone restriction) if for every node N and every successor N’ of N: h(N) ≤ c(N,N’) + h(N’) (triangle inequality). A small checking helper is sketched below.
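For a small explicit graph one can sanity-check the triangle inequality directly. A toy helper (the names and the edge-list layout are assumptions); it cannot, of course, prove consistency for an infinite state space:

```python
def is_consistent(h, edges):
    """Check h(N) <= c(N, N') + h(N') for every (N, N', cost) edge supplied."""
    return all(h(n) <= cost + h(n2) for n, n2, cost in edges)
```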
28
8-Puzzle: h1(N) = number of misplaced tiles and h2(N) = sum of distances of each tile to its goal position are both consistent.
29
Robot navigation Cost of one horizontal/vertical step = 1; cost of one diagonal step = √2. h(N) = straight-line distance to the goal is consistent.
30
Claims If h is consistent, then the function f along any path is non-decreasing: f(N) = g(N) + h(N) and f(N’) = g(N) + c(N,N’) + h(N’).
31
Claims If h is consistent, then the function f along any path is non-decreasing: f(N) = g(N) + h(N), f(N’) = g(N) + c(N,N’) + h(N’), and h(N) ≤ c(N,N’) + h(N’), hence f(N) ≤ f(N’).
32
Claims If h is consistent, then the function f along any path is non-decreasing: f(N) = g(N) + h(N), f(N’) = g(N) + c(N,N’) + h(N’), and h(N) ≤ c(N,N’) + h(N’), hence f(N) ≤ f(N’). If h is consistent, then whenever A* expands a node it has already found an optimal path to the state associated with this node.
33
Avoiding Repeated States in A*
If the heuristic h is consistent, then: let CLOSED be the list of states associated with expanded nodes. When a new node N is generated: if its state is in CLOSED, discard N; if it has the same state as another node in the fringe, discard the node with the larger f. (These two rules are sketched below.)
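A sketch of the two pruning rules, assuming hypothetical node objects carrying .state and .f, a set of CLOSED states, and a dict mapping fringe states to their nodes; these names are illustrative, not part of the slides:

```python
def keep_generated_node(node, closed, fringe_by_state):
    """Return True if the newly generated node should go on the fringe."""
    if node.state in closed:
        return False                      # rule 1: state already expanded
    old = fringe_by_state.get(node.state)
    if old is not None and old.f <= node.f:
        return False                      # rule 2: an equal-or-better node is already queued
    return True                           # if True and old exists, the caller removes old
```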
34
Complexity of Consistent A*
Let s be the size of the state space and r the maximal number of states that can be reached in one step from any state. Assume that testing whether a state is in CLOSED takes O(1) time. Then the time complexity of A* is O(s r log s).
35
Heuristic Accuracy h(N) = 0 for all nodes is admissible and consistent; hence breadth-first and uniform-cost search are particular cases of A*. Let h1 and h2 be two admissible and consistent heuristics such that h1(N) ≤ h2(N) for all nodes N. Then every node expanded by A* using h2 is also expanded by A* using h1; h2 is said to be more informed than h1.
36
Iterative Deepening A* (IDA*)
Use f(N) = g(N) + h(N) with an admissible and consistent h. Each iteration is a depth-first search with a cutoff on the f-value of expanded nodes (sketched below).
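A compact IDA* sketch, with the same interface assumptions as the A* sketch earlier (successors(state) yields (next_state, cost) pairs; the helper names are illustrative):

```python
def ida_star(start, goal_test, successors, h):
    """IDA*: repeated depth-first searches, each cut off when f = g + h exceeds
    the current threshold; the threshold then grows to the smallest pruned f."""
    on_path = {start}

    def dfs(state, g, threshold, path):
        f = g + h(state)
        if f > threshold:
            return f, None                      # report the smallest f beyond the cutoff
        if goal_test(state):
            return f, path
        minimum = float("inf")
        for nxt, cost in successors(state):
            if nxt in on_path:                  # avoid cycles along the current path
                continue
            on_path.add(nxt)
            t, found = dfs(nxt, g + cost, threshold, path + [nxt])
            on_path.discard(nxt)
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    threshold = h(start)
    while True:
        threshold, found = dfs(start, 0, threshold, [start])
        if found is not None:
            return found
        if threshold == float("inf"):
            return None                         # no goal reachable
```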
37
8-Puzzle f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
[Figure: IDA* search tree with node f-values; Cutoff = 4.]
38
8-Puzzle f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
[Figure: IDA* search tree with node f-values; Cutoff = 4.]
39
8-Puzzle f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
[Figure: IDA* search tree with node f-values; Cutoff = 4.]
40
8-Puzzle f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
[Figure: IDA* search tree with node f-values; Cutoff = 4.]
41
8-Puzzle f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
[Figure: IDA* search tree with node f-values; Cutoff = 4. No solution found within the cutoff, so the cutoff is raised.]
42
8-Puzzle f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
[Figure: IDA* search tree with node f-values; Cutoff = 5.]
43
8-Puzzle f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
[Figure: IDA* search tree with node f-values; Cutoff = 5.]
44
8-Puzzle f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
[Figure: IDA* search tree with node f-values; Cutoff = 5.]
45
8-Puzzle f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
[Figure: IDA* search tree with node f-values; Cutoff = 5.]
46
8-Puzzle f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
[Figure: IDA* search tree with node f-values; Cutoff = 5.]
47
8-Puzzle f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
[Figure: IDA* search tree with node f-values; Cutoff = 5.]
48
8-Puzzle f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
[Figure: IDA* search tree with node f-values; Cutoff = 5.]
49
About Heuristics Heuristics are intended to orient the search along promising paths. The time spent computing heuristics must be recovered by a better search. After all, a heuristic function could consist of solving the problem; then it would perfectly guide the search. Deciding which node to expand is sometimes called meta-reasoning. Heuristics may not always look like numbers and may involve a large amount of knowledge.
50
Robot Navigation: the local-minimum problem, with f(N) = h(N) = straight-line distance to the goal. [Figure: grid world where greedy search with this heuristic gets stuck in a local minimum.]
51
What’s the Issue? Search is an iterative local procedure
Good heuristics should provide some global look-ahead (at low computational cost)
52
Other Search Techniques
Steepest descent (~ greedy best-first with no search) may get stuck in a local minimum
53
Other Search Techniques
Steepest descent (~ greedy best-first with no search) may get stuck in a local minimum. Simulated annealing
54
Other Search Techniques
Steepest descent (~ greedy best-first with no search) may get stuck in a local minimum. Simulated annealing. Genetic algorithms
55
Informed Search methods
56
Best-first Search Evaluation function: a number purporting to describe the desirability of expanding the node. Best-first search: the node with the best evaluation is expanded first. The data structure matters! Use a queue, but implement it as a linked list?
57
Greedy search heuristic function h(n)
h(n) = estimated cost of the cheapest path from the state at node n to a goal state; the search always moves toward whatever looks closest to the goal from the current state. Related: the hill-climbing method.
58
Iterative improvement algorithm
Hill climbing, simulated annealing, neural networks, genetic algorithms
59
Hill climbing Method Simple hill climbing: apply an operator and test whether the result is a goal; if not, move to the new state as soon as it is better than the current one; otherwise try another operator, and fail when no operators are left; the search then continues from the selected node. Steepest-ascent hill climbing: apply all applicable operators to the current node (state); if a goal appears among the results, succeed; otherwise pick the best result and, if it is better than the current state, continue the search from that node, else fail. Hill climbing is not complete. (A sketch of both variants follows.)
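A sketch of both variants, assuming a value function to maximize and a neighbors generator (the names are illustrative, not from the slides):

```python
def steepest_ascent_hill_climbing(state, value, neighbors):
    """Apply every operator, pick the best resulting state, and move there only
    if it improves on the current state; otherwise stop (possibly at a local maximum)."""
    while True:
        candidates = list(neighbors(state))
        if not candidates:
            return state
        best = max(candidates, key=value)
        if value(best) <= value(state):
            return state
        state = best

def simple_hill_climbing(state, value, neighbors):
    """Move to the first neighbor found that is better than the current state."""
    while True:
        better = next((n for n in neighbors(state) if value(n) > value(state)), None)
        if better is None:
            return state
        state = better
```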
60
Example [Figure: initial and goal configurations of blocks A–H.]
61
First move [Figure: configuration after the first move.]
62
Second move (three possible moves)
[Figure: the three possible resulting configurations (a), (b), (c).]
63
Two heuristics Local: add one point for every block that is resting on the thing it is supposed to be resting on; subtract one point for every block that is sitting on the wrong thing. Values: goal = 8; initial state = 4; first move = 6; second move: (a) 4, (b) 4, (c) 4. Global: for each block that has the correct support structure, add one point for every block in the support structure; for each block that has an incorrect support structure, subtract one point for every block in the existing support structure. Values: goal = 28; initial = -28; first move = -21; second move: (a) -28, (b) -16, (c) -15. (The local heuristic is sketched in code below.)
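The local heuristic is easy to state in code. A toy encoding (an assumption for this sketch) maps each block to the thing directly beneath it:

```python
def local_score(on, goal_on):
    """+1 for each block resting on what it should rest on, -1 otherwise."""
    return sum(1 if on[b] == goal_on[b] else -1 for b in on)

# Hypothetical 3-block example: goal is A on B, B on C, C on the table.
goal = {"A": "B", "B": "C", "C": "table"}
cur  = {"A": "table", "B": "C", "C": "table"}
print(local_score(cur, goal))   # A wrong (-1), B and C correct (+1 each) -> 1
```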
64
Weaknesses of the hill climbing method
Local maxima, plateaus, ridges. Remedies: random-restart hill climbing, simulated annealing
65
Simulated annealing algorithm Acceptance probability P = e^(∆E/T) (in physics, P = e^(−∆E/kT))
for t ← 1 to ∞ do
  T ← schedule[t]
  if T = 0 then return current
  next ← a randomly selected successor of current
  ∆E ← Value[next] − Value[current]
  if ∆E > 0 then current ← next
  else current ← next only with probability e^(∆E/T)
Physical systems generally evolve toward lower energy, but there is always some probability of jumping to a higher-energy state. ∆E: (positive) change in energy level; T: temperature; k: Boltzmann’s constant. A Python transcription follows.
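The pseudocode above translates almost line for line into Python; schedule, value, and random_successor are assumed to be supplied by the caller:

```python
import itertools
import math
import random

def simulated_annealing(start, value, random_successor, schedule):
    """Accept every improving move; accept a worsening move with probability
    e^(dE/T). The schedule must eventually return temperature 0."""
    current = start
    for t in itertools.count(1):
        T = schedule(t)
        if T == 0:
            return current
        nxt = random_successor(current)
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt
```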
66
Heuristic Search f(n) = g(n) + h(n)
f(n) = estimated cost of the cheapest solution through n; used to order nodes in best-first search
67
Minimizing the total path cost (A* algorithm)
Admissible heuristic: an h function that never overestimates the cost to reach the goal. If h is admissible, f(n) never overestimates the actual cost of the best solution through n.
68
The behavior of A* search
Monotonicity: along any path from the root, the f-cost never decreases; a heuristic that merely underestimates can still be non-monotonic. A* is complete and optimal. To restore monotonicity one can use f(n’) = max(f(n), g(n’) + h(n’)), where n is the parent node of n’. Optimality and completeness can then be proved.
69
Example of the A* Algorithm Using the city-block distance (Manhattan distance) on the 8(16)-puzzle: it never overestimates. h1 = the number of tiles in the wrong position; h2 = Manhattan distance. Comparison of nodes generated (effective branching factor in parentheses):
d = 14:  IDS — (2.83);  A*(h1) 539 (1.42);  A*(h2) 73 (1.24)
d = 24:  IDS not feasible;  A*(h1) 39135 (1.48);  A*(h2) 1641 (1.26)
70
Inventing heuristic functions
Relaxed problems for the 8-puzzle: (a) a tile can move to an adjacent square; (b) a tile can move to the blank square; (c) a tile can move to any other position. Combine heuristics with h(n) = max(h1(n), …, hm(n)); the function whose estimate is closest to the true cost is best.
71
Heuristics for CSPs Most-constrained-variable heuristics
Most-constrained-variable heuristic: with forward checking, the variable with the fewest remaining possible values is chosen to have a value assigned next. Most-constraining-variable heuristic: assign a value to the variable that is involved in the largest number of constraints on the unassigned variables (reducing the branching factor). A code sketch of the first rule follows.
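A sketch of the most-constrained-variable rule over domains already pruned by forward checking; the data layout (domains as a dict of variable → set of remaining values) is an assumption made for the sketch:

```python
def most_constrained_variable(domains, assignment):
    """Pick the unassigned variable with the fewest remaining legal values."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))
```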
72
Example
73
Memory-Bounded Search Simplified Memory-Bounded A* (SMA*)
74
Machine Evolution
75
Evolution Generations of descendants: production of descendants changed from their parents, followed by selective survival. As search processes: searching for high peaks in a hyperspace.
76
Applications Function optimization: finding the maximum of a function (John Holland). Solving specific problems: controlling reactive agents, classifier systems, genetic programming.
77
A program expressed as a tree
78
A robot to follow the wall around forever
Primitive functions: AND, OR, NOT, IF. Boolean functions: AND(x,y) = 0 if x = 0, else y; OR(x,y) = 1 if x = 1, else y; NOT(x) = 0 if x = 1, else 1; IF(x,y,z) = y if x = 1, else z. Actions: north, east, south, west. (A Python transcription follows.)
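The four primitives translate directly into Python (a one-to-one transcription of the definitions above, using 0/1 truth values):

```python
def AND(x, y):
    return 0 if x == 0 else y     # 0 if x = 0; else y

def OR(x, y):
    return 1 if x == 1 else y     # 1 if x = 1; else y

def NOT(x):
    return 0 if x == 1 else 1     # 0 if x = 1; else 1

def IF(x, y, z):
    return y if x == 1 else z     # y if x = 1; else z
```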
79
A robot to follow the wall around forever
All of the action functions have their indicated effects unless the robot attempts to move into the wall. Sensory inputs: n, ne, e, se, s, sw, w, nw. If evaluating the program produces no value, the robot stops.
80
A robot in a Grid World
81
A wall following program
82
The GP process Generation 0: start with a population of random programs built from the functions, constants, and sensory inputs; 5000 random programs. Final: generation 62. Fitness is evaluated as the number of wall-adjacent cells visited during 60 steps; 32 cells is a perfect score; fitness is measured from 10 different starting positions.
83
Generation of populations I
The (i+1)th generation: 10% are copied from the ith generation, choosing 7 programs at random from the 5000 and keeping the best one (tournament selection); 90% are produced by selecting two programs (a mother and a father) by the same method and inserting a randomly chosen subtree of the father in place of a subtree of the mother (crossover). A sketch of these two operators follows.
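A sketch of the two operators under the assumption that programs are nested Python lists whose first element is the function name (e.g. ['IF', 'n', ['AND', 'e', 'se'], 'east']); this representation is hypothetical, not the one used in the original experiment:

```python
import copy
import random

def tournament_select(population, fitness, k=7):
    """Sample k programs at random and keep the fittest one."""
    return max(random.sample(population, k), key=fitness)

def subtree_paths(tree, path=()):
    """Index paths to every subtree (argument position) of a nested-list program."""
    yield path
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):   # index 0 is the function name
            yield from subtree_paths(child, path + (i,))

def crossover(mother, father):
    """Copy the mother and splice in a randomly chosen subtree of the father."""
    child = copy.deepcopy(mother)
    targets = [p for p in subtree_paths(child) if p]    # exclude the root
    if not targets:
        return child                                    # mother has no argument subtrees
    target = random.choice(targets)
    donor_path = random.choice(list(subtree_paths(father)))
    donor = father
    for i in donor_path:
        donor = donor[i]
    node = child
    for i in target[:-1]:
        node = node[i]
    node[target[-1]] = copy.deepcopy(donor)
    return child
```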
84
Crossover
85
Generation of populations II
Mutation: 1% are chosen by tournament selection; a randomly selected subtree is removed and replaced by a new subtree generated the same way individuals are created in generation 0.
86
Evolving a wall-following robot
Examples of individual programs: (AND (sw) (ne)) with fitness 0; (OR (e) (west)) with fitness 5(?); the best one reached fitness = 92 (at some generation).
87
The most fit individual in generation 0
88
The most fit individuals in generation 2
89
The most fit individuals in generation 6
90
The most fit individuals in generation 10
91
Fitness as a function of generation number
92
Homework Specify fitness functions for use in evolving agents that: (1) control an elevator; (2) control stop lights on a city main street. Determine what the words genotype and phenotype mean in evolutionary theory. Why do you think mutation might or might not be helpful in evolutionary processes that use crossover?
93
When to Use Search Techniques?
Use search when: the search space is small, and there is no other available technique, or it is not worth the effort to develop a more efficient one; or the search space is large, there is no other available technique, and “good” heuristics exist.
94
Summary Heuristic function Best-first search
Admissible heuristics and A*; A* is complete and optimal; consistent heuristics and repeated states; heuristic accuracy; IDA*