1/34 Informed search algorithms Chapter 4 Modified by Vali Derhami.


2/34 Material: Chapter 4

3/34 Outline
– Best-first search
– Greedy best-first search
– A* search
– Heuristics
– Local search algorithms
– Hill-climbing search
– Simulated annealing search
– Local beam search
– Genetic algorithms

4/34 Review: Tree search. A search strategy is defined by picking the order of node expansion.

5/34 Best-first search
Idea: use an evaluation function f(n) for each node, an estimate of its "desirability", and expand the most desirable unexpanded node.
Implementation: order the nodes in the fringe by decreasing desirability.
Special cases:
– greedy best-first search
– A* search
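The slides describe best-first search only in words. A minimal Python sketch follows; the function names, the `(state, cost)` successor convention, and the toy data in the usage note are illustrative assumptions, not from the slides. Plugging in f = h gives greedy best-first search; f = g + h gives A*.

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: always expand the fringe node with the
    lowest f value. `successors(state)` yields (next_state, step_cost)
    pairs; `f(state, g)` is the evaluation function (g = cost so far)."""
    # Fringe is a priority queue ordered by f; a counter breaks ties so
    # states themselves are never compared.
    fringe = [(f(start, 0), 0, start, 0, [start])]
    counter = 1
    best_g = {start: 0}                      # cheapest known cost per state
    while fringe:
        _, _, state, g, path = heapq.heappop(fringe)
        if g > best_g.get(state, float("inf")):
            continue                         # stale entry, a cheaper path exists
        if goal_test(state):
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(fringe, (f(nxt, g2), counter, nxt, g2, path + [nxt]))
                counter += 1
    return None, float("inf")
```

On a fragment of the Romania map with the straight-line-distance heuristic, `f = lambda s, g: g + h[s]` (A*) finds Arad–Sibiu–Rimnicu Vilcea–Pitesti–Bucharest at cost 418, while `f = lambda s, g: h[s]` (greedy) finds the costlier Arad–Sibiu–Fagaras–Bucharest route at 450.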

6/34 Romania with step costs in km

7/34 Greedy best-first search
Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to the goal, e.g., h_SLD(n) = straight-line distance from n to Bucharest.
Greedy best-first search expands the node that appears to be closest to the goal.

8–11/34 Greedy best-first search example (four expansion steps, shown in figures)

12/34 Properties of greedy best-first search
Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …
Time? O(b^m), but a good heuristic can give dramatic improvement
Space? O(b^m) – keeps all nodes in memory
Optimal? No

13/34 A* search
Idea: avoid expanding paths that are already expensive.
Evaluation function f(n) = g(n) + h(n)
– g(n) = cost so far to reach n
– h(n) = estimated cost from n to the goal
– f(n) = estimated total cost of the path through n to the goal

14–19/34 A* search example (six expansion steps, shown in figures)

20/34 Admissible heuristics
A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic.
Example: h_SLD(n) (never overestimates the actual road distance)
Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal.

21/34 Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
f(n) = g(n) + h(n)
f(G2) = g(G2)        since h(G2) = 0
g(G2) > g(G)         since G2 is suboptimal
f(G) = g(G)          since h(G) = 0
f(G2) > f(G) = C*    from the above

22/34 Optimality of A* (proof, continued)
f(G2) > C* = f(G)              (1) from above
h(n) ≤ h*(n)                   since h is admissible
g(n) + h(n) ≤ g(n) + h*(n) = C*
f(n) ≤ C* = f(G)               (2)
From (1) and (2), f(G2) > f(n), so A* will never select G2 for expansion.

23/34 Non-optimality in graph search
Note that A* as described is not optimal under GRAPH-SEARCH, because an optimal path to a repeated state may be discarded. Recall that in that method, when a repeated state is detected, the newly discovered path to it is dropped.
Solutions: discard whichever node carries the more costly path; or arrange the search so that the optimal path to any repeated state is guaranteed to be the first one followed, as in uniform-cost search.

24/34 Consistent heuristics
A heuristic is consistent if for every node n and every successor n' of n generated by any action a,
h(n) ≤ c(n, a, n') + h(n')
i.e., the successor's heuristic plus the cost of reaching it is always at least the parent's heuristic. Every consistent heuristic is also admissible.
If h is consistent, we have
f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') ≥ g(n) + h(n) = f(n)
i.e., f(n) is non-decreasing along any path.
Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal.
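The two definitions above translate directly into checks over an explicit graph. A small sketch, with hypothetical helper names and a toy graph chosen for illustration (not from the slides):

```python
def is_consistent(h, edges):
    """Consistency: h(n) <= c(n, a, n') + h(n') for every edge.
    `edges` is a list of (n, n2, cost) triples; `h` maps state -> estimate."""
    return all(h[n] <= cost + h[n2] for n, n2, cost in edges)

def is_admissible(h, true_cost):
    """Admissibility: h(n) <= h*(n) for every state whose true
    cost-to-goal h*(n) is known (`true_cost` maps state -> h*)."""
    return all(h[n] <= true_cost[n] for n in true_cost)
```

For example, on the chain A →(1)→ B →(1)→ G with h = {A: 2, B: 1, G: 0}, the heuristic is consistent (2 ≤ 1 + 1 and 1 ≤ 1 + 0); inflating h(A) to 3 breaks consistency while h(A) = 3 > h*(A) = 2 also breaks admissibility, illustrating that a consistent heuristic is always admissible but not vice versa.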

25/34 Optimality of A*
A* expands nodes in order of increasing f value, gradually adding "f-contours" of nodes. Contour i contains all nodes with f = f_i, where f_i < f_(i+1).

26/34 Properties of A*
Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
Time? Exponential
Space? Keeps all nodes in memory – the algorithm runs out of memory long before it runs out of time.
Optimal? Yes – among algorithms that extend search paths from the root, no other optimal algorithm is guaranteed to expand fewer nodes than A*.

27/34 Memory-bounded heuristic search
The algorithms presented so far have memory problems. Two algorithms reduce memory consumption:
– Iterative Deepening A* (IDA*)
– SMA*, similar to A* but with a smaller queue

28/34 Iterative Deepening A* (IDA*)
Difference from iterative deepening search: the cutoff used is the f-cost (g + h), not the depth. Each iteration of the algorithm is a depth-first search that proceeds up to the current f-cost limit.
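The slide's description of IDA* can be sketched as follows; the function names and the recursive structure are one common formulation, assumed here rather than taken from the slides. Each call to the inner depth-first search is cut off where f = g + h exceeds the current bound, and the next bound becomes the smallest f value that overflowed.

```python
import math

def ida_star(start, goal_test, successors, h):
    """IDA*: iterative deepening on f = g + h rather than on depth.
    Returns the solution path as a list of states, or None."""
    def dfs(state, g, bound, path):
        f = g + h(state)
        if f > bound:
            return f, None                   # overflow: report f for next bound
        if goal_test(state):
            return f, path
        minimum = math.inf
        for nxt, cost in successors(state):
            if nxt in path:                  # avoid cycles along the current path
                continue
            t, found = dfs(nxt, g + cost, bound, path + [nxt])
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = h(start)                         # first limit: f(start) = h(start)
    while True:
        t, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if t == math.inf:
            return None                      # no solution exists
        bound = t                            # smallest f that exceeded the bound
```

With an admissible heuristic, every node on an optimal path has f ≤ C*, so the bound never jumps past the optimal cost and the first solution found is optimal; on the Romania fragment it returns the same cost-418 route that A* finds.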

29/34 Simplified Memory-Bounded A* (SMA*)
Uses all available memory: it expands nodes as long as memory allows. When memory is full, it deletes the worst leaf node on the frontier (the one with the highest f value) and backs that node's value up to its parent.
Question: what happens if all leaf nodes have the same f value?
SMA* is complete if any reachable solution exists, and optimal if any optimal solution is reachable (within the available memory).

30/34 SMA* example, with memory for 3 nodes

31/34 SMA* code

32/34 Admissible heuristics
E.g., for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance, the sum of the distances of the tiles from their goal positions (i.e., number of squares from the desired location of each tile)
h1(S) = ?
h2(S) = ?

33/34 Admissible heuristics
E.g., for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile)
h1(S) = 8
h2(S) = 3+1+2+2+2+3+3+2 = 18
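The two heuristics are easy to compute directly. A sketch, assuming states are 9-tuples in row-major order with 0 for the blank; the start state used in the usage note is the textbook's standard example (7 2 4 / 5 0 6 / 8 3 1, goal 0 1 2 / 3 4 5 / 6 7 8), which is an assumption since the slide's figure is not reproduced here, though it matches the answers h1(S) = 8 and h2(S) = 18.

```python
def h1(state, goal):
    """Misplaced tiles: count tiles (not the blank, 0) out of place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance: for each tile, the number of rows plus
    the number of columns it sits away from its goal square on a 3x3 board."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(state)}
    dist = 0
    for i, tile in enumerate(goal):
        if tile == 0:
            continue                         # the blank does not count
        r, c = divmod(i, 3)
        tr, tc = pos[tile]
        dist += abs(r - tr) + abs(c - tc)
    return dist
```

Both are admissible: each move fixes at most one misplaced tile (h1) and shortens at most one tile's Manhattan distance by 1 (h2), so neither can overestimate the remaining moves.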

34/34 Dominance
Effective branching factor b*: N + 1 = 1 + b* + (b*)^2 + … + (b*)^d, where N is the total number of nodes generated by A*, d is the solution depth, and b* is the branching factor that a uniform tree of depth d would have to have in order to contain N + 1 nodes.
If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1, and h2 is better for search.
Typical search costs (average number of nodes expanded):
d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes (b* = 1.24)
d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes (b* = 1.26)
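The defining equation for b* has no closed form, but since the polynomial 1 + b + b² + … + b^d is increasing in b it can be solved numerically. A sketch by bisection; the function name and tolerance are illustrative choices:

```python
def effective_branching_factor(n_generated, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection.
    The left side is fixed; the right side grows monotonically with b*."""
    target = n_generated + 1
    def total(b):
        return sum(b ** i for i in range(depth + 1))
    lo, hi = 1.0, float(target)              # total(1) = d + 1 <= target <= total(target)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For N = 73 and d = 12 this yields b* ≈ 1.26, in the same range as the values reported on the slide (1.24–1.26); small differences can arise because the slide's figures are averages over many problem instances.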

