1
CS 416 Artificial Intelligence
Lecture 5: Informed Searches
2
Administrivia
Visual Studio: if you've never been granted permission before, you should receive email shortly.
Assign 1: set up your submit account password. Visit www.cs.virginia.edu/~cs416/submit.html
Toolkit: you should see that you're enrolled and have access to the class gradebook.
3
Example
[Graph figure: a start node A with successors B and C and a Goal node, labeled with heuristic values h(n), step costs c(n, n'), and evaluation values f(n).]
4
A* w/o Admissibility
[Graph figure.] B = goal and f(B) = 10. Are we done? No: with a non-admissible heuristic we must still explore C, because an overestimating h could be hiding a cheaper path through C.
5
A* w/ Admissibility
[Graph figure.] B = goal and f(B) = 10. Are we done? Yes: h(C) indicates that the best possible path through C costs 95, which cannot beat the goal already found at cost 10.
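For reference alongside these examples, here is a minimal A* sketch in Python. It is not from the lecture; the function names and interfaces (neighbors(n) yielding (successor, step cost) pairs, h(n) as the heuristic) are assumptions made for illustration. It makes explicit the point of the two slides above: A* may safely stop at the first goal it pops only when h never overestimates.

```python
import heapq
from itertools import count

def a_star(start, goal, neighbors, h):
    """Minimal A* sketch: expand nodes in order of f(n) = g(n) + h(n).

    neighbors(n) yields (successor, step_cost) pairs; h(n) is the heuristic.
    If h is admissible (never overestimates), the first goal popped off the
    frontier is optimal -- which is why the "w/ Admissibility" example can
    stop as soon as B comes off the queue with f(B) = 10.
    """
    tie = count()                                   # tie-breaker so the heap never compares nodes
    frontier = [(h(start), next(tie), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g                          # safe only because h never overestimates
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), next(tie), g2, succ, path + [succ]))
    return None, float("inf")
```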
6
A* w/o Consistency
[Graph figure: a chain A, B, C, D with node values 0, 101, 200, 100 and edge costs 5, 45, 5. Reading the node labels as h-values, h(C) = 200 > c(C, D) + h(D) = 5 + 100, so the heuristic is not consistent.]
7
A* w/ Consistency
[Graph figure: the same chain with the value at C lowered to 105, so that h(C) = 105 <= c(C, D) + h(D) = 5 + 100 and consistency holds.]
8
Pros and Cons of A*
A* is optimal and optimally efficient.
A* is still slow and bulky (space kills first).
The number of nodes grows exponentially with the length of the path to the goal.
–This is actually a function of the heuristic, but all heuristics make mistakes.
A* must search all nodes within this goal contour.
Finding suboptimal goals is sometimes the only feasible solution.
Sometimes better heuristics are non-admissible.
9
Memory-bounded Heuristic Search
Try to reduce memory needs while taking advantage of the heuristic to improve performance:
Iterative-deepening A* (IDA*)
Recursive best-first search (RBFS)
SMA*
10
Iterative Deepening A*
Iterative deepening: remember, as in uninformed search, this was a depth-first search where the maximum depth was iteratively increased.
As an informed search, we again perform depth-first search, but expand only nodes with f-cost less than or equal to the smallest f-cost of nodes expanded at the last iteration.
–What happens when f-cost is real-valued?
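A compact IDA* sketch, again my own illustration rather than slide code, using the same assumed neighbors(n)/h(n) interface. Each round runs a depth-first search bounded by an f-cost threshold and sets the next threshold to the smallest f-cost that exceeded the current one. This is also where the slide's question bites: with real-valued f-costs each new threshold may admit only one extra node, so the number of rounds can blow up.

```python
def ida_star(start, goal_test, neighbors, h):
    """Iterative-deepening A*: DFS with an f-cost cutoff that grows each round."""
    bound = h(start)
    path = [start]

    def dfs(node, g, bound):
        f = g + h(node)
        if f > bound:
            return f, None                 # report the f-cost that exceeded the bound
        if goal_test(node):
            return f, list(path)
        minimum = float("inf")
        for succ, cost in neighbors(node):
            if succ in path:               # avoid cycles on the current path
                continue
            path.append(succ)
            t, found = dfs(succ, g + cost, bound)
            path.pop()
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    while True:
        t, found = dfs(start, 0, bound)
        if found is not None:
            return found
        if t == float("inf"):
            return None                    # no solution
        bound = t                          # next threshold: smallest f that exceeded the old one
```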
11
Recursive best-first search
Depth-first search combined with the best alternative:
Keep track of options along the fringe.
As soon as the current depth-first exploration becomes more expensive than the best fringe option,
–back up to the fringe, but update node costs along the way.
12
Recursive best-first search
[Romania route-finding figures.] The box beside each node contains the f-value of the best alternative path available from any ancestor.
First, explore the path to Pitesti.
Backtrack to Fagaras and update Fagaras.
Backtrack to Pitesti and update Pitesti.
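A sketch of RBFS following the standard AIMA-style pseudocode; names and interfaces are assumed as before. Each recursive call carries an f_limit equal to the f-value of the best alternative available from any ancestor, and when it unwinds it backs up the best f found below, which is exactly the "box" value the slide describes.

```python
def rbfs(start, goal_test, neighbors, h):
    """Recursive best-first search (illustrative sketch)."""
    INF = float("inf")

    def recurse(node, g, f_node, f_limit, path):
        if goal_test(node):
            return list(path), f_node
        successors = []
        for succ, cost in neighbors(node):
            if succ in path:
                continue
            g2 = g + cost
            # A child's f is at least the (backed-up) f of its parent.
            successors.append([max(g2 + h(succ), f_node), g2, succ])
        if not successors:
            return None, INF
        while True:
            successors.sort(key=lambda s: s[0])
            best = successors[0]
            if best[0] > f_limit:
                return None, best[0]       # back up, reporting the best f found below
            alternative = successors[1][0] if len(successors) > 1 else INF
            result, best[0] = recurse(best[2], best[1], best[0],
                                      min(f_limit, alternative),
                                      path + [best[2]])
            if result is not None:
                return result, best[0]

    result, _ = recurse(start, 0, h(start), INF, [start])
    return result
```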
13
Quality of Iterative Deepening A* and Recursive best-first search
RBFS:
O(bd) space complexity [if h(n) is admissible].
Time complexity is hard to characterize:
–efficiency is heavily dependent on the quality of h(n)
–the same states may be explored many times
IDA* and RBFS use too little memory:
–even if you wanted to use more than O(bd) memory, these two could not take advantage of it.
14
Simple Memory-bounded A*
Use all available memory.
Follow the A* algorithm and fill memory with newly expanded nodes.
If a new node does not fit:
–free() the stored node with the worst f-value
–propagate the f-value of the freed node to its parent
SMA* will regenerate a subtree only when it is needed:
–the path through the deleted subtree is unknown, but its cost is known.
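Full SMA* involves a lot of bookkeeping; the fragment below is only a simplified, hypothetical illustration of the eviction step just described: drop the in-memory leaf with the worst f-value and back its f-value up into its parent, so the cost of the forgotten subtree is remembered even though its path is not.

```python
def drop_worst_leaf(leaves, parent_of, backed_up_f):
    """One SMA*-style eviction: forget the worst leaf, remember its f in the parent.

    leaves      : dict node -> f-value of nodes currently held in memory as leaves
    parent_of   : dict node -> parent node
    backed_up_f : dict node -> best f-value known among forgotten descendants
    (Illustrative data structures, not the lecture's.)
    """
    worst = max(leaves, key=leaves.get)          # leaf with the worst (largest) f
    f_worst = leaves.pop(worst)
    parent = parent_of.get(worst)
    if parent is not None:
        # The parent now knows the best it could do through the forgotten subtree,
        # so the subtree is regenerated only if that bound ever becomes best again.
        backed_up_f[parent] = min(backed_up_f.get(parent, float("inf")), f_worst)
    return worst, f_worst
```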
15
Thrashing
Typically discussed in OS courses with respect to memory.
The cost of repeatedly freeing and regenerating parts of the search tree can come to dominate the cost of the actual search.
Time complexity grows significantly when thrashing occurs.
–So we saved space with SMA*, but if the problem is large, it will be intractable from the point of view of computation time.
16
Meta-foo
What does "meta" mean in AI?
Frequently it means stepping back a level from foo.
Metareasoning = reasoning about reasoning.
These informed search algorithms have pros and cons regarding how they choose to explore new levels.
–A metalevel learning algorithm may learn how to combine techniques and parameterize search.
17
Heuristic Functions
8-puzzle problem: average solution depth = 22, branching factor ≈ 3, so an exhaustive tree search examines about 3^22 states, yet only about 170,000 of them are distinct (the rest are repeats).
18
Heuristics
h1: the number of misplaced tiles.
–Admissible, because at least n moves are required to fix n misplaced tiles.
h2: the total distance from each tile to its goal position.
–No diagonal moves, so use Manhattan distance, as if walking around rectilinear city blocks.
–Also admissible.
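Both heuristics are short to write down. A sketch in Python, with an assumed board representation (a tuple of nine entries in row-major order, 0 for the blank):

```python
def misplaced_tiles(state, goal):
    """h1: count of non-blank tiles not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal, width=3):
    """h2: sum over tiles of |row difference| + |column difference| to the goal position."""
    total = 0
    for tile in state:
        if tile == 0:
            continue
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // width - j // width) + abs(i % width - j % width)
    return total

# Example: one move away from the goal, so both heuristics return 1.
goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 7, 0, 8)
assert misplaced_tiles(state, goal) == 1
assert manhattan_distance(state, goal) == 1
```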
19
Compare these two heuristics
Effective branching factor, b*:
If A* explores N nodes to find the goal at depth d,
–b* = the branching factor such that a uniform tree of depth d contains N + 1 nodes:
N + 1 = 1 + b* + (b*)^2 + … + (b*)^d
b* close to 1 is ideal
–because this means the heuristic guided the A* search nearly linearly
–if b* were 100, then on average the heuristic had to consider 100 children for each node
–compare heuristics based on their b*
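b* has no closed form for general N and d, but it is easy to find numerically. A small illustrative sketch (not slide code) solves N + 1 = 1 + b* + (b*)^2 + … + (b*)^d by bisection:

```python
def effective_branching_factor(nodes_expanded, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection."""
    target = nodes_expanded + 1

    def tree_size(b):
        # Geometric series 1 + b + ... + b^d, with the b = 1 case handled separately.
        return depth + 1 if abs(b - 1.0) < 1e-12 else (b ** (depth + 1) - 1) / (b - 1)

    lo, hi = 1.0, float(nodes_expanded)           # b* lies between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tree_size(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# e.g. if A* expands 52 nodes to find a solution at depth 5, b* is roughly 1.9.
```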
20
Compare these two heuristics
21
h2 is always better than h1
For any node n, h2(n) >= h1(n): h2 dominates h1.
Recall that all nodes with f(n) < C* will be expanded.
–This means all nodes with h(n) + g(n) < C* will be expanded, i.e., all nodes where h(n) < C* - g(n).
–Every node h2 expands will also be expanded by h1, and because h1 is smaller, h1 will expand other nodes as well.
22
Inventing admissible heuristic functions
How can you create h(n)?
Simplify the problem by reducing restrictions on actions.
–Allow 8-puzzle pieces to sit atop one another.
–Call this a relaxed problem.
–The cost of the optimal solution to the relaxed problem is an admissible heuristic for the original problem: the original problem is at least as expensive to solve.
23
Examples of relaxed problems
Original rule: a tile can move from square A to square B if A is horizontally or vertically adjacent to B and B is blank.
Relaxations:
–A tile can move from A to B if A is adjacent to B (overlap).
–A tile can move from A to B if B is blank (teleport).
–A tile can move from A to B (teleport and overlap).
Solutions to these relaxed problems can be computed without search, so the heuristic is easy to compute.
24
Multiple Heuristics
If multiple heuristics are available:
h(n) = max {h1(n), h2(n), …, hm(n)}
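In code this is a one-line combinator; a sketch (function name is my own). The pointwise max of admissible heuristics is itself admissible, since each component is a lower bound on the true cost:

```python
def combine_max(*heuristics):
    """Combine admissible heuristics by pointwise max, which remains admissible."""
    return lambda state: max(h(state) for h in heuristics)

# Hypothetical usage: h = combine_max(h_misplaced, h_manhattan, h_pattern_db)
```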
25
Use the solution to a subproblem as a heuristic
What is the optimal cost of solving some portion of the original problem?
–The subproblem's solution cost is a heuristic for the original problem.
26
Pattern Databases
Store optimal solutions to subproblems in a database.
–Use an exhaustive search to solve every permutation of the 1-2-3-4-piece subproblem of the 8-puzzle.
–During the solution of the full 8-puzzle, look up the optimal cost of solving the 1-2-3-4-piece subproblem from the current state and use it as the heuristic.
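One common way to build such a database is a single breadth-first search backward from the goal in an abstracted state space. The sketch below is my own illustration, not the lecture's code: it tracks tiles 1-4 of the 8-puzzle, turns every other tile into a don't-care (None), and records for each abstract state the number of moves needed to bring tiles 1-4 (and the blank) to their goal arrangement. Because the abstraction only erases distinctions between tiles, the stored cost is a lower bound on the true cost, hence admissible.

```python
from collections import deque

PATTERN = {1, 2, 3, 4}                       # tiles tracked by this database
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)
WIDTH = 3

def abstract(state):
    """Keep only pattern tiles and the blank; every other tile becomes None."""
    return tuple(t if t in PATTERN or t == 0 else None for t in state)

def slides(state):
    """Yield every state reachable by sliding a tile into the blank."""
    b = state.index(0)
    r, c = divmod(b, WIDTH)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < WIDTH and 0 <= nc < WIDTH:
            n = nr * WIDTH + nc
            s = list(state)
            s[b], s[n] = s[n], s[b]
            yield tuple(s)

def build_pattern_db():
    """Breadth-first search from the abstracted goal; cost = moves made so far."""
    start = abstract(GOAL)
    db = {start: 0}
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        for nxt in slides(state):
            if nxt not in db:
                db[nxt] = db[state] + 1
                frontier.append(nxt)
    return db

def pdb_heuristic(state, db):
    """Look up the stored optimal cost of solving the 1-4 subproblem."""
    return db[abstract(state)]
```

During the main 8-puzzle search, pdb_heuristic(state, db) is then just a dictionary lookup, which is why the database approach trades a one-time exhaustive search for very cheap heuristic evaluations.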
27
Learning
We could also build the pattern database while solving cases of the 8-puzzle.
–Must keep track of intermediate states and the true final cost of the solution.
–Inductive learning builds a mapping of state -> cost.
–Because there are too many permutations of actual states, construct informative features to reduce the size of the space.