
1 Search by partial solutions

2 Where are we? Optimization methods
 Complete solutions: exhaustive search, hill climbing, random restart, general model of stochastic local search, simulated annealing, tabu search
 Partial solutions: exhaustive search, branch and bound, greedy, best first, A*, divide and conquer, dynamic programming, constraint propagation

3 Search by partial solutions
 nodes are partial or complete states
 graphs are DAGs (may be trees): the source (root) is the empty state, the sinks (leaves) are complete states
 directed edges represent setting parameter values

4 4 queens: separate row and column. [Figure: search tree of partial placements, one queen per row; leaves are complete solutions; marked branches show possible pruning]

5 Implications of partial solutions
 pruning of “impossible” partial solutions
 need a partial evaluation function

6 Partial Solution Trees and DAGs
 trees: search can use tree traversal methods based on BFS, DFS; advantage: depth is limited! (in contrast to the complete solution space)
 DAGs: search their spanning trees

7 Partial solution algorithms  greedy  divide and conquer  dynamic programming  branch and bound  A*  constraint propagation

8 Greedy algorithm  make the best ‘local’ parameter selection at each step [Figure: a single greedy path through the partial-solution tree down to a complete solution]

9 Greedy SAT  partial evaluation  order of setting propositions T/F
P = {P1, P2, …, Pn}
f(P) = D1 ∧ D2 ∧ … ∧ Dk, e.g., Di = Pf ∨ ~Pg ∨ Ph
How much pre-processing?
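A minimal sketch of one plausible greedy SAT strategy from this slide: set each proposition in turn to whichever truth value satisfies the most clauses so far (a partial evaluation). The clause encoding (signed integers, DIMACS-style) and the fixed variable order are illustrative assumptions, not the slide's prescription.

```python
# Greedy SAT sketch: set each proposition, in a fixed order, to whichever
# truth value satisfies more clauses under the current partial assignment.
# Clauses use signed integers: 3 means P3, -3 means ~P3 (an assumed encoding).

def greedy_sat(clauses, n):
    assignment = {}                        # partial solution: var -> bool
    for var in range(1, n + 1):            # order of setting propositions
        def satisfied(value):
            # partial evaluation: clauses satisfied if var is set to value
            assignment[var] = value
            count = sum(
                1 for clause in clauses
                if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause)
            )
            del assignment[var]
            return count
        assignment[var] = satisfied(True) >= satisfied(False)
    sat = sum(1 for c in clauses
              if any(assignment[abs(lit)] == (lit > 0) for lit in c))
    return assignment, sat

# (P1 v ~P2) & (~P1 v P2) & (P2 v P3)
assignment, sat = greedy_sat([[1, -2], [-1, 2], [2, 3]], 3)
print(assignment, sat)  # here greedy satisfies all 3 clauses
```

Greedy gives no guarantee of satisfying all clauses in general; the slide's question about pre-processing (e.g., choosing a good variable order) is exactly what determines how well it does.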

10 Greedy TSP  partial evaluation  order of adding edges
Cities: C1, C2, …, Cn with symmetric distances
How much preprocessing? [Figure: cities and candidate edges]
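A sketch of greedy TSP as nearest-neighbour: repeatedly add the shortest edge from the current city to an unvisited city. The 5-city distance matrix is the one used in the branch-and-bound example later in the deck; the edge-dictionary representation is an illustrative choice.

```python
# Greedy (nearest-neighbour) TSP: at each step add the shortest edge
# from the current city to an unvisited city, then close the cycle.

dist = {
    ("A", "B"): 7, ("A", "C"): 12, ("A", "D"): 8, ("A", "E"): 11,
    ("B", "C"): 10, ("B", "D"): 7, ("B", "E"): 13,
    ("C", "D"): 9, ("C", "E"): 12, ("D", "E"): 10,
}

def d(x, y):                      # symmetric lookup
    return dist.get((x, y)) or dist.get((y, x))

def greedy_tour(cities, start):
    tour, total = [start], 0
    unvisited = set(cities) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: d(tour[-1], c))  # best local choice
        total += d(tour[-1], nxt)
        unvisited.remove(nxt)
        tour.append(nxt)
    total += d(tour[-1], start)   # close the cycle
    return tour, total

tour, total = greedy_tour("ABCDE", "A")
print(tour, total)  # A-B-D-C-E-A, length 46
```

On this instance greedy happens to find the optimal tour (length 46, matching the branch-and-bound slides); in general nearest-neighbour can be arbitrarily far from optimal.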

11 Partial solution algorithms greedy  branch and bound  A*  divide and conquer  dynamic programming  constraint propagation

12 4 queens: separate row and column. [Figure: search tree of partial placements; leaves are complete solutions; marked branches show possible pruning] More in next slide set…

13 Branch and Bound  eliminate subtrees of possible solutions based on evaluation of partial solutions [Figure: search tree with pruned subtrees; leaves are complete solutions]

14 e.g., branch and bound TSP. [Figure: search tree of partial tours; the current best complete solution has distance 498, while the partial path shown already has distance 519, so its subtree can be pruned]

15 Branch and Bound requirements
 a bound on the best solution possible from the current partial solution (e.g., if the distance so far is 519, the total distance is >519); as ‘tight’ as possible, and quick to calculate
 a current best solution (e.g., 498)

16 Tight bounds  value of the partial solution plus  an estimate of the value of the remaining steps; must be an ‘optimistic estimate’
example: path so far: 519; remainder of path: 0 (not tight); bounded estimate: 519

17 Optimistic but Tight Bound. [Figure: scale comparing an optimistic bound and a tight bound against the current best and the actual values if calculated]

18 Example - b&b for TSP  Estimate a lower bound for the path length: fast to calculate, easy to revise
Distance matrix (symmetric):
      A   B   C   D   E
  A   -   7  12   8  11
  B   7   -  10   7  13
  C  12  10   -   9  12
  D   8   7   9   -  10
  E  11  13  12  10   -

19 Example - b&b for TSP
1. Assume the shortest edge out of each city: 7+7+9+7+10 = 40
2. Assume the two shortest edges at each city, halved (each tour edge is counted at both of its endpoints): ((7+8)+(7+7)+(9+10)+(7+8)+(10+11))/2 = 7.5 + 7 + 9.5 + 7.5 + 10.5 = 42
Which is better?
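The two lower bounds above can be computed directly from the distance matrix. This sketch assumes a nested-dictionary representation of the matrix from the previous slide; bound 2 sums the two shortest edges at each city and halves the total, since each tour edge is counted at both endpoints.

```python
# Two lower bounds on any complete tour, from the 5-city distance matrix.

D = {"A": {"B": 7, "C": 12, "D": 8, "E": 11},
     "B": {"A": 7, "C": 10, "D": 7, "E": 13},
     "C": {"A": 12, "B": 10, "D": 9, "E": 12},
     "D": {"A": 8, "B": 7, "C": 9, "E": 10},
     "E": {"A": 11, "B": 13, "C": 12, "D": 10}}

# Bound 1: every tour must leave each city by some edge,
# so the sum of each city's shortest edge is a lower bound.
bound1 = sum(min(D[c].values()) for c in D)

# Bound 2: every tour uses exactly two edges at each city;
# sum the two shortest at each city and divide by 2.
bound2 = sum(sum(sorted(D[c].values())[:2]) for c in D) / 2

print(bound1, bound2)  # 40 and 42.0 - the higher bound is tighter
```

Both bounds are optimistic (never exceed the true optimum, 46 here); bound 2 is tighter because it discards less information about the tour structure.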

20 Tight Bound
Start at A: path so far 0, bound 40 (= 7+7+9+7+10, the shortest edge at each city).
Take edge A-E (11): path so far 11, bound 44 (replace A's shortest edge with the actual one: 40 - 7 + 11).
Take edge E-B (13): path so far 11+13 = 24, bound 47 (replace E's shortest edge with the actual one: 44 - 10 + 13).
Best path found so far: 46 — the bound 47 already exceeds it, so this subtree can be pruned.

21 B&B algorithm  Depth first traversal of partial solution space - leaves are complete solutions  Subtrees are pruned below a partial solution that cannot be better than the current best solution
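The algorithm statement above can be sketched as a recursive depth-first search over partial tours, using the shortest-edge bound from the earlier slides. The representation (nested dictionaries, a `best` record) is an illustrative assumption.

```python
# Depth-first branch and bound for the 5-city TSP. A subtree is pruned
# when its optimistic bound cannot beat the current best complete tour.

D = {"A": {"B": 7, "C": 12, "D": 8, "E": 11},
     "B": {"A": 7, "C": 10, "D": 7, "E": 13},
     "C": {"A": 12, "B": 10, "D": 9, "E": 12},
     "D": {"A": 8, "B": 7, "C": 9, "E": 10},
     "E": {"A": 11, "B": 13, "C": 12, "D": 10}}

best = {"cost": float("inf"), "tour": None}

def bnb(tour, cost):
    remaining = set(D) - set(tour)
    if not remaining:                       # leaf: a complete solution
        total = cost + D[tour[-1]][tour[0]]  # close the cycle
        if total < best["cost"]:
            best["cost"], best["tour"] = total, tour
        return
    # optimistic bound: cost so far, plus the shortest edge out of the
    # current city and out of each city still to be visited
    bound = cost + sum(min(D[c].values()) for c in remaining | {tour[-1]})
    if bound >= best["cost"]:
        return                              # prune this subtree
    for city in remaining:
        bnb(tour + [city], cost + D[tour[-1]][city])

bnb(["A"], 0)
print(best["tour"], best["cost"])  # the optimal tour costs 46
```

The bound is valid because every edge still to be travelled leaves a distinct city among the current and unvisited ones, so the sum of those cities' shortest edges never overestimates the remaining cost.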

22 Partial solution algorithms greedy branch and bound  A*  divide and conquer  dynamic programming  constraint propegation

23 A* algorithm - improved b&b  Ultimate partial solution search  Based on tree searching algorithms you already know - BFS, DFS  Two versions: simple - used on trees; advanced - used on general graphs

24 general search algorithm for trees*
algorithm search(emptyState, fitnessFn()) returns bestState
  openList = new StateCollection()
  openList.insert(emptyState)
  bestState = null
  bestFitness = min        // assuming a maximum is wanted
  while (notEmpty(openList) && resourcesAvailable)
    state = openList.get()
    fitness = fitnessFn(state)    // partial or complete fitness
    if (state is complete && fitness > bestFitness)
      bestFitness = fitness
      bestState = state
    for all values k in domain of next variable
      nextState = state.include(k)
      openList.insert(nextState)
  return bestState
*For graphs, cycles are a problem

25 Versions of general search  Based on the openList collection class Breadth first search Depth first search (->branch and bound) Best first search (informed search)  Best first  A*
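A runnable sketch of how the open-list collection class alone selects the search version: popping from the back of a deque (LIFO) gives depth-first search, popping from the front (FIFO) gives breadth-first. The toy problem (pick 0 or 1 for each of three variables, maximizing the number of 1s) is an illustrative assumption.

```python
# General search over partial states; the pop() policy on the open list
# chooses the version: LIFO -> depth first, FIFO -> breadth first.
from collections import deque

def search(domains, fitness, pop):
    open_list = deque([()])                 # start from the empty state
    best_state, best_fitness = None, float("-inf")
    while open_list:
        state = pop(open_list)
        if len(state) == len(domains):      # complete state: evaluate it
            if fitness(state) > best_fitness:
                best_state, best_fitness = state, fitness(state)
            continue
        for value in domains[len(state)]:   # set the next variable
            open_list.append(state + (value,))
    return best_state, best_fitness

domains = [(0, 1)] * 3
dfs = search(domains, sum, lambda q: q.pop())       # LIFO: depth first
bfs = search(domains, sum, lambda q: q.popleft())   # FIFO: breadth first
print(dfs, bfs)  # both find (1, 1, 1) with fitness 3
```

A priority queue ordered by a heuristic fitness (e.g. `heapq`) would turn the same skeleton into best-first search, and with an admissible heuristic, into A*.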

26 A*  Best first search with a heuristic fitness function: Admissible (never pessimistic) Tradeoff: Simplicity vs accuracy/tightness  The heuristic heuristic: “Reduced problem” strategy e.g., min path length in TSP

27 Partial solution algorithms greedy branch and bound A*  divide and conquer  dynamic programming  constraint propagation

28 Divide and Conquer 1.subdivide problem into parts 2.solve parts individually 3.reassemble parts examples: sorting – quicksort, mergesort, radix sort ordered trees
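One of the sorting examples named above, mergesort, shows the three steps directly; this is a minimal sketch, not the slide's own code.

```python
# Divide and conquer with mergesort:
# 1. subdivide the list, 2. sort each half, 3. reassemble by merging.

def mergesort(xs):
    if len(xs) <= 1:
        return xs                          # a trivially solved part
    mid = len(xs) // 2
    left, right = mergesort(xs[:mid]), mergesort(xs[mid:])
    merged, i, j = [], 0, 0                # reassemble the sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(mergesort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```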

29 Partial solution algorithms greedy branch and bound A* divide and conquer  dynamic programming  constraint propagation

30 Dynamic programming  extreme divide-and-conquer: solve all possible subproblems  works on problems where a good solution to a small problem is always a good partial solution to a larger problem

31 Example – Knapsack problem
unlimited quantities of items 1, 2, 3, …, N are available
each item has weight w1, w2, w3, …, wN and value v1, v2, v3, …, vN
the knapsack has capacity M
most valuable combination of items to pack?
choose quantities x1, x2, x3, …, xN to maximize ∑ xj·vj subject to the constraint ∑ xj·wj ≤ M

32 Knapsack Algorithm O(NM)
 solve for all capacities up to M with the first item; repeat with the first two items, three, etc.
value[M]      // array of highest values by capacity
lastItem[M]   // last item added to achieve value (needed to determine contents of knapsack)
for j = 1 to N
  for capacity = 1 to M
    if (capacity - wj) >= 0
      if (value[capacity] < value[capacity - wj] + vj)
        value[capacity] = value[capacity - wj] + vj
        lastItem[capacity] = j

33 Example
item j:     1   2   3   4   5
weight wj:  3   4   7   8   9
value vj:   4   5  10  11  12
Knapsack capacity M = 17
try it...

34 Example (same items, M = 17)
[Tables: the value[] and lastItem[] arrays for capacities 1–17 after processing the first 1, 2, 3, 4, and all 5 items (N = 1 … 5); the final table reaches a best value of 24 at capacity 17]
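The recurrence from the algorithm slide, run on the example data, can be sketched directly in Python; the item-recovery walk via `lastItem` is the use hinted at in the slide's comment.

```python
# O(N*M) unbounded knapsack, as in the algorithm slide: process items one
# at a time, updating the best value achievable at every capacity up to M.

def knapsack(weights, values, M):
    value = [0] * (M + 1)
    last_item = [0] * (M + 1)          # last item added to achieve value
    for j, (w, v) in enumerate(zip(weights, values), start=1):
        for capacity in range(w, M + 1):
            if value[capacity] < value[capacity - w] + v:
                value[capacity] = value[capacity - w] + v
                last_item[capacity] = j
    # walk back through last_item to list the packed items
    items, cap = [], M
    while last_item[cap]:
        j = last_item[cap]
        items.append(j)
        cap -= weights[j - 1]
    return value[M], items

best, items = knapsack([3, 4, 7, 8, 9], [4, 5, 10, 11, 12], 17)
print(best, items)  # best value 24 (two of item 3 plus one of item 1)
```

Because the inner loop runs capacities in increasing order within each item's pass, an item can be packed multiple times, which matches the "unlimited quantities" version of the problem stated on the earlier slide.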

35 Partial solution algorithms greedy branch and bound A* divide and conquer dynamic programming  constraint propagation next week

