1
Artificial Intelligence
Local Search: A TSP Solution
Dae-Won Kim, School of Computer Science & Engineering, Chung-Ang University
3
Q: Why is the TSP important?
4
Why are we doing TSP-like Projects?
5
Answer: real-life problems are NP-hard problems.
Why are real-life problems difficult?
6
Search space
Constraints: hard vs. soft
Evaluation function
Environment: noisy, time, …
7
What is the size of the search space of the TSP with 20 cities?
8
Search space size = n! / (2n) = (n-1)! / 2
9
If there are 20 cities, there are on the order of 10^16 possible solutions (19!/2 ≈ 6 × 10^16)
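A quick check of this count, as a minimal Python sketch (the function name is mine):

```python
import math

def tsp_search_space_size(n: int) -> int:
    """Number of distinct tours in a symmetric TSP: n! / (2n) = (n-1)! / 2."""
    return math.factorial(n - 1) // 2

print(tsp_search_space_size(20))  # 60822550204416000, roughly 6 x 10^16
```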
10
What could be constraints for TSP?
11
e.g., cities that must or cannot be visited, or restrictions on the visiting order
12
How to design an evaluation fn?
13
Given “15 – 3 – 11 – 19 – 17” Eval-fn = dist(15,3) + dist(3,11) + … + dist(19,17)
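This evaluation function is straightforward to code. A minimal Python sketch, assuming a distance matrix `dist[a][b]`; the final edge back to the starting city is my assumption, since the slide's sum stops at dist(19,17):

```python
def tour_length(tour, dist):
    """Sum the distances between consecutive cities in the tour."""
    total = 0.0
    for a, b in zip(tour, tour[1:]):
        total += dist[a][b]
    total += dist[tour[-1]][tour[0]]  # assumed: close the tour back to the start
    return total

# Example with the tour from the slide and a hypothetical distance matrix `dist`:
# tour_length([15, 3, 11, 19, 17], dist)
```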
14
What could be environment noise or dynamic factors for TSP?
15
Problem? Model? Solution?
16
Problem → Model → Solution
17
Problem → Model_a → Solution_p (a precise solution to an approximate model)
18
Problem → Model_p → Solution_a (an approximate solution to a precise model)
19
Q: Which one of the two is better?
20
Problem → Model_p1 → Solution_a1
…
Problem → Model_pn → Solution_an
21
Feasible Solution
22
Definition: a solution that satisfies the problem-specific constraints
23
Feasible space (F) ⊆ search space (S)
24
F = S for TSP
25
The “search problem” and “optimization problem” are considered synonymous.
26
The search for the best feasible solution is the optimization problem
27
Problem for TSP
28
Given the search space S and the feasible set F ⊆ S, find x ∈ F such that eval(x) ≤ eval(y) for all y ∈ F.
29
The problem definition itself says nothing about how to search for such an x
30
Model for TSP
31
We need three factors for modeling
32
Representation
Objective function
Evaluation function
33
Representation: a permutation of the cities; it determines the search space
Objective function: a mathematical statement of the goal, e.g., minimize the sum of dist(x, y) over consecutive cities in the tour
Evaluation function: maps each tour to its corresponding total distance
34
There are many classic algorithms that are designed to search spaces for an optimum solution.
35
They fall into two disjoint classes
36
Algorithms that require the evaluation of partially constructed solutions (exhaustive search)
Algorithms that only evaluate complete solutions (local search)
37
We come up with some search terms:
“Uninformed, Informed, Exhaustive, Local, Blind, Heuristic, Incremental”
38
Exhaustive Search
Classical DFS, BFS, backtracking, branch and bound, A*, …
39
We take advantage of an opportunity to organize the search and prune the number of alternative candidates that we need to examine
40
It is often called enumerative search
41
What could be its advantages?
42
It is simple. The only requirement is to generate every possible solution systematically. There are ways to reduce the amount of work you have to do.
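For very small instances this systematic generation can be written directly. A minimal brute-force sketch, reusing the `tour_length` helper sketched earlier and fixing city 0 as the start so rotations are not counted twice:

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Enumerate every tour starting at city 0 and keep the shortest one.
    Only practical for tiny n: the loop runs (n-1)! times."""
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for rest in permutations(range(1, n)):
        tour = (0,) + rest
        length = tour_length(tour, dist)
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len
```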
43
What could be its disadvantages?
44
Some permutations might not be feasible unless the TSP graph is fully connected.
Generating all possible permutations of the cities is not practical; try it for n > 100! Will a fast branch and bound work? A smart f = g + h heuristic is required.
45
How about greedy algorithms?
46
Attack a problem by constructing the complete solution in a series of steps.
47
Amazingly simple. Assign the values for all of the decision variables one by one and at every step make the best available decision. Does not always return the optimum solution.
48
Greedy Algorithm for TSP
49
The most intuitive greedy algorithm is based on the nearest-neighbor heuristic
50
Starting from a random city, proceed to the nearest unvisited city and continue until every city has been visited, at which point return to the first city
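A minimal sketch of this nearest-neighbor construction, again assuming a distance matrix `dist` (function names are mine):

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy construction: always move to the closest unvisited city."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        current = tour[-1]
        nearest = min(unvisited, key=lambda city: dist[current][city])
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour  # tour_length() adds the closing edge back to the start
```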
51
Dynamic Programming
DP works on the principle of finding an overall solution by operating on an intermediate point that lies between where you are now and where you want to go.
Computationally intensive
All-pairs shortest path problem
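The slide keeps DP generic, but as an illustration of why DP on the TSP is computationally intensive, here is a sketch of the classic Held-Karp formulation, exact in O(n^2 · 2^n) time; this specific recurrence is my addition, not from the slide:

```python
from functools import lru_cache

def held_karp(dist):
    """Held-Karp DP for TSP: exact, but time and memory explode beyond ~20 cities."""
    n = len(dist)

    @lru_cache(maxsize=None)
    def best(visited_mask, last):
        # Shortest path that starts at city 0, visits exactly the cities in
        # visited_mask (which always includes city 0), and ends at `last`.
        if visited_mask == (1 << last) | 1:
            return dist[0][last]
        prev_mask = visited_mask & ~(1 << last)
        return min(best(prev_mask, k) + dist[k][last]
                   for k in range(1, n)
                   if prev_mask & (1 << k))

    full = (1 << n) - 1
    return min(best(full, last) + dist[last][0] for last in range(1, n))
```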
52
Branch and Bound
Idea: if we already have a solution with cost c, and we know that the next candidate to try has a lower bound greater than c, then, since we are minimizing, we don't have to compute how bad it actually is.
Q: Branch and Bound vs. A*
53
What is local search?
54
In many optimization problems, the path (sequence information) is irrelevant; the goal state itself is the solution
55
e.g., The 0/1 Knapsack Problem
56
State space = set of “complete” solutions
Find an optimal solution satisfying the constraints
Keep a single “current” solution and try to improve it (iterative improvement search)
57
What is the benefit? e.g., Hill-climbing, SA, GA, …
58
Local searches focus our attention within a local neighborhood of some particular solution.
59
Procedure of Local Search
60
1. Pick a solution from the search space and evaluate its merit. Define this as the current solution.
2. Apply a transformation to the current solution to generate a new solution and evaluate its merit.
3. If the new solution is better than the current solution, exchange it with the current solution; otherwise discard the new solution.
4. Repeat steps 2 and 3 until no transformation in the given set improves the current solution.
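A minimal sketch of this iterative-improvement loop in Python; `neighbors` and `evaluate` are placeholders for whatever transformation and evaluation function the model defines:

```python
def local_search(initial, neighbors, evaluate):
    """Iterative improvement: keep one current solution, accept only improving moves."""
    current = initial
    current_score = evaluate(current)
    improved = True
    while improved:
        improved = False
        for candidate in neighbors(current):
            score = evaluate(candidate)
            if score < current_score:        # minimizing, e.g. tour length
                current, current_score = candidate, score
                improved = True
                break                        # continue from the new current solution
    return current, current_score
```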
61
What are the key issues of local search?
62
Type of transformation applied to the current solution
63
Type of transformation: TSP
64
Start with any complete tour and perform pair-wise exchanges. The simplest is called 2-opt (2-interchange)
65
The neighborhood is defined as the set of all tours that can be reached by changing two nonadjacent edges.
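A minimal sketch of the 2-opt move and its neighborhood, with the tour represented as a list of city indices (function names are mine):

```python
def two_opt_move(tour, i, j):
    """Remove the edge entering tour[i] and the edge leaving tour[j],
    then reconnect the tour by reversing the segment tour[i..j]."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def two_opt_neighbors(tour):
    """All tours reachable from `tour` by one 2-opt exchange."""
    n = len(tour)
    for i in range(1, n - 1):
        for j in range(i + 1, n):
            yield two_opt_move(tour, i, j)
```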
66
You should design the best local transformation for TSP
67
Local search for N-Queens Problem
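The slide illustrates this with a board example; as a sketch only, here is a simple steepest-descent hill climb for N-Queens with one queen per column (all names and details are my own):

```python
import random

def conflicts(rows):
    """Number of attacking queen pairs (same row or same diagonal)."""
    n = len(rows)
    return sum(1 for c1 in range(n) for c2 in range(c1 + 1, n)
               if rows[c1] == rows[c2] or abs(rows[c1] - rows[c2]) == c2 - c1)

def hill_climb_queens(n):
    """Repeatedly move one queen within its column to the best neighboring state."""
    rows = [random.randrange(n) for _ in range(n)]
    while True:
        current = conflicts(rows)
        if current == 0:
            return rows                      # solution found
        best_move, best_score = None, current
        for col in range(n):
            original = rows[col]
            for row in range(n):
                if row == original:
                    continue
                rows[col] = row
                score = conflicts(rows)
                if score < best_score:
                    best_move, best_score = (col, row), score
            rows[col] = original
        if best_move is None:
            return rows                      # stuck in a local optimum
        rows[best_move[0]] = best_move[1]
```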
69
Local Search: Hill-Climbing
71
It is often called gradient descent
72
We may end up accepting a locally optimal solution
73
What is a local optimum? Examples?
74
Like climbing Everest in thick fog with amnesia
76
How to avoid local optima?
77
Random restarts may work.
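A minimal random-restart wrapper around the `local_search` loop sketched earlier; `random_solution` is a placeholder for a generator of random starting tours:

```python
def random_restart_search(random_solution, neighbors, evaluate, restarts=20):
    """Run local search from several random starting points and keep the best result."""
    best, best_score = None, float("inf")
    for _ in range(restarts):
        solution, score = local_search(random_solution(), neighbors, evaluate)
        if score < best_score:
            best, best_score = solution, score
    return best, best_score
```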