MAE 552 – Heuristic Optimization Lecture 4 January 30, 2002.

1 MAE 552 – Heuristic Optimization Lecture 4 January 30, 2002

2 http://coool.mines.edu/report/node3.html

3 Basics of Problem Solving – Evaluation Function For every real-world problem, the evaluation function is chosen by the designer. At a minimum it should indicate that a solution that meets the objective is better than one that does not. It should also take into account factors such as the computational complexity of the problem. Often the objective function suggests a good evaluation function. Objective: minimize stress → Evaluation function: stress.

4 Basics of Problem Solving – Evaluation Function Other times you cannot derive a useful evaluation function from the objective. In the SAT problem, the objective is to find an assignment of Boolean (TRUE/FALSE) variables that satisfies a logical statement (makes it TRUE). Every wrong candidate solution returns FALSE, which tells you nothing about how to improve the solution.
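A common workaround (as in MAX-SAT) is to count satisfied clauses instead of returning a single TRUE/FALSE, so that partially correct assignments score higher than worse ones. A minimal sketch, assuming clauses in conjunctive normal form with signed-integer literals (the function name and encoding are illustrative, not from the slides):

```python
def count_satisfied(clauses, assignment):
    """Graded evaluation function for SAT: count the satisfied clauses.

    A literal is an int: +i means variable i is TRUE, -i means NOT variable i.
    `assignment` maps variable index -> bool.
    """
    satisfied = 0
    for clause in clauses:
        # A clause is satisfied if any of its literals is TRUE.
        if any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            satisfied += 1
    return satisfied

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
print(count_satisfied(clauses, {1: True, 2: True, 3: False}))   # 3 (all satisfied)
print(count_satisfied(clauses, {1: False, 2: False, 3: False})) # 2 (one violated)
```

Unlike the raw TRUE/FALSE objective, this count tells a search method which of two wrong assignments is closer to a solution.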

5 Basics of Problem Solving – Defining a Search Problem When you design an evaluation function, remember that for many problems the only solutions of interest are the subset that are feasible (satisfy the constraints). The feasible space F is defined so that F ⊆ S. A search problem can then be stated as: given a search space S and its feasible part F ⊆ S, find x ∈ F such that eval(x) ≤ eval(y) for all y ∈ F. (This is the definition of a global optimum.) Note that the objective does not appear at all in this formulation! If your evaluation function does not correspond to the objective, you will be searching for the answer to the wrong problem.

6 Basics of Problem Solving – Defining a Search Problem A point x that satisfies this condition is called a global solution. Finding a global solution can be difficult, and in some cases it is impossible to prove that one has been found. The search would be easier if we could limit it to a smaller region of S. This observation underlies many search techniques.

7 Basics of Problem Solving – Neighborhood Search If we concentrate on the region of S 'near' some point in the search space, we can search this 'neighborhood' more easily. The neighborhood N(x) of a point x is the set of all points in the search space that are 'close' to x: N(x) = {y ∈ S : dist(x, y) ≤ ε}
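The definition of N(x) can be sketched directly in code. A minimal example using Euclidean distance (the function name and ε value are illustrative):

```python
import math

def in_neighborhood(x, y, eps):
    """y is in N(x) iff the Euclidean distance dist(x, y) <= eps."""
    dist = math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))
    return dist <= eps

print(in_neighborhood((0, 0), (3, 4), eps=5))    # True  (distance is exactly 5)
print(in_neighborhood((0, 0), (3, 4), eps=4.9))  # False
```

Any distance measure appropriate to the search space can be substituted for `dist`; the next slide uses a swap-based distance for the TSP.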

8 Basics of Problem Solving – Neighborhood Search For a continuous NLP, the Euclidean distance can be used to define a neighborhood. For the TSP, a 2-swap neighborhood can be defined as all of the candidate tours that result from swapping two cities in a given tour. A solution x (a permutation of n = 5 cities), 1-2-3-4-5, has n(n−1)/2 neighbors, including 1-3-2-4-5 (swapping cities 2 and 3), 5-2-3-4-1 (swapping cities 1 and 5), etc.
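The 2-swap neighborhood above can be enumerated in a few lines. A sketch (the function name is illustrative):

```python
from itertools import combinations

def two_swap_neighbors(tour):
    """All tours obtained by swapping the positions of two cities in `tour`."""
    neighbors = []
    for i, j in combinations(range(len(tour)), 2):
        neighbor = list(tour)
        neighbor[i], neighbor[j] = neighbor[j], neighbor[i]  # swap one pair
        neighbors.append(neighbor)
    return neighbors

tour = [1, 2, 3, 4, 5]
neighbors = two_swap_neighbors(tour)
print(len(neighbors))                 # 10 = n(n-1)/2 for n = 5
print([1, 3, 2, 4, 5] in neighbors)   # True (cities 2 and 3 swapped)
print([5, 2, 3, 4, 1] in neighbors)   # True (cities 1 and 5 swapped)
```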

9 Basics of Problem Solving – Neighborhood Search Example: a quadratic objective with no constraints, F = x² + 3. [Figure: plot of F with current point x_c]

10 Basics of Problem Solving – Neighborhood Search The problem is to minimize F = x² + 3. Step 1: Define a neighborhood around the current point x_c: N(x_c): x_c − ε ≤ x ≤ x_c + ε. [Figure: neighborhood around x_c]

11 Basics of Problem Solving – Neighborhood Search Step 2: Sample a candidate solution x_1 from the neighborhood and evaluate it. If F(x_1) > F(x_c), reject the point and choose another. [Figure: rejected candidate x_1]

12 Basics of Problem Solving – Neighborhood Search If F(x_1) < F(x_c), accept the point and replace the current point x_c with x_1. [Figure: accepted candidate x_1]

13 Basics of Problem Solving – Neighborhood Search Step 3: Create a new neighborhood around the new x_c and repeat the process. [Figure: new neighborhood around the accepted point]
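Steps 1–3 can be sketched as a short loop on the slides' example F = x² + 3. This is a minimal sketch, not a complete algorithm; the function name, ε, iteration count, and seed are illustrative choices:

```python
import random

def neighborhood_search(f, x0, eps=0.5, iterations=1000, seed=0):
    """Minimize f by repeatedly sampling one candidate from the neighborhood
    N(x_c) = [x_c - eps, x_c + eps] and accepting it only if it improves
    on the current point (Steps 1-3 from the slides)."""
    rng = random.Random(seed)
    x_c = x0
    for _ in range(iterations):
        x_1 = rng.uniform(x_c - eps, x_c + eps)  # Step 2: sample from N(x_c)
        if f(x_1) < f(x_c):                      # accept improvements only
            x_c = x_1                            # Step 3: recenter and repeat
    return x_c

f = lambda x: x**2 + 3
x_best = neighborhood_search(f, x0=4.0)
print(round(f(x_best), 2))  # close to the minimum value 3
```

For this convex bowl the loop walks steadily toward x = 0; the next slides explain why the same procedure can stall on surfaces with many hills and valleys.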

14 Basics of Problem Solving – Neighborhood Search Most realistic problems are considerably more difficult than a quadratic bowl. The evaluation function defines a response surface that describes the topography of the search space, with many hills and valleys.

15 Basics of problem Solving-Neighborhood Search Finding the best peak or the lowest valley is like trying to navigate a mountain range in the dark with only a small lamp. Your decisions must be made using local information. You can sample points in a local area and then decide where to walk next.

16 Basics of problem Solving-Neighborhood Search If you decide to always go uphill then you will reach a peak but not necessarily the highest peak. You may need to walk downhill in order to eventually reach the highest peak in the space.

17 Basics of Problem Solving – Local Optima With the notion of a neighborhood we can define the idea of a local optimum. A potential solution x ∈ F is a local optimum if and only if: eval(x) ≤ eval(y) for all y ∈ N(x). If N(x) is small, it is relatively easy to search it for the best solution, but it is also easy to get trapped in a local minimum. If N(x) is large, the visibility of the design space increases and the chances of finding the global optimum increase, but large neighborhoods also lead to more computational expense.

18 Basics of Problem Solving – Local Optima With a small neighborhood, only a local optimum is found. [Figure: small N(x) trapped at a local optimum]

19 Basics of Problem Solving With a large neighborhood, the global optimum is more likely to be found, but at high computational expense. The size of the neighborhood should fit the problem! [Figure: large N(x) covering the global optimum]

20 Formal Implementation of Neighborhood Search – Hill Climbing Methods Basic hill climbing methods use the concept of a neighborhood search and iterative improvement to find local optima. During each iteration, the best solution is selected from the neighborhood N(x) and used to replace the current solution. If there are no better solutions in N(x), a local optimum has been reached, and a new design point is selected at random to start the next iteration. Hill climbing methods are VERY dependent on the starting point of the algorithm and the size of the neighborhood. Always go uphill (or downhill, in the case of minimization).

21 Hill Climbing Procedure

begin
  t = 0
  best = 0
  repeat
    local = FALSE
    select a current point v_c at random
    evaluate v_c
    repeat
      select all points in the neighborhood of v_c
      select the point v_n with the best value of the evaluation function eval
      if eval(v_n) is better than eval(v_c)
        then v_c = v_n
        else local = TRUE
    until local
    t = t + 1
    if v_c is better than best then best = v_c
  until t = MAX_ITERATIONS
end
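The pseudocode above can be turned into a short, generic routine. A sketch, assuming a maximization problem and taking the evaluation, random-start, and neighborhood functions as parameters (all names are illustrative):

```python
import random

def hill_climb(evaluate, random_point, neighbors, max_iterations=20, seed=0):
    """Hill climbing with random restarts, following the slide's pseudocode.

    `evaluate` scores a point (higher is better), `random_point` draws a random
    starting point, and `neighbors` lists all points in N(v_c).
    """
    rng = random.Random(seed)
    best = None
    for _ in range(max_iterations):
        v_c = random_point(rng)                      # random restart
        while True:
            v_n = max(neighbors(v_c), key=evaluate)  # best point in N(v_c)
            if evaluate(v_n) > evaluate(v_c):
                v_c = v_n                            # keep climbing
            else:
                break                                # local optimum reached
        if best is None or evaluate(v_c) > evaluate(best):
            best = v_c                               # remember the best restart
    return best

# Toy problem: maximize f(x) = -(x - 7)^2 over the integers 0..20,
# with N(x) = {x - 1, x + 1} clipped to the range.
best = hill_climb(
    evaluate=lambda x: -(x - 7) ** 2,
    random_point=lambda rng: rng.randint(0, 20),
    neighbors=lambda x: [max(x - 1, 0), min(x + 1, 20)],
    max_iterations=5,
)
print(best)  # 7
```

Because this toy surface has a single peak, every restart climbs to x = 7; on a multimodal surface, different restarts would reach different local optima, which is why the procedure keeps the best one seen.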

22 Disadvantages of Hill Climbers They usually terminate at solutions that are only locally optimal. There is no information about how far the discovered local optimum deviates from the global optimum or from other local optima. The optimum that is found depends on the initial configuration. In general, it is not possible to provide an upper bound on the computation time.

23 Advantages of Hill Climbers They are very easy to apply!

24 Balancing Local and Global Search Effective search techniques balance exploitation and exploration. Exploitation is the process of using the best solution found so far as a jumping-off point for finding an improved solution. Exploration is the process of probing new areas of the search space. Hill climbing methods exploit effectively by building on the current best point, but they can neglect a large portion of the search space.

25 Pure Random Search Pure random search uses all exploration and no exploitation. It explores the space thoroughly but forgoes exploiting promising areas of the design space. [Figure: random samples of F(x) scattered across the range of x]

26 Random Search Procedure

begin
  t = 0
  select an initial point v_0 at random
  evaluate v_0
  best_x = v_0
  best_f = eval(v_0)
  repeat
    select a point v_c at random
    t = t + 1
    evaluate v_c
    if eval(v_c) is better than best_f
      then best_f = eval(v_c); best_x = v_c
  until t = MAX_ITERATIONS
end
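This procedure can likewise be sketched in a few lines. A minimal version for minimization, reusing the quadratic F = x² + 3 from the earlier slides (the function name, sampling range, and iteration count are illustrative):

```python
import random

def random_search(evaluate, sample, max_iterations=1000, seed=0):
    """Pure random search: sample points independently, keep the best seen."""
    rng = random.Random(seed)
    best_x = sample(rng)
    best_f = evaluate(best_x)
    for _ in range(max_iterations):
        v_c = sample(rng)
        f_c = evaluate(v_c)
        if f_c < best_f:            # minimization: smaller is better
            best_f, best_x = f_c, v_c
    return best_x, best_f

best_x, best_f = random_search(
    evaluate=lambda x: x**2 + 3,
    sample=lambda rng: rng.uniform(-10, 10),
)
print(round(best_f, 2))  # close to the global minimum value 3
```

Note that each sample is drawn from the whole range, never from a neighborhood of the best point: pure exploration, with no exploitation of what has already been learned.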

