Presentation on theme: "School of Computing Clemson University Fall, 2012"— Presentation transcript:

1 Lecture 14: Iterative Refinement, Physically- and Biologically-Inspired Algorithms
CpSc 212: Algorithms and Data Structures
Brian C. Dean, School of Computing, Clemson University, Fall 2012

2 Physical Computation
Many different physical processes in nature can be made to perform efficient computation…
Ball and string models for shortest paths (gravity)
Linkages; Kempe’s universality theorem
Diffraction gratings and Fourier transforms
Chemical computing (e.g., finding Hamiltonian paths using DNA binding)
References: L.M. Adleman, “Molecular computation of solutions to combinatorial problems,” Science 266 (1994); Hecht, Optics, 3rd edition.

3 Biological Computation
Many different processes in biology are quite good at performing computation or optimization…
Evolution (Spencer/Darwin: “survival of the fittest”)
Finding shortest paths with slime molds
Bees solving the traveling salesman problem
And of course, your brain…

4 Physically- and Biologically-Inspired Computing Paradigms
Simulated Annealing
Genetic/Evolutionary Algorithms
Artificial Immune Systems
Swarm Intelligence
Ant Colony Optimization
Neural Networks

5 Iterative Refinement
Simple yet powerful idea:
Start with some arbitrary feasible solution (or better yet, a solution obtained via some other heuristic).
As long as we can improve it, keep making it better.
Simple example: bubble sort (a sketch follows below).
Getting stuck in local minima can be a problem.
Many approaches refine a population of solutions rather than a single solution.
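As a concrete version of the bubble sort example, here is a minimal Python sketch (my own illustration, not code from the lecture): each pass is one refinement step that fixes adjacent out-of-order pairs, and we stop once a full pass makes no improvement.

```python
def bubble_sort(a):
    """Iterative refinement view of bubble sort: each pass improves the
    current arrangement by swapping adjacent out-of-order elements, and
    we stop once a full pass makes no improvement."""
    a = list(a)                      # work on a copy
    improved = True
    while improved:
        improved = False
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:      # local "defect" found
                a[i], a[i + 1] = a[i + 1], a[i]
                improved = True
    return a

print(bubble_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```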

6 Iterative Refinement in Continuous Optimization
To maximize or minimize a function, follow the derivative (the gradient, in higher dimensions). Be careful with the step size.
Solving and optimizing are not much different: for example, to solve f(x) = 0, minimize |f(x)|² (see the sketch below).
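A minimal one-dimensional sketch in Python (the target equation x² − 2 = 0 and the step size are illustrative choices, not from the slides): to solve f(x) = 0 we minimize g(x) = |f(x)|² by repeatedly stepping against its derivative.

```python
def solve_by_descent(f, df, x0, step=0.1, iters=200):
    """Minimize g(x) = f(x)**2 by following its derivative
    g'(x) = 2 * f(x) * f'(x).  Step size matters: too large
    diverges, too small converges slowly."""
    x = x0
    for _ in range(iters):
        grad = 2.0 * f(x) * df(x)
        x -= step * grad
    return x

# Illustrative example: solve x**2 - 2 = 0, i.e. find sqrt(2).
root = solve_by_descent(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)   # approximately 1.4142
```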

7 Iterative Refinement in Continuous Optimization
If second derivatives are available, we can use Newton’s method, which locally approximates the function as a quadratic.
Typically converges in fewer steps.
Each step is more computationally intensive, though. (A sketch follows below.)
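A matching one-dimensional Newton sketch (again my illustration; the example function is assumed, not from the slides): each step jumps to the minimizer of the local quadratic model built from the first and second derivatives.

```python
def newton_minimize(df, d2f, x0, iters=20):
    """Newton's method for 1-D minimization: locally approximate the
    function by a quadratic (using its second derivative) and jump to
    that quadratic's minimum.  Fewer iterations than gradient descent,
    but each step needs the second derivative."""
    x = x0
    for _ in range(iters):
        x -= df(x) / d2f(x)          # minimizer of the local quadratic model
    return x

# Illustrative example: minimize g(x) = (x**2 - 2)**2, whose minimum is
# at x = sqrt(2).  Here g'(x) = 4x**3 - 8x and g''(x) = 12x**2 - 8.
print(newton_minimize(lambda x: 4*x**3 - 8*x, lambda x: 12*x**2 - 8, x0=1.0))
```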

8 Mean Shift Clustering: Finding Local Maxima in an Implicit Density Function
Dislodge each point one by one, and simply “turn on gravity”. Cluster centers are “basins of attraction”.
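A rough Python sketch of the “turn on gravity” step (the Gaussian kernel, bandwidth, and toy data are my assumptions): every point repeatedly moves to the kernel-weighted mean of the data, sliding into the basin of attraction of a local density maximum.

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50):
    """Move each point toward the kernel-weighted mean of the data,
    i.e. let it slide into the nearest basin of attraction of the
    implicit density.  Returns the resting position of every point."""
    points = np.asarray(points, dtype=float)
    shifted = points.copy()
    for _ in range(iters):
        for i, x in enumerate(shifted):
            d2 = np.sum((points - x) ** 2, axis=1)
            w = np.exp(-d2 / (2 * bandwidth ** 2))   # Gaussian kernel weights
            shifted[i] = w @ points / w.sum()        # weighted mean = one shift step
    return shifted

# Two obvious blobs; their points drift to two distinct resting positions.
data = [[0, 0], [0.2, 0.1], [-0.1, 0.2], [5, 5], [5.1, 4.9], [4.8, 5.2]]
print(np.round(mean_shift(data, bandwidth=0.8), 2))
```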


10 Jacobi and Gauss-Seidel Relaxation For Solving Linear Systems
Consider solving an N x N linear system Ax = b. This normally takes O(N^3) time with Gaussian elimination, although in theory it can be done a bit faster…
A much faster solution in practice uses iterative refinement:
Guess an initial solution x = (x1, …, xN).
Solve equation 1 for x1 to get y1. Solve equation 2 for x2 to get y2. Etc…
Then take y = (y1, …, yN) as your next guess. (A sketch follows below.)
Best known lower bound: Ω(n^2) (trivial)
Strassen 1969: O(n^2.81)
Coppersmith and Winograd 1990: O(n^2.376) (However, it’s quite complicated and not very practical)
Stothers 2011: O(n^2.374)
Williams 2012: O(n^2.373) (Frighteningly complicated…)
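A short NumPy sketch of both relaxations (the 3 x 3 diagonally dominant test system is an illustrative assumption; convergence is not guaranteed for arbitrary A). As described above, computing every y_i from the old guess and only then adopting y is Jacobi relaxation; Gauss-Seidel differs only in plugging each newly computed value in immediately.

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Jacobi relaxation: solve equation i for x_i using the *old* guess
    everywhere else, then adopt the whole new vector y as the next guess."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.zeros_like(b)
    D = np.diag(A)
    for _ in range(iters):
        # y_i = (b_i - sum_{j != i} A_ij * x_j) / A_ii
        x = (b - (A @ x - D * x)) / D
    return x

def gauss_seidel(A, b, iters=100):
    """Gauss-Seidel: same idea, but each newly computed x_i is used
    immediately when solving the later equations."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# Illustrative diagonally dominant system (convergence is guaranteed here).
A = [[4, 1, 0], [1, 4, 1], [0, 1, 4]]
b = [5, 6, 5]
print(jacobi(A, b), gauss_seidel(A, b))   # both approach the solution (1, 1, 1)
```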

11 Neighborhood Search
Often we don’t have a continuous function whose derivative tells us which way to go…
Start with some arbitrary non-optimal solution x. Repeatedly:
Search a small “neighborhood” around x for a better solution.
If we find one, set x to the best such solution. (Do we move right away, or wait until we’ve searched all neighbors, though?)
To mitigate local minima, we again usually run many times from several starting points x. (A skeleton appears below.)
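A generic skeleton of this loop in Python (the callback names `initial`, `neighbors`, and `cost`, and the toy usage, are placeholders I introduce for illustration). This version waits until the whole neighborhood has been searched and then takes the best improving move; moving at the first improvement is the other common choice.

```python
import random

def local_search(initial, neighbors, cost, restarts=10):
    """Generic neighborhood search with random restarts.
    `initial()` produces a starting solution, `neighbors(x)` yields the
    neighborhood of x, and `cost(x)` is to be minimized.  This version
    scans the whole neighborhood and takes the best improving move."""
    best = None
    for _ in range(restarts):                 # several starts to dodge local minima
        x = initial()
        while True:
            candidates = [y for y in neighbors(x) if cost(y) < cost(x)]
            if not candidates:
                break                         # x is locally optimal
            x = min(candidates, key=cost)
        if best is None or cost(x) < cost(best):
            best = x
    return best

# Toy usage (illustrative): minimize (x - 7)**2 over the integers 0..100,
# with neighborhood {x - 1, x + 1}.
print(local_search(
    initial=lambda: random.randint(0, 100),
    neighbors=lambda x: [max(x - 1, 0), min(x + 1, 100)],
    cost=lambda x: (x - 7) ** 2,
))   # 7
```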

12 Local Search – Neighborhood Examples
Traveling salesman
Rank aggregation
Max independent set
Balanced clustering
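As one concrete neighborhood, the sketch below enumerates 2-opt moves for the traveling salesman problem (reverse one contiguous segment of the tour, i.e., remove two edges and reconnect). 2-opt is a standard choice but is not named on the slide, and the tiny distance matrix is an illustrative assumption.

```python
def two_opt_neighbors(tour):
    """2-opt neighborhood for TSP: every tour obtained from `tour` by
    reversing one contiguous segment (i.e. removing two edges and
    reconnecting the pieces the other way)."""
    n = len(tour)
    for i in range(n - 1):
        for j in range(i + 2, n):
            yield tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

def tour_length(tour, dist):
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

# Illustrative 4-city instance; one 2-opt move already fixes the "crossing".
dist = [[0, 1, 5, 1],
        [1, 0, 1, 5],
        [5, 1, 0, 1],
        [1, 5, 1, 0]]
tour = [0, 2, 1, 3]                       # length 5 + 1 + 5 + 1 = 12
best = min(two_opt_neighbors(tour), key=lambda t: tour_length(t, dist))
print(best, tour_length(best, dist))      # [0, 1, 2, 3] with length 4
```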

13 Another Prominent Example: Clustering
Goal: partition n points p1, …, pn in d-dimensional space into k clusters, and for each cluster determine an appropriate “center” point.
Popular objectives to minimize:
k-means: the sum of squared distances from each point to its nearest center.
k-medians: the sum of distances from each point to its nearest center.
k-center: the maximum distance from any point to its nearest center.
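For concreteness, a small sketch of how those three objectives would be evaluated for a given set of centers (the helper functions and toy data are mine, not from the slides).

```python
import math

def nearest_center_dists(points, centers):
    """Distance from each point to its closest center."""
    return [min(math.dist(p, c) for c in centers) for p in points]

def k_means_cost(points, centers):
    return sum(d * d for d in nearest_center_dists(points, centers))

def k_medians_cost(points, centers):
    return sum(nearest_center_dists(points, centers))

def k_center_cost(points, centers):
    return max(nearest_center_dists(points, centers))

pts = [(0, 0), (2, 0), (10, 0)]
ctrs = [(0, 0), (10, 0)]
print(k_means_cost(pts, ctrs), k_medians_cost(pts, ctrs), k_center_cost(pts, ctrs))
# 4.0 2.0 2.0
```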

14 Iterative Refinement Heuristics for Center-Based Clustering
It is NP-hard to find an optimal clustering according to any of the 3 previous objectives. So we typically look for a reasonable solution using the following popular heuristic (for k-means, known as “Lloyd’s heuristic” or simply “the k-means algorithm”): alternate between the two halves of the solution, recomputing the cluster centers from the current partition of points into k sets, and the partition from the current centers. (A sketch follows below.)
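A compact sketch of Lloyd’s heuristic for the k-means objective (the random initialization, fixed iteration count, and toy data are illustrative choices): alternate between assigning each point to its nearest center and moving each center to the mean of its assigned points.

```python
import random
import numpy as np

def lloyd(points, k, iters=100, seed=0):
    """Lloyd's heuristic ("the k-means algorithm"): alternate between
    (1) assigning every point to its nearest center and
    (2) moving every center to the mean of its assigned points."""
    rng = random.Random(seed)
    points = np.asarray(points, dtype=float)
    centers = points[rng.sample(range(len(points)), k)].copy()
    for _ in range(iters):
        # Step 1: assign each point to the index of its closest center.
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Step 2: recompute each center as the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

pts = [[0, 0], [0, 1], [1, 0], [9, 9], [9, 10], [10, 9]]
centers, labels = lloyd(pts, k=2)
print(np.round(centers, 2), labels)   # expect one center near each blob
```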

15 Tabu Search
Like neighborhood search, except that you are allowed to move to worse solutions. However, we keep a “tabu list” of places we have recently visited, to prevent us from immediately turning around and falling back into the same locally optimal solution.
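A bare-bones sketch of the idea, reusing the same kind of cost/neighborhood callbacks as the local search skeleton above (the tabu-list length, iteration cap, and toy objective are my assumptions): always move to the best non-tabu neighbor, even if it is worse, and refuse to revisit recent solutions.

```python
from collections import deque

def tabu_search(start, neighbors, cost, tabu_size=10, iters=100):
    """Tabu search sketch: always move to the best neighbor that is not
    on the tabu list, even if it is worse than the current solution,
    and remember the last `tabu_size` solutions to avoid cycling back."""
    current = start
    best = start
    tabu = deque([start], maxlen=tabu_size)   # recently visited solutions
    for _ in range(iters):
        candidates = [y for y in neighbors(current) if y not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # may be worse than before
        tabu.append(current)
        if cost(current) < cost(best):
            best = current
    return best

# Toy usage (illustrative): minimize a bumpy 1-D function over the integers.
f = lambda x: (x - 3) ** 2 + 4 * (x % 5 == 0)   # small bumps at multiples of 5
print(tabu_search(20, lambda x: [x - 1, x + 1], f))   # 3
```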

16 Simulated Annealing
Like neighborhood search, except that you are allowed to move to worse solutions (with the goal of escaping local minima).
The probability of moving to a worse solution depends on how much worse it is, and this probability also decreases over time according to a “cooling” schedule.
Think of this as a “random walk” through different solutions, one that is much more likely to move in an improving direction.
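A short sketch using the common Metropolis-style acceptance rule exp(-delta/T) with geometric cooling (that specific rule, the temperature schedule, and the toy objective are my assumptions; the slide only says the acceptance probability depends on how much worse the move is and on a cooling schedule).

```python
import math
import random

def simulated_annealing(start, neighbor, cost, t0=10.0, cooling=0.995, iters=5000):
    """Simulated annealing sketch: a random walk over solutions that
    always accepts improvements and accepts a worse neighbor with
    probability exp(-delta / T), where T shrinks over time according
    to a geometric "cooling" schedule."""
    current, best = start, start
    t = t0
    for _ in range(iters):
        cand = neighbor(current)
        delta = cost(cand) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
        if cost(current) < cost(best):
            best = current
        t *= cooling                      # cool down: worse moves get rarer
    return best

# Toy usage (illustrative): a 1-D function with many local minima.
f = lambda x: (x - 3) ** 2 + 3 * math.sin(5 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(10.0, step, f))
```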

17 Genetic Algorithms
Start with a population of initial solutions. Repeatedly:
Kill off some number of the least fit solutions (probability of survival proportional to “fitness”).
Replace them with new solutions obtained by “mating” pairs of surviving solutions, or by mutating single solutions.
Getting all the parameters tuned just right can take some work, but these are quite effective for a wide range of problems… (A sketch follows below.)
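A small sketch of this loop for bit-string solutions (the fitness function, population size, one-point crossover, and mutation rate are all illustrative assumptions): keep the fitter half of the population, then refill it by crossover of surviving parents plus a little random mutation.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100,
                      mutation_rate=0.02, seed=0):
    """GA sketch for bit strings: keep the fitter half of the population,
    then refill it by one-point crossover of surviving parents plus a
    little random bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # fittest first
        survivors = pop[:pop_size // 2]            # "survival of the fittest"
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)         # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy usage (illustrative): "one-max", maximize the number of 1 bits.
print(genetic_algorithm(fitness=sum))
```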

18 Crossover Example Sardine Cookies

19 Crossover Example: Sardine Cookies, Liver and Chocolate Chips

20 Particle Swarm Optimization
Particle Swarm Optimization methods also maintain a population of solutions, but:
No crossover.
Solutions move according to velocities that are constantly (and randomly) adjusted based on gravitational pull towards good solutions found so far.
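A brief sketch of the usual velocity-update rule (the inertia and attraction coefficients are common textbook-style defaults, not values from the slides): each particle’s velocity is repeatedly nudged, with random weights, toward its own best position so far and toward the swarm’s best position so far.

```python
import random

def particle_swarm(cost, dim=2, n_particles=20, iters=200,
                   w=0.7, c1=1.5, c2=1.5, seed=0):
    """PSO sketch: no crossover; each particle keeps a velocity that is
    repeatedly (and randomly) pulled toward its own best position and
    toward the best position found by the whole swarm."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                       # each particle's best so far
    gbest = min(pbest, key=cost)                    # swarm's best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if cost(X[i]) < cost(pbest[i]):
                pbest[i] = X[i][:]
                if cost(X[i]) < cost(gbest):
                    gbest = X[i][:]
    return gbest

# Toy usage (illustrative): minimize the sphere function.
print(particle_swarm(lambda p: sum(x * x for x in p)))
```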

21 “Amoeba Search”
How can you use a GA-like approach to minimize a function f(x) over x in [0, 1]? In higher dimensions, this leads to an idea known as downhill simplex search, the Nelder-Mead algorithm, or “amoeba” search. Advantage: no derivative computation is necessary!
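If SciPy is available, Nelder-Mead can be used directly; the two-variable objective below is an illustrative stand-in (unconstrained rather than restricted to [0, 1]), and no derivatives are supplied anywhere.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative objective: smooth but with no derivatives supplied or needed.
f = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 1.0) ** 2 + 0.1 * np.sin(3 * x[0])

# Downhill simplex ("amoeba") search via SciPy's Nelder-Mead implementation.
result = minimize(f, x0=[0.0, 0.0], method="Nelder-Mead")
print(result.x, result.fun)    # approximate minimizer and its value
```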

