IE 607 Heuristic Optimization: Introduction to Optimization

1 IE 607 Heuristic Optimization Introduction to Optimization

2 Objective Function: Max (Min) some function of the decision variables
Subject to (s.t.): equality (=) constraints and inequality (≤, ≥) constraints
Search Space: the range or set of values of the decision variables that will be searched during optimization. Often of calculable size in combinatorial optimization.

3 A solution to an optimization problem specifies the values of the decision variables, and therefore also the value of the objective function. A feasible solution satisfies all constraints. An optimal solution is feasible and provides the best objective function value. A near-optimal solution is feasible and provides an objective function value close to the best, though not necessarily the best. Types of Solutions

4 Optimization problems can be continuous (an infinite number of feasible solutions) or combinatorial (a finite number of feasible solutions). Continuous problems generally maximize or minimize a function of continuous variables, such as min 4x + 5y where x and y are real numbers. Combinatorial problems generally maximize or minimize a function of discrete variables, such as min 4x + 5y where x and y are restricted to countable values (e.g. integers only). Continuous vs Combinatorial
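Because the combinatorial case has finitely many feasible solutions, it can in principle be solved by enumeration. A minimal Python sketch of min 4x + 5y, assuming a hypothetical finite domain x, y ∈ {0, …, 3} and a made-up constraint x + y ≥ 2:

```python
from itertools import product

def solve_combinatorial():
    """Enumerate every feasible (x, y) pair and keep the best one."""
    best = None
    for x, y in product(range(4), range(4)):  # finite search space: 16 points
        if x + y >= 2:                        # feasibility check (hypothetical constraint)
            value = 4 * x + 5 * y             # objective function
            if best is None or value < best[0]:
                best = (value, x, y)
    return best

print(solve_combinatorial())  # -> (8, 2, 0)
```

Enumeration is exact but only works when the search space is small; the rest of the course is about what to do when it is not.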

5 Combinatorial optimization is the mathematical study of finding an optimal arrangement, grouping, ordering, or selection of discrete objects usually finite in number. - Lawler, 1976 In practice, combinatorial problems are often more difficult because there is no derivative information and the solution surfaces are not smooth. Definition of Combinatorial Optimization

6 Constraints can be hard (must be satisfied) or soft (desirable to satisfy). Example: in your course schedule, a hard constraint is that no classes overlap; a soft constraint is that no class be before 10 AM. Constraints can be explicit (stated in the problem) or implicit (implied by the nature of the problem). Constraints
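The hard/soft distinction can be sketched in code: violating a hard constraint makes the schedule infeasible, while violating a soft constraint only adds a penalty to the evaluation. A sketch of the course-schedule example above, assuming classes are hypothetical (start_hour, end_hour) pairs on one day:

```python
def evaluate_schedule(classes):
    """classes: list of (start_hour, end_hour) pairs.
    Hard constraint: no two classes overlap (violation -> infeasible, return None).
    Soft constraint: no class before 10 AM (each violation adds 1 to the penalty)."""
    ordered = sorted(classes)
    for (s1, e1), (s2, e2) in zip(ordered, ordered[1:]):
        if s2 < e1:          # overlap: hard constraint violated
            return None
    return sum(1 for s, _ in classes if s < 10)  # soft-constraint penalty

print(evaluate_schedule([(9, 10), (10, 11)]))   # -> 1 (feasible, one early class)
print(evaluate_schedule([(9, 11), (10, 12)]))   # -> None (overlap: infeasible)
```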

7 TSP (Traveling Salesman Problem): given the coordinates of n cities, find the shortest closed tour that visits each city exactly once. Example: in the TSP, an implicit constraint is that every city be visited exactly once.
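For small n, the TSP can be solved exactly by enumerating all (n−1)! closed tours from a fixed start city. A brute-force sketch, practical only for tiny instances; the four unit-square cities are a made-up example:

```python
from itertools import permutations
from math import dist

def tsp_brute_force(cities):
    """Exact TSP: enumerate all (n-1)! tours that start and end at cities[0]."""
    start, rest = cities[0], cities[1:]
    best_tour, best_len = None, float("inf")
    for perm in permutations(rest):
        tour = (start,) + perm + (start,)  # closed tour: return to start
        length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

cities = [(0, 0), (0, 1), (1, 1), (1, 0)]  # corners of the unit square
tour, length = tsp_brute_force(cities)
print(length)  # -> 4.0 (the square's perimeter)
```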

8 Continuous or combinatorial
Size – number of decision variables, range/count of possible values of the decision variables, search space size
Degree of constraints – number of constraints, difficulty of satisfying constraints, proportion of feasible solutions in the search space
Single or multiple objectives
Aspects of an Optimization Problem

9 Deterministic (all variables are deterministic) or stochastic (the objective function and/or some decision variables and/or some constraints are random variables)
Decomposition – decompose a problem into a series of smaller subproblems, then solve them independently
Relaxation – solve a relaxed (less constrained) version of the problem, then adjust the solution to satisfy the original constraints
Aspects of an Optimization Problem

10 Simple:
- Few decision variables
- Differentiable
- Single modal
- Objective easy to calculate
- No or light constraints
- Feasibility easy to determine
- Single objective
- Deterministic
Hard:
- Many decision variables
- Discontinuous, combinatorial
- Multi modal
- Objective difficult to calculate
- Severely constrained
- Feasibility difficult to determine
- Multiple objectives
- Stochastic

11 For simple problems, enumeration or exact methods such as differentiation, mathematical programming, or branch and bound will work best. For hard problems, differentiation is not possible, and enumeration and other exact methods such as math programming are not computationally practical. For these, heuristics are used.

12 Search is the term used for constructing/improving solutions to obtain the optimum or near-optimum.
Solution – the encoding (representation) of a solution
Neighborhood – nearby solutions (in the encoding or solution space)
Move – transforming the current solution into another (usually neighboring) solution
Evaluation – the solution's feasibility and objective function value
Search Basics
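The four terms above can be made concrete for the TSP. A sketch, assuming a small hypothetical distance matrix: the encoding is a permutation of city indices, the neighborhood is all tours reachable by one swap, a move performs one swap, and evaluation is the tour length (every permutation is feasible here).

```python
import random

def tour_length(tour, d):
    """Evaluation: total length of the closed tour under distance matrix d."""
    return sum(d[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def swap_move(tour, rng):
    """Move: jump to a random neighbor by swapping two cities in the encoding."""
    i, j = rng.sample(range(len(tour)), 2)
    neighbor = tour[:]
    neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
    return neighbor

d = [[0, 1, 2],
     [1, 0, 1],
     [2, 1, 0]]                  # made-up symmetric distance matrix
tour = [0, 1, 2]                 # encoding: a permutation of city indices
print(tour_length(tour, d))      # -> 4
print(swap_move(tour, random.Random(0)))
```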

13 Constructive search techniques work by constructing a solution step by step, evaluating that solution for (a) feasibility and (b) objective function. Improvement search techniques work by starting from a solution, moving to a neighboring solution, and evaluating that solution for (a) feasibility and (b) objective function.

14 Search techniques may be deterministic (always arrive at the same final solution through the same sequence of solutions, although they may depend on the initial solution). Examples are LP (simplex method), tabu search, simple heuristics like FIFO, LIFO, and greedy heuristics. Search techniques may be stochastic where the solutions considered and their order are different depending on random variables. Examples are simulated annealing, ant colony optimization and genetic algorithms.

15 Search techniques may be local, that is, they find the nearest optimum, which may not be the true optimum. Example: greedy heuristics (local optimizers). Search techniques may be global, that is, they seek the true optimum even if this involves moving to worse solutions during the search (non-greedy).

16 Heuristics are search rules used to find optimal or near-optimal solutions. Examples are FIFO, LIFO, earliest due date first, largest processing time first, shortest distance first, etc. Heuristics can be constructive (build a solution piece by piece) or improvement (take a solution and alter it to find a better solution). Heuristics

17 Many constructive heuristics are greedy or myopic, that is, they take the best thing next without regard for the rest of the solution. Example: A constructive heuristic for TSP is to take the nearest city next. An improvement heuristic for TSP is to take a tour and swap the order of two cities.
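The two TSP heuristics just described can be sketched as follows: a greedy nearest-city constructive heuristic followed by a pairwise-swap improvement heuristic. The unit-square instance is a made-up example:

```python
from math import dist

def tour_len(tour, cities):
    """Length of the closed tour through the given city coordinates."""
    return sum(dist(cities[a], cities[b]) for a, b in zip(tour, tour[1:] + tour[:1]))

def nearest_neighbor_tour(cities):
    """Greedy constructive heuristic: always visit the nearest unvisited city next."""
    unvisited = list(range(1, len(cities)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def improve_by_swaps(tour, cities):
    """Improvement heuristic: repeatedly apply any city swap that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(len(tour)):
            for j in range(i + 1, len(tour)):
                cand = tour[:]
                cand[i], cand[j] = cand[j], cand[i]
                if tour_len(cand, cities) < tour_len(tour, cities) - 1e-12:
                    tour, improved = cand, True
    return tour

cities = [(0, 0), (0, 1), (1, 1), (1, 0)]        # unit square
tour = improve_by_swaps(nearest_neighbor_tour(cities), cities)
print(tour_len(tour, cities))                    # -> 4.0
```

Note that the swap improvement only reaches a local optimum: it stops as soon as no single swap helps, which on harder instances need not be the global optimum.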

18 An iterative generation process which guides a subordinate heuristic by combining intelligently different concepts derived from classical heuristics, artificial intelligence, biological evolution, natural and physical sciences for exploring and exploiting the search spaces using learning strategies to structure information in order to find efficiently near-optimal solutions. - Osman and Kelly, 1996 Meta-Heuristics

19 This course will focus on meta-heuristics inspired by nature. Meta-heuristics are not tied to any special problem type and are general methods that can be altered to fit the specific problem. The inspiration from nature is: Simulated Annealing (SA) – molecule/crystal arrangement during cool down Evolutionary Computation (EC) – biological evolution Tabu Search (TS) – long and short term memory Ant Colony and Swarms - individual and group behavior using communication between agents
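As a preview of the first of these methods, the core simulated-annealing idea (always accept improvements; accept worse moves with probability exp(−Δ/T), cooling T over time) can be sketched on a toy problem. All parameter values and the toy objective here are arbitrary illustration choices, not prescriptions:

```python
import math
import random

def simulated_annealing(f, x0, neighbor, temp0=10.0, cooling=0.95, steps=500, seed=0):
    """Minimal SA sketch: accept a worse move with probability exp(-delta/T)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = temp0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        # always accept improvements; accept worse moves with prob exp(-delta/T)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling  # geometric cooling schedule
    return best, fbest

# Toy problem (made up): minimize (x - 3)^2 over the integers, starting far away.
best, val = simulated_annealing(
    f=lambda x: (x - 3) ** 2,
    x0=20,
    neighbor=lambda x, rng: x + rng.choice([-1, 1]),
)
print(best, val)
```

Early on, the high temperature lets the search wander (global behavior); as T shrinks, it behaves like a greedy local search.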

20 Very flexible Often global optimizers Often robust to problem size, problem instance and random variables May be only practical alternative Advantages of Meta-Heuristics

21 Often need problem specific information / techniques Optimality (convergence) may not be guaranteed Lack of theoretic basis Different searches may yield different solutions to the same problem (stochastic) Stopping criteria Multiple search parameters Disadvantages of Meta-Heuristics

22 Steps:
- Design the algorithm
- Encode the algorithm (steps of the algorithm, data structures)
- Apply the algorithm
Algorithm Design & Analysis

23 Run-time and memory requirements: the number of elementary computer operations needed to solve the problem in the worst case. Study of algorithm efficiency increased in the 1970s. Algorithm Efficiency

24 Seeks to classify problems in terms of the mathematical order of the computational resources required to solve the problems via computer algorithms Complexity Analysis

25 A problem is a collection of instances that share a mathematical form but differ in size and in the values of numerical constants in the problem form (i.e. a generic model). Example: shortest path. An instance is a special case of a problem with specified data and parameters. An algorithm solves a problem if it is guaranteed to find the optimal solution for any instance.

26 Empirical Analysis - see how algorithms perform in practice - write program, test on classes of problem instances Average Case Analysis - determine expected number of steps - choose probability distribution for problem instances, use statistical analysis to derive asymptotic expected run times Complexity Measures

27 Worst Case Analysis - provides upper bound (UB) on the number of steps an algorithm can take on any instance - count largest possible number of steps - provides a “guarantee” on number of steps the algorithm will need Complexity Measures

28 CONs of Empirical Analysis - algorithm performance depends on computer language, compiler, hardware, programmer’s skills - costly and time consuming to do - algorithm comparison can be inconclusive Complexity Measures

29 CONs of Average Case Analysis - depends heavily on the choice of probability distribution - hard to pick the probability distribution of realistic problems - analysis often very complex - assumes the analyst is solving multiple problem instances Complexity Measures

30 Worst Case Analysis PROs - independent of computing environment - relatively easy - guarantee on steps (time) - definitive CONs - simplex method exception - algorithm comparisons can be inconclusive Complexity Measures

31 A theoretical measure of the execution of an algorithm given the problem size n. An algorithm is said to run in O(g(n)) time if its running time f(n) = O(g(n)) (is of order g(n)), i.e. there are positive constants c and k such that f(n) ≤ c·g(n) for all n ≥ k; that is, the time taken by the algorithm is at most c·g(n) once n is large enough. Big "O" Notation

32 Usually interested in performance on large instances. Only consider the dominant term: for example, f(n) = 3n² + 10n + 6 is O(n²). Big "O" Notation

33 What is a "good" algorithm? It is commonly accepted that worst-case performance bounded by a polynomial function of the problem parameters is "good". We call this a Polynomial-Time Algorithm. Polynomial-time algorithms are strongly preferred because they can handle arbitrarily large data in acceptable time. Polynomial vs Exponential-Time Algorithms

34 In Exponential-Time Algorithms, the worst-case run time grows as a function that cannot be polynomially bounded by the input parameters, such as 2ⁿ or n!. Why is a polynomial-time algorithm better than an exponential-time one? Exponential-time algorithms have an explosive growth rate. Polynomial vs Exponential-Time Algorithms
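The explosive growth is easy to see numerically. A quick sketch comparing a polynomial rate (n³) with exponential ones (2ⁿ and n!):

```python
from math import factorial

# Compare a polynomial growth rate (n^3) with exponential ones (2^n, n!)
for n in (5, 10, 25, 50):
    print(f"n={n:>3}  n^3={n**3:<8}  2^n={2**n:<16}  n!={factorial(n):.3e}")
```

Already at n = 25 the factorial dwarfs the cube; at realistic problem sizes, exponential-time algorithms are hopeless.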

35 Growth rates for n = 5, 10, 100, 1000 (illustrative functions):

    n        n^2      n^3      2^n            n!
    5        25       125      32             120
    10       100      1,000    1,024          3.6 x 10^6
    100      10^4     10^6     1.3 x 10^30    9.3 x 10^157
    1000     10^6     10^9     1.1 x 10^301   4.0 x 10^2567

36 Optimization Problem: a computational problem in which the objective is to find the best of all possible solutions (i.e. find a solution in the feasible region which has the minimum or maximum value of the objective function). Decision Problem: a problem with a "yes" or "no" answer. Optimization vs Decision Problems

37 Convert Optimization Problems into equivalent Decision Problems. Instead of asking "What is the optimal value?", ask: "Is there a feasible solution to the problem with an objective function value equal to or superior to a specified threshold?"
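This conversion loses nothing: the optimal value can be recovered by binary search on the threshold, asking only yes/no decision questions. A sketch, assuming integer objective values and a hypothetical explicit list of feasible objective values:

```python
def decision_version(values, threshold):
    """Decision problem: is there a feasible solution with value <= threshold?"""
    return any(v <= threshold for v in values)

def minimum_via_decisions(values, lo, hi):
    """Recover the minimum objective value by binary search on the threshold."""
    while lo < hi:
        mid = (lo + hi) // 2
        if decision_version(values, mid):
            hi = mid          # a solution this good exists: tighten the upper bound
        else:
            lo = mid + 1      # no solution this good: raise the lower bound
    return lo

values = [30, 12, 7, 55]      # hypothetical feasible objective values
print(minimum_via_decisions(values, 0, 100))   # -> 7
```

Only logarithmically many decision questions are needed, so a polynomial-time decision procedure yields a polynomial-time optimizer here.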

38 The class of decision problems for which we can find a solution in polynomial time. i.e. P includes all decision problems for which there is an algorithm that halts with the correct yes/no answer in a number of steps bounded by a polynomial in the problem size n. The Class P in general is thought of as being composed of relatively “easy” problems for which efficient algorithms exist. Class P

39 Shortest path Minimum spanning tree Network flow Transportation, assignment and transshipment Some single machine scheduling problems Examples of Class P Problems

40 NP = Nondeterministic Polynomial NP is the class of decision problems for which we can check solutions in polynomial time. i.e. easy to verify but not necessarily easy to solve Class NP

41 Formally, it is the set of decision problems such that if x is a “yes” instance then this could be verified in polynomial time if a clue or certificate whose size is polynomial in the size of x is appended to the problem input. NP includes all those decision problems that could be polynomial-time solved if the right (polynomial-length) clue is appended to the problem input. Class NP
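The "clue" idea can be sketched for the decision version of the TSP: given a proposed tour as the certificate, verifying a "yes" answer takes polynomial time, even though finding such a tour may not. The distance matrix below is a made-up example:

```python
def verify_tsp_certificate(n, d, certificate, threshold):
    """Polynomial-time check of a 'yes' clue for decision TSP:
    is `certificate` a tour of all n cities with total length <= threshold?"""
    if sorted(certificate) != list(range(n)):
        return False          # not a permutation of all n cities
    length = sum(d[a][b] for a, b in zip(certificate, certificate[1:] + certificate[:1]))
    return length <= threshold

d = [[0, 1, 2],
     [1, 0, 1],
     [2, 1, 0]]               # hypothetical distance matrix
print(verify_tsp_certificate(3, d, [0, 1, 2], 4))   # -> True
print(verify_tsp_certificate(3, d, [0, 1, 2], 3))   # -> False
```

The verifier runs in polynomial time and the certificate has polynomial size, which is exactly what membership in NP requires.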

42 Class P contains all those that have been conquered with well-bounded, constructive algorithms. Class NP includes the decision problem versions of virtually all the widely studied combinatorial optimization problems. P is a subset of NP. Class P vs Class NP

43 Problem P reduces in polynomial time to another problem P´ if and only if:
- there is an algorithm for problem P which uses problem P´ as a subroutine,
- each call to the subroutine of problem P´ counts as a single step,
- this algorithm for problem P runs in polynomial time.
Polynomial Reduction

44 If problem P polynomially reduces to problem P´ and there is a polynomial-time algorithm for problem P´, then there is a polynomial-time algorithm for problem P. ⇒ Problem P´ is at least as hard as problem P! Polynomial Reduction

45 A problem P is said to be NP-Hard if all members of NP polynomially reduce to this problem. A problem P is said to be NP-Complete if (a) P ∈ NP, and (b) P is NP-Hard. (Diagram: P lies inside NP; NP-Complete is the intersection of NP and NP-Hard.) NP-Hard vs NP-Complete

46 If there is an efficient (i.e. polynomial) algorithm for some NP-Complete problem, then there is a polynomial algorithm for every problem in NP, i.e. P = NP. Examples of NP-Hard problems: TSP, graph coloring, set covering and partitioning, knapsack, precedence-constrained scheduling, etc.