Honors Track: Competitive Programming & Problem Solving
Optimization Problems
Kevin Verbeek


Optimization Problems
- Optimization problem: optimize the "cost function" over all "solutions"
- Valid solution: a solution that satisfies the constraints; there can be many (for a single instance)
  Example: shortest s-t path ➨ solutions: paths from s to t
- Cost function: a function that measures the "quality" of a solution
- Minimization problem: minimize the cost function
- Maximization problem: maximize the cost function

Traveling Salesman Problem
- Input: n cities with the distances between them
- Valid solution: a tour visiting all cities
- Cost function: the length of the tour
- Goal: minimize the cost
[Example on the slide: six cities A-F with a symmetric distance matrix]
This is a hard problem! (In fact, NP-hard)

0/1 Knapsack Problem
- Input: n items, each with a weight and a profit, and a capacity W
- Valid solution: a subset of the items with total weight ≤ W
- Cost function: the total profit of the items in the subset
- Goal: maximize the cost (profit)
[Example on the slide: an item table with W = 18; solutions {1, 2, 6} ➨ profit = 18 and {2, 5} ➨ profit = 21]
A very common problem in programming contests

Solution strategies
Building solutions typically involves making choices.
- Brute force: try all (valid) solutions; always works, but typically very slow
- Greedy algorithms: construct the solution iteratively, always making the choice that seems best; often doesn't work, but typically very fast
- Dynamic programming: try all choices in a clever way; requires optimal substructure and an encoding of the subproblems

Brute force strategies
- Brute force: generate all solutions, check validity (if necessary), compute the cost and keep the min/max
- Backtracking: building a solution involves choices; make the choices one at a time, using recursion, and discard partial solutions that cannot become valid
- Branch and bound: like backtracking, but also discard partial solutions that cannot become optimal
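To make the plain brute-force strategy concrete, here is a minimal C++ sketch for the 0/1 knapsack problem. The item values are hypothetical (the slide's table is not reproduced here); the code enumerates every subset with a bitmask, skips invalid ones, and keeps the best profit.

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Hypothetical example instance (not the one from the slides).
    int n = 4;
    long long W = 10;
    vector<long long> wt = {3, 5, 4, 2};   // weights
    vector<long long> pf = {4, 8, 5, 3};   // profits

    long long best = 0;
    // Enumerate all 2^n subsets; bit i of 'mask' says whether item i is taken.
    for (int mask = 0; mask < (1 << n); mask++) {
        long long w = 0, p = 0;
        for (int i = 0; i < n; i++)
            if (mask & (1 << i)) { w += wt[i]; p += pf[i]; }
        if (w <= W) best = max(best, p);   // keep only valid solutions
    }
    cout << best << "\n";   // prints 15 for this instance
}

This runs in O(2^n · n) time; backtracking (next slide) builds the same solutions incrementally and can discard partial solutions early.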

TSP Backtracking
Choices: assume we start at A. Which city do we visit 2nd? 3rd? ...

Algorithm TSP(A, i)
1. if i > n then return d(A[n], A[1])
2. else c = ∞
3.   for j = i to n
4.     do "swap A[i] with A[j]"
5.        c = min(c, TSP(A, i+1) + d(A[i-1], A[i]))
6.        "swap A[i] with A[j]"
7. return c

Initial call: TSP(A, 2)
[Example on the slide: the distance matrix for cities A-F]
Note: this only returns the length of the optimal tour!
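A runnable C++ version of this backtracking search. The distance matrix is hypothetical (the slide's matrix is not reproduced); the recursion mirrors the pseudocode, with the base case after the last city so that both the final edge and the closing edge are counted.

#include <bits/stdc++.h>
using namespace std;

int n;
vector<vector<long long>> d;   // symmetric distance matrix
vector<int> perm;              // perm[0] is the fixed start city

// Best completion of the tour when positions 0..i-1 of 'perm' are fixed.
long long tsp(int i) {
    if (i == n) return d[perm[n - 1]][perm[0]];   // close the tour back to the start
    long long best = LLONG_MAX;
    for (int j = i; j < n; j++) {
        swap(perm[i], perm[j]);                   // try city perm[j] at position i
        best = min(best, d[perm[i - 1]][perm[i]] + tsp(i + 1));
        swap(perm[i], perm[j]);                   // undo the choice
    }
    return best;
}

int main() {
    // Hypothetical 4-city instance (not the one on the slide).
    n = 4;
    d = {{0, 2, 9, 10},
         {2, 0, 6, 4},
         {9, 6, 0, 8},
         {10, 4, 8, 0}};
    perm.resize(n);
    iota(perm.begin(), perm.end(), 0);   // the tour starts at city 0
    cout << tsp(1) << "\n";              // prints 23, the optimal tour length
}

Like the pseudocode, this only reports the length of the optimal tour; recovering the actual tour needs extra bookkeeping.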

Knapsack Backtracking
Choices: for each item, is it in or out? (Alternative: which item do we add next?)

Algorithm KS(A, i, w)
1. if i = n+1 then return 0
2. else p = KS(A, i+1, w)
3.   if w + A[i].weight ≤ W
4.     then r = KS(A, i+1, w + A[i].weight)
5.          p = max(p, r + A[i].profit)
6. return p

Initial call: KS(A, 1, 0)
[Example on the slide: the item table with W = 18]
Note: this only returns the optimal profit!
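The same recursion as runnable C++ (hypothetical items again; 0-indexed instead of the slide's 1-indexing):

#include <bits/stdc++.h>
using namespace std;

int n;
long long W;
vector<long long> wt, pf;   // weights and profits

// Best profit achievable with items i..n-1 when w weight is already used.
long long ks(int i, long long w) {
    if (i == n) return 0;
    long long p = ks(i + 1, w);        // item i is out
    if (w + wt[i] <= W)                // item i still fits: try taking it
        p = max(p, ks(i + 1, w + wt[i]) + pf[i]);
    return p;
}

int main() {
    // Hypothetical instance (not the one on the slide).
    n = 4; W = 10;
    wt = {3, 5, 4, 2};
    pf = {4, 8, 5, 3};
    cout << ks(0, 0) << "\n";   // prints 15
}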

Optimal Substructure
Faster than brute force? Then we generally need optimal substructure.
- Optimal substructure: an optimal solution can be obtained by extending an optimal solution of a subproblem
- Subproblems: generally, the problems remaining after making a choice; one subproblem for each option of the choice
- Try extending the solution of each subproblem with the choice; optimal substructure ➨ the optimal solution is one of these extensions
- Sometimes the problem must be generalized to describe the subproblems

Greedy Algorithms
Greedy algorithms: for each choice, only consider the "best" option, so there is only one subproblem to consider. No exponential growth ➨ fast!
To keep in mind:
- Requires optimal substructure
- Must prove the correctness of the greedy choice
- Usually doesn't work!
Hint: try to discover the structure of optimal solutions. What properties do optimal solutions have?

Greedy Pitfall
Greedy choice? Take the item with the most value per weight first? Doesn't work!
- Greedy choice: item 1 ➨ profit = 21
- Better: items 2 & 3 ➨ profit = 23
[Counterexample on the slide: item tables with W = 18 and W = 9]

Proving greedy
Greedy algorithms can seem correct but usually are not: you must prove correctness!
Correctness of a greedy algorithm:
- Prove optimal substructure (often true)
- Prove the greedy choice (often not true): show that there exists an optimal solution that agrees with the greedy choice. Assume an optimal solution without the greedy choice... and show how to adapt it so that it includes the greedy choice (and is still optimal).
You don't need to write the proof down! Just be properly convinced.

Fractional knapsack
Now we can take part of an item. Solution: how much of each item do we take?
Greedy strategy: take as much of the item with the highest profit-to-weight ratio as still fits; repeat with the remaining items.
Correctness: assume item i has the maximum profit/weight ratio x, and that OPT uses y less weight of item i than the greedy choice does. Take OPT, arbitrarily remove y weight, and replace it with y weight of item i. The removed part has profit ≤ y·x and the added part has profit exactly y·x ➨ greedy is optimal.
[Example on the slide: the item table with W = 18]
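A sketch of this greedy in C++, with hypothetical item values; it sorts by profit-to-weight ratio and then fills the knapsack greedily.

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Hypothetical instance (not the one on the slide): {weight, profit} pairs.
    double W = 10;
    vector<pair<double, double>> items = {{3, 4}, {5, 8}, {4, 5}, {2, 3}};

    // Best profit-to-weight ratio first.
    sort(items.begin(), items.end(), [](const auto& a, const auto& b) {
        return a.second / a.first > b.second / b.first;
    });

    double capacity = W, total = 0;
    for (auto& [w, p] : items) {
        double take = min(w, capacity);   // take as much of this item as still fits
        total += p * (take / w);
        capacity -= take;
        if (capacity <= 0) break;
    }
    cout << total << "\n";   // prints 15 for this instance
}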

Dynamic programming
Terrible name... It also requires optimal substructure. Avoid recomputing subproblems by storing their solutions; the running time then depends on the number of subproblems.
To keep in mind:
- Choices must be made so that subproblems are easy to encode
- It tries all choices, so it is often correct
- You may have to generalize the problem
- Sometimes it requires clever tricks
Very common in programming contests!

Dynamic programming steps
General approach:
1. Determine which choices need to be made (many options!)
2. Find an efficient way to encode all subproblems
3. Let F describe the optimal cost for each subproblem
4. Come up with a recursive relation for F (based on the choices)
5. Determine the order of the subproblems (subproblems < problem)
6. Compute the solutions for problems without subproblems
7. Use the recursive relation to solve all problems in order
Implementation:
- Store the optimal costs of the subproblems in a table (multi-dimensional array)
- Running time: (size of table) x (time to compute one cell)
- To obtain the actual solution: use an additional table that records the choices

Knapsack DP
Choices? Each item is in or out (start with the last item).
Subproblems? F(i, w) = the optimal profit using the first i items with capacity w, for 0 ≤ i ≤ n and 0 ≤ w ≤ W.
Recurrence?
  F(i, w) = F(i-1, w)                                   if w < w_i
  F(i, w) = max(F(i-1, w), F(i-1, w - w_i) + p_i)       if w_i ≤ w
Order? Increasing in i.
Smallest subproblems: F(0, w) = 0 for all 0 ≤ w ≤ W.
The optimal cost is in F(n, W).
[Example on the slide: the item table with W = 18]
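A bottom-up implementation of this table in C++ (hypothetical items, as in the earlier sketches):

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Hypothetical instance (not the one on the slide).
    int n = 4, W = 10;
    vector<int> wt = {3, 5, 4, 2};   // w_1..w_n, stored 0-indexed
    vector<int> pf = {4, 8, 5, 3};   // p_1..p_n

    // F[i][w] = optimal profit using the first i items with capacity w.
    vector<vector<long long>> F(n + 1, vector<long long>(W + 1, 0));
    for (int i = 1; i <= n; i++)
        for (int w = 0; w <= W; w++) {
            F[i][w] = F[i - 1][w];                      // item i is out
            if (wt[i - 1] <= w)                         // item i fits: maybe take it
                F[i][w] = max(F[i][w], F[i - 1][w - wt[i - 1]] + pf[i - 1]);
        }
    cout << F[n][W] << "\n";   // prints 15
}

The running time is O(nW): the table has (n+1)(W+1) cells and each cell takes constant time, exactly the (size of table) x (time per cell) rule from the previous slide.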

Memoization
If you don't know the order, or the order is hard to compute, use memoization. Note: there still IS an order (otherwise it won't work).
How does it work?
- Initially the table is filled with -1 (or any value that cannot occur)
- F is computed by a recursive algorithm
- Every call first checks whether the value in the table is -1
- If not, it returns the value in the table
- Otherwise, it computes the value using the recurrence and recursive calls
Note: recursion depth can be a problem! Try to avoid deep recursion...
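The same knapsack recurrence with memoization instead of an explicit fill order, in C++ (hypothetical items):

#include <bits/stdc++.h>
using namespace std;

int n, W;
vector<int> wt, pf;
vector<vector<long long>> memo;   // -1 means "not computed yet"

long long F(int i, int w) {
    if (i == 0) return 0;                     // no items left
    if (memo[i][w] != -1) return memo[i][w];  // already computed: return stored value
    long long best = F(i - 1, w);             // item i is out
    if (wt[i - 1] <= w)                       // item i fits: maybe take it
        best = max(best, F(i - 1, w - wt[i - 1]) + pf[i - 1]);
    return memo[i][w] = best;
}

int main() {
    // Hypothetical instance (not the one on the slide).
    n = 4; W = 10;
    wt = {3, 5, 4, 2};
    pf = {4, 8, 5, 3};
    memo.assign(n + 1, vector<long long>(W + 1, -1));
    cout << F(n, W) << "\n";   // prints 15
}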

TSP DP?
Choices: assume we start at A. What is the 2nd city we visit? The 3rd?
Subproblems? After some choices we are left with a subset of the cities.
F(i, S) = the shortest tour from city i to city 1 that visits all cities in S
Represent the subset S with a bitstring (an advanced technique...)
[Example on the slide: the distance matrix for cities A-F]
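A sketch of this bitmask DP (the Held-Karp algorithm) in C++. It uses an equivalent forward formulation, F[S][i] = shortest path that starts at the start city, visits exactly the cities in the bitmask S, and ends at city i, rather than the exact F(i, S) from the slide; the distance matrix is the same hypothetical one as in the backtracking sketch.

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Hypothetical symmetric distance matrix (not the one on the slide).
    int n = 4;
    vector<vector<long long>> d = {{0, 2, 9, 10},
                                   {2, 0, 6, 4},
                                   {9, 6, 0, 8},
                                   {10, 4, 8, 0}};

    const long long INF = LLONG_MAX / 2;
    // F[S][i] = shortest path starting at city 0, visiting exactly the cities
    // in bitmask S, and ending at city i (city 0 is always in S).
    vector<vector<long long>> F(1 << n, vector<long long>(n, INF));
    F[1][0] = 0;
    for (int S = 1; S < (1 << n); S++)
        for (int i = 0; i < n; i++) {
            if (!(S & (1 << i)) || F[S][i] == INF) continue;
            for (int j = 0; j < n; j++) {   // extend the path with an unvisited city j
                if (S & (1 << j)) continue;
                F[S | (1 << j)][j] = min(F[S | (1 << j)][j], F[S][i] + d[i][j]);
            }
        }

    long long best = INF;
    for (int i = 1; i < n; i++)   // close the tour back to the start city
        best = min(best, F[(1 << n) - 1][i] + d[i][0]);
    cout << best << "\n";   // prints 23, matching the backtracking version
}

This runs in O(2^n · n^2) time instead of the factorial time of plain backtracking.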

Decision Problems
Decision problem: determine whether a valid solution exists; there is no cost function.
Optimization problem ➨ decision problem:
- Consider an optimization problem with cost function f
- Decision problem: is there a (valid) solution x with f(x) ≤ k?
- Perform a binary search on k to find the optimum
Hint: decision problems are often easier. Always check whether a binary search on the cost makes the problem easier! (A generic skeleton follows below.)
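A generic binary-search-on-the-answer skeleton for a minimization problem with integer costs, in C++. The function feasible(k) is a hypothetical placeholder for the problem-specific decision procedure "is there a valid solution with cost ≤ k?"; it must be monotone in k for the binary search to be valid.

#include <bits/stdc++.h>
using namespace std;

// Hypothetical decision procedure: replace with the check for your problem.
// Must be monotone: if feasible(k) holds, then feasible(k') holds for all k' >= k.
bool feasible(long long k) {
    return k >= 42;   // placeholder so the skeleton runs
}

int main() {
    long long lo = 0, hi = 1000000;   // problem-specific bounds on the optimal cost
    // Invariant: the optimum lies in [lo, hi] (feasible(hi) must hold initially).
    while (lo < hi) {
        long long mid = lo + (hi - lo) / 2;
        if (feasible(mid)) hi = mid;   // a solution with cost <= mid exists: shrink from above
        else lo = mid + 1;             // no such solution: the optimum is larger
    }
    cout << lo << "\n";   // smallest k with feasible(k) = true; prints 42 here
}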

Practice
Brute force / Backtracking:
- NWERC 2005 – Bowl Stack
- BAPC 2011 D – Bad Wiring
- EAPC 2012 B – Bad Scientist
Greedy:
- EAPC 2014 H – Talent selection
Dynamic Programming:
- EAPC 2005 H – Venus Rover
- EAPC 2014 D – Lift problems
- EAPC 2012 E – Extreme shopping
- EAPC 2011 J – Shuriken Game
- EAPC 2011 D – Polly wants a cracker