Introduction to the Design and Analysis of Algorithms


Introduction to the Design and Analysis of Algorithms
ElSayed Badr, Benha University
September 20, 2016

What is the difference between the brute-force and divide-and-conquer techniques?

Brute Force Approaches
A straightforward approach, usually based directly on the problem's statement and the definitions of the concepts involved (generate and test).
Examples:
Multiplying two matrices
Selection and bubble sorts
Searching for a key of a given value in a list (linear search)
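As a small illustration of the brute-force style, here is a minimal selection sort sketch in Python (the function name and the in-place convention are assumptions, not taken from the slides):

def selection_sort(a):
    # Brute-force sort: repeatedly select the smallest remaining element.
    n = len(a)
    for i in range(n - 1):
        min_index = i
        for j in range(i + 1, n):            # scan the unsorted tail a[i+1..n-1]
            if a[j] < a[min_index]:
                min_index = j
        a[i], a[min_index] = a[min_index], a[i]   # swap it into position i
    return a

print(selection_sort([89, 45, 68, 90, 29, 34, 17]))   # [17, 29, 34, 45, 68, 89, 90]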

Divide-and-Conquer
Divide-and-conquer is a general algorithm design paradigm:
Divide: divide the input data S into two or more disjoint subsets S1, S2, …
Recur: solve the subproblems recursively
Conquer: combine the solutions for S1, S2, … into a solution for S
The base cases for the recursion are subproblems of constant size.
The analysis can be done using recurrence equations.

Divide-and-Conquer
Examples:
Multiplying two matrices using Strassen's algorithm for two 2x2 matrices (1969)
Merge sort and quicksort (a merge sort sketch is given below)
Searching for a key of a given value in a sorted list (binary search)
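Merge sort follows the divide/recur/conquer pattern directly. A minimal Python sketch (the function and helper names are assumptions):

def merge_sort(a):
    # Divide-and-conquer sort: split, sort the halves recursively, then merge.
    if len(a) <= 1:                 # base case: subproblem of constant size
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])      # divide and recur
    right = merge_sort(a[mid:])
    return merge(left, right)       # conquer: combine the two sorted halves

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([7, 2, 9, 4, 3, 8, 6, 1]))   # [1, 2, 3, 4, 6, 7, 8, 9]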

Linear Search
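The slide shows only the heading; as a minimal sketch of the idea (the function name and the -1 "not found" convention are assumptions), linear search in Python:

def linear_search(a, key):
    # Scan the list from left to right until the key is found.
    for i, value in enumerate(a):
        if value == key:
            return i        # index of the first match
    return -1               # key is not present

print(linear_search([31, 41, 59, 26, 53], 26))   # 3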

Binary Search
A very efficient algorithm for searching in a sorted array: compare the key K with the middle element of A[0..n-1]:
K vs. A[0] . . . A[m] . . . A[n-1]
If K = A[m], stop (successful search); otherwise, continue searching by the same method in A[0..m-1] if K < A[m] and in A[m+1..n-1] if K > A[m].

l ← 0; r ← n-1
while l ≤ r do
    m ← ⌊(l+r)/2⌋
    if K = A[m] return m
    else if K < A[m] r ← m-1
    else l ← m+1
return -1
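A direct translation of the pseudocode above into runnable Python (the function name is an assumption):

def binary_search(a, key):
    # Iterative binary search in a sorted list; returns an index or -1.
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2          # middle index (floor division)
        if key == a[m]:
            return m
        elif key < a[m]:
            r = m - 1             # continue in the left half
        else:
            l = m + 1             # continue in the right half
    return -1

print(binary_search([3, 14, 27, 31, 39, 42, 55, 70, 74, 81, 85, 93, 98], 70))   # 7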

Analysis of Binary Search
Time efficiency: the worst-case recurrence is Cw(n) = 1 + Cw(⌊n/2⌋), Cw(1) = 1, with solution Cw(n) = ⌈log2(n+1)⌉.
This is VERY fast: e.g., Cw(10^6) = 20.
Optimal for searching a sorted array.
Limitations: requires a sorted array (not a linked list).
A bad (degenerate) example of divide-and-conquer, because only one of the sub-instances is solved.
Has a continuous counterpart, the bisection method, for solving equations in one unknown f(x) = 0 (see Sec. 12.4).
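For n = 2^k the recurrence unwinds by backward substitution as Cw(2^k) = 1 + Cw(2^(k-1)) = 2 + Cw(2^(k-2)) = … = k + Cw(1) = k + 1, i.e. Cw(n) = log2 n + 1; the closed form Cw(n) = ⌊log2 n⌋ + 1 = ⌈log2(n+1)⌉ for arbitrary n then follows by the smoothness rule.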

Comparison between Binary and Linear Search
Binary search runs in O(log n) time, linear search in O(n) time:

        n    log2 n
       10         3
      100         6
    1,000         9
   10,000        13
  100,000        16

Dynamic Programming
Dynamic programming is a general algorithm design technique for solving problems defined by or formulated as recurrences with overlapping sub-instances.
Invented by the American mathematician Richard Bellman in the 1950s to solve optimization problems and later assimilated by CS.
"Programming" here means "planning".
Main idea:
set up a recurrence relating a solution to a larger instance to solutions of some smaller instances
solve the smaller instances once, recording the solutions in a table
extract the solution to the initial instance from that table

Example: Fibonacci numbers
Recall the definition of the Fibonacci numbers:
F(n) = F(n-1) + F(n-2)
F(0) = 0
F(1) = 1
Computing the nth Fibonacci number recursively (top-down) expands a tree of calls in which the same values reappear:
F(n) = F(n-1) + F(n-2), where F(n-1) = F(n-2) + F(n-3), F(n-2) = F(n-3) + F(n-4), ...
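To make the overlapping sub-instances concrete, a small Python sketch contrasting the naive top-down recursion with a memoized version (the function names are assumptions):

from functools import lru_cache

def fib_naive(n):
    # Top-down recursion: recomputes the same F(i) exponentially many times.
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Same recursion, but each F(i) is computed once and its value is recorded.
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(10), fib_memo(10))   # 55 55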

Example: Fibonacci numbers (cont.)
Computing the nth Fibonacci number using bottom-up iteration and recording the results:
F(0) = 0
F(1) = 1
F(2) = 1 + 0 = 1
…
F(n-2) = …
F(n-1) = …
F(n) = F(n-1) + F(n-2)
Efficiency: Θ(n) time and Θ(n) space for the full table (only Θ(1) extra space if just the last two values are kept).
What if we solve it recursively? Without recording the results, the top-down recursion takes exponential time, because the same sub-instances are recomputed over and over.
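A bottom-up version in Python that keeps only the last two values, so the extra space is constant (the function name is an assumption):

def fib_bottom_up(n):
    # Iterative DP: compute F(0), F(1), ..., F(n) in order; Θ(n) time.
    if n <= 1:
        return n
    prev, curr = 0, 1                       # F(0), F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr      # slide the window forward
    return curr

print(fib_bottom_up(10))   # 55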

Computing a binomial coefficient by DP
Binomial coefficients are the coefficients of the binomial formula:
(a + b)^n = C(n,0) a^n b^0 + . . . + C(n,k) a^(n-k) b^k + . . . + C(n,n) a^0 b^n
Recurrence:
C(n,k) = C(n-1,k) + C(n-1,k-1) for n > k > 0
C(n,0) = 1, C(n,n) = 1 for n ≥ 0
The value of C(n,k) can be computed by filling an (n+1)-by-(k+1) table row by row: the first column and the main diagonal are filled with 1s, and each remaining entry C(n,k) is obtained from the two entries C(n-1,k-1) and C(n-1,k) in the row above.

Computing C(n,k): pseudocode and analysis
Time efficiency: Θ(nk)
Space efficiency: Θ(nk)
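The pseudocode itself is not reproduced in the transcript; a minimal Python version of the table-filling algorithm described above (the function name is an assumption) could be:

def binomial(n, k):
    # Fill a table of C(i, j) row by row using C(i,j) = C(i-1,j-1) + C(i-1,j).
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                           # base cases C(i,0) = C(i,i) = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]                                    # Θ(nk) time and Θ(nk) space

print(binomial(10, 3))   # 120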