Ch03-Algorithms

Presentation transcript:

Ch03-Algorithms 1

Algorithms What is an algorithm? An algorithm is a finite set of precise instructions for performing a computation or for solving a problem. This is a rather vague definition. You will get to know a more precise and mathematically useful definition when you attend higher level courses. But this one is good enough for now… 2

Algorithms

Properties of algorithms:
- Input: an algorithm has input values from a specified set.
- Output: from each set of input values, an algorithm produces output values from a specified set; the output values are the solution to the problem.
- Definiteness: the steps of an algorithm must be defined precisely.
- Correctness: an algorithm should produce the correct output values for each set of input values.

Fall 2016, CPCS 222 - Discrete Structures

Algorithms

Properties of algorithms (continued):
- Finiteness: an algorithm should produce the desired output after a finite (but perhaps large) number of steps for any input in the set.
- Effectiveness: it must be possible to perform each step of an algorithm exactly and in a finite amount of time.
- Generality: the procedure should be applicable to all problems of the desired form, not just to a particular set of input values.

Algorithm Examples

We will use pseudocode to specify algorithms; it is slightly reminiscent of Basic and Pascal.

Example: an algorithm that finds the maximum element in a finite sequence:

procedure max(a_1, a_2, …, a_n: integers)
  max := a_1
  for i := 2 to n
    if max < a_i then max := a_i
{max is the largest element}
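As a cross-check, the max procedure translates directly into Python (the function and variable names here are my own, not from the slides):

```python
def find_max(a):
    """Largest element of a non-empty sequence.

    Direct translation of the slide's pseudocode: initialize the
    running maximum with the first element, then scan the rest.
    """
    maximum = a[0]                 # max := a_1
    for i in range(1, len(a)):     # for i := 2 to n
        if maximum < a[i]:         # if max < a_i then max := a_i
            maximum = a[i]
    return maximum
```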

Algorithm Examples

Another example: a linear search algorithm, that is, an algorithm that linearly searches a sequence for a particular element:

procedure linear_search(x: integer; a_1, a_2, …, a_n: integers)
  i := 1
  while (i ≤ n and x ≠ a_i)
    i := i + 1
  if i ≤ n then location := i
  else location := 0
{location is the subscript of the term that equals x, or is zero if x is not found}
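A sketch of the same procedure in Python (the 1-based indexing and the location = 0 "not found" convention follow the pseudocode; the function name is mine):

```python
def linear_search(x, a):
    """1-based position of x in sequence a, or 0 if x is absent,
    mirroring the slide's pseudocode step by step."""
    i = 1
    # while (i <= n and x != a_i): i := i + 1
    while i <= len(a) and x != a[i - 1]:
        i += 1
    # if i <= n then location := i else location := 0
    return i if i <= len(a) else 0
```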

Algorithm Examples If the terms in a sequence are ordered, a binary search algorithm is more efficient than linear search. The binary search algorithm iteratively restricts the relevant search interval until it closes in on the position of the element to be located. 7

Algorithm Examples

Binary search for the letter 'j' in the sorted sequence a c d f g h j l m o p r s u v x z:

  Step 1: search interval a…z (positions 1–17), center element m; 'j' ≤ 'm', so keep the left half.
  Step 2: search interval a…m (positions 1–9), center element g; 'j' > 'g', so keep the right half.
  Step 3: search interval h…m (positions 6–9), center element j; 'j' ≤ 'j', so keep the left half.
  Step 4: search interval h…j (positions 6–7), center element h; 'j' > 'h', so keep the right half.
  Step 5: search interval contains only j (position 7): found!

Algorithm Examples

procedure binary_search(x: integer; a_1, a_2, …, a_n: integers)
  i := 1 {i is left endpoint of search interval}
  j := n {j is right endpoint of search interval}
  while (i < j)
  begin
    m := ⌊(i + j)/2⌋
    if x > a_m then i := m + 1
    else j := m
  end
  if x = a_i then location := i
  else location := 0
{location is the subscript of the term that equals x, or is zero if x is not found}
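The same interval-halving procedure in Python (a sketch; it keeps the pseudocode's 1-based positions and the location = 0 convention):

```python
def binary_search(x, a):
    """1-based location of x in the sorted sequence a, or 0 if absent.

    Follows the slide's pseudocode: halve the interval [i, j] until it
    contains a single position, then test that position for equality.
    """
    i, j = 1, len(a)
    while i < j:
        m = (i + j) // 2           # m := floor((i + j)/2)
        if x > a[m - 1]:
            i = m + 1
        else:
            j = m
    return i if a and x == a[i - 1] else 0
```

On the slides' letter sequence, searching for 'j' returns position 7, matching the worked example.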

Algorithm Examples

Obviously, on sorted sequences, binary search is more efficient than linear search. How can we analyze the efficiency of algorithms? We can measure the time (number of elementary computations) and space (number of memory cells) that the algorithm requires. These measures are called time complexity and space complexity, respectively.

Complexity

What is the time complexity of the linear search algorithm? We will determine the worst-case number of comparisons as a function of the number n of terms in the sequence. The worst case for linear search occurs when the element to be located is not in the sequence. In that case, every item in the sequence is compared to the element to be located.

Algorithm Examples

Here is the linear search algorithm again:

procedure linear_search(x: integer; a_1, a_2, …, a_n: integers)
  i := 1
  while (i ≤ n and x ≠ a_i)
    i := i + 1
  if i ≤ n then location := i
  else location := 0
{location is the subscript of the term that equals x, or is zero if x is not found}

Complexity

For n elements, the loop

  while (i ≤ n and x ≠ a_i)
    i := i + 1

is processed n times, requiring 2n comparisons. When it is entered for the (n+1)th time, only the comparison i ≤ n is executed, and it terminates the loop. Finally, the comparison in "if i ≤ n then location := i" is executed, so all in all we have a worst-case time complexity of 2n + 2.
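To sanity-check the 2n + 2 count, the pseudocode's comparisons can be instrumented in Python (a sketch; the counting granularity follows the slide's accounting, and the function name is mine):

```python
def linear_search_comparison_count(x, a):
    """Number of comparisons the slide's linear search makes:
    each full loop test costs 2 (i <= n, then x != a_i), the failing
    final loop test costs 1, and the trailing 'if i <= n' costs 1."""
    n = len(a)
    i, count = 1, 0
    while True:
        count += 1                 # comparison: i <= n
        if i > n:
            break
        count += 1                 # comparison: x != a_i
        if x == a[i - 1]:
            break
        i += 1
    count += 1                     # comparison: if i <= n then ...
    return count
```

In the worst case (x absent) this yields exactly 2n + 2.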

Reminder: Binary Search Algorithm

procedure binary_search(x: integer; a_1, a_2, …, a_n: integers)
  i := 1 {i is left endpoint of search interval}
  j := n {j is right endpoint of search interval}
  while (i < j)
  begin
    m := ⌊(i + j)/2⌋
    if x > a_m then i := m + 1
    else j := m
  end
  if x = a_i then location := i
  else location := 0
{location is the subscript of the term that equals x, or is zero if x is not found}

Complexity

What is the time complexity of the binary search algorithm? Again, we will determine the worst-case number of comparisons as a function of the number n of terms in the sequence. Let us assume there are n = 2^k elements in the list, which means that k = log n. If n is not a power of 2, it can be considered part of a larger list, where 2^k < n < 2^(k+1).

Complexity

In the first cycle of the loop

  while (i < j)
  begin
    m := ⌊(i + j)/2⌋
    if x > a_m then i := m + 1
    else j := m
  end

the search interval is restricted to 2^(k-1) elements, using two comparisons.

Complexity

In the second cycle, the search interval is restricted to 2^(k-2) elements, again using two comparisons. This is repeated until there is only one (2^0) element left in the search interval. At this point, 2k comparisons have been conducted.

Complexity

Then, the comparison while (i < j) exits the loop, and a final comparison if x = a_i then location := i determines whether the element was found. Therefore, the overall time complexity of the binary search algorithm is 2k + 2 = 2⌈log n⌉ + 2.
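The 2k + 2 count can be checked the same way for n = 2^k (a sketch, counting comparisons exactly as the slide does; the function name is mine):

```python
def binary_search_comparison_count(x, a):
    """Comparisons made by the slide's binary search: two per loop
    iteration (i < j, then x > a_m), one for the failing i < j test,
    and one for the final x = a_i check."""
    i, j, count = 1, len(a), 0
    while True:
        count += 1                 # comparison: i < j
        if not i < j:
            break
        count += 1                 # comparison: x > a_m
        m = (i + j) // 2
        if x > a[m - 1]:
            i = m + 1
        else:
            j = m
    count += 1                     # comparison: x = a_i
    return count

# For n = 2^k the loop halves the interval k times, so the total is 2k + 2.
```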

Complexity

In general, we are not so much interested in the time and space complexity for small inputs. For example, while the difference in time complexity between linear and binary search is meaningless for a sequence with n = 10, it is gigantic for large n.

Complexity

For example, let us assume two algorithms A and B that solve the same class of problems. For an input with n elements, the time complexity of A is 5,000n, and that of B is ⌈1.1^n⌉. For n = 10, A requires 50,000 steps, but B only 3, so B seems to be superior to A. For n = 1,000, however, A requires 5,000,000 steps, while B requires about 2.5 × 10^41 steps.

Complexity

Comparison: time complexity of algorithms A and B

  Input size n    10        100       1,000          1,000,000
  A: 5,000n       50,000    500,000   5,000,000      5 × 10^9
  B: ⌈1.1^n⌉      3         13,781    2.5 × 10^41    4.8 × 10^41,392
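These entries can be reproduced numerically; a sketch below works in log10 for algorithm B, since 1.1^1,000,000 overflows a floating-point number (the function names are mine):

```python
import math

def steps_A(n):
    """Steps used by algorithm A: 5,000n."""
    return 5000 * n

def steps_B_log10(n):
    """log10 of the roughly 1.1^n steps used by algorithm B;
    working in log10 keeps the huge values representable."""
    return n * math.log10(1.1)
```

For example, steps_B_log10(1000) ≈ 41.39, i.e. about 2.5 × 10^41, and steps_B_log10(1_000_000) ≈ 41392.7, i.e. about 4.8 × 10^41,392.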

Complexity This means that algorithm B cannot be used for large inputs, while running algorithm A is still feasible. So what is important is the growth of the complexity functions. The growth of time and space complexity with increasing input size n is a suitable measure for the comparison of algorithms. 26

The Growth of Functions

The growth of functions is usually described using big-O notation.

Definition: Let f and g be functions from the integers or the real numbers to the real numbers. We say that f(x) is O(g(x)) if there are constants C and n_0 such that |f(x)| ≤ C|g(x)| whenever x > n_0. [This is read as "f(x) is big-oh of g(x)."]

O-notation

For a function g(n), we define O(g(n)), big-O of g(n), as the set:

  O(g(n)) = { f(n) : there exist positive constants c and n_0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n_0 }

g(n) is an asymptotic upper bound for f(n).

f(n) = Θ(g(n)) implies f(n) = O(g(n)); that is, Θ(g(n)) ⊆ O(g(n)).

Any linear function an + b is in O(n^2).

The Growth of Functions

When we analyze the growth of complexity functions, f(x) and g(x) are always positive. Therefore, we can simplify the big-O requirement to f(x) ≤ C·g(x) whenever x > n_0. If we want to show that f(x) is O(g(x)), we only need to find one pair (C, n_0), which is never unique.

The Growth of Functions

The idea behind big-O notation is to establish an upper bound on the growth of a function f(x) for large x. This bound is specified by a function g(x) that is usually much simpler than f(x). We accept the constant C in the requirement f(x) ≤ C·g(x) whenever x > n_0, because C does not grow with x. We are only interested in large x, so it is OK if f(x) > C·g(x) for x ≤ n_0.

The Growth of Functions

Example: Show that f(x) = x^2 + 2x + 1 is O(x^2).

For x > 1 we have:

  x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2 = 4x^2

Therefore, for C = 4 and n_0 = 1:

  f(x) ≤ Cx^2 whenever x > n_0,

so f(x) is O(x^2).
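The witnesses C = 4 and n_0 = 1 are easy to spot-check numerically (a quick sketch, sampling integer points only):

```python
# f(x) = x^2 + 2x + 1 versus the bound 4x^2 for x > 1.
def f(x):
    return x * x + 2 * x + 1

# The bound holds at every integer sample point x = 2, ..., 9999.
assert all(f(x) <= 4 * x * x for x in range(2, 10_000))

# At the threshold itself, x = 1, f(1) = 4 = 4 * 1^2, so x > 1 is safe.
assert f(1) == 4
```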

The Growth of Functions

Question: If f(x) is O(x^2), is it also O(x^3)? Yes: x^3 grows faster than x^2, so x^3 also grows faster than f(x). Therefore, we always try to find the smallest simple function g(x) for which f(x) is O(g(x)).

The Growth of Functions

"Popular" functions g(n) are: n log n, 1, 2^n, n^2, n!, n, n^3, log n

Listed from slowest to fastest growth:

  1, log n, n, n log n, n^2, n^3, 2^n, n!
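The ordering can be confirmed at a sample size, say n = 20 (small n can reorder neighbors, e.g. n^3 > 2^n for tiny n; log base 2 is an arbitrary choice here, since the base does not affect growth):

```python
import math

n = 20
# The slide's functions, built in the claimed slow-to-fast order.
growth = [
    1,
    math.log2(n),           # log n
    n,
    n * math.log2(n),       # n log n
    n ** 2,
    n ** 3,
    2 ** n,
    math.factorial(n),      # n!
]
# If the claimed order is right, the list is already sorted.
assert growth == sorted(growth)
```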

The Growth of Functions A problem that can be solved with polynomial worst-case complexity is called tractable. Problems of higher complexity are called intractable. Problems that no algorithm can solve are called unsolvable. You will find out more about this in Discrete Structures 2. 34

Useful Rules for Big-O

For any polynomial f(x) = a_n x^n + a_(n-1) x^(n-1) + … + a_0, where a_0, a_1, …, a_n are real numbers, f(x) is O(x^n).

If f_1(x) is O(g(x)) and f_2(x) is O(g(x)), then (f_1 + f_2)(x) is O(g(x)).

If f_1(x) is O(g_1(x)) and f_2(x) is O(g_2(x)), then (f_1 + f_2)(x) is O(max(g_1(x), g_2(x))).

If f_1(x) is O(g_1(x)) and f_2(x) is O(g_2(x)), then (f_1 f_2)(x) is O(g_1(x) g_2(x)).

Complexity Examples

What does the following algorithm compute?

procedure who_knows(a_1, a_2, …, a_n: integers)
  who_knows := 0
  for i := 1 to n - 1
    for j := i + 1 to n
      if |a_i – a_j| > who_knows then who_knows := |a_i – a_j|
{who_knows is the maximum difference between any two numbers in the input sequence}

Comparisons: (n-1) + (n-2) + … + 1 = (n – 1)n/2 = 0.5n^2 – 0.5n

Time complexity is O(n^2).
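The who_knows procedure in Python (a sketch with a more descriptive name of my own):

```python
def max_pairwise_difference(a):
    """Maximum |a_i - a_j| over all pairs, as in the slide's
    who_knows procedure: compare every pair, (n-1)n/2 comparisons."""
    best = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            d = abs(a[i] - a[j])
            if d > best:
                best = d
    return best
```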

Complexity Examples

Another algorithm solving the same problem:

procedure max_diff(a_1, a_2, …, a_n: integers)
  min := a_1
  max := a_1
  for i := 2 to n
    if a_i < min then min := a_i
    else if a_i > max then max := a_i
  max_diff := max - min

Comparisons (worst case): 2n - 2

Time complexity is O(n).
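The linear-time alternative in Python (a sketch mirroring the slide's max_diff procedure):

```python
def max_diff(a):
    """Track the minimum and maximum in one pass over a non-empty
    sequence; the maximum pairwise difference is max - min."""
    lo = hi = a[0]
    for x in a[1:]:
        if x < lo:
            lo = x
        elif x > hi:
            hi = x
    return hi - lo
```

Both algorithms return the same answer, but this one makes at most 2n - 2 comparisons instead of (n - 1)n/2.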

Big-Omega and Big-Theta Notation

Big-O notation is used extensively to describe the growth of functions, but it has limitations. In particular, when f(x) is O(g(x)), we have an upper bound, in terms of g(x), on the size of f(x) for large values of x. However, big-O notation does not provide a lower bound on the size of f(x) for large x. For this, we use big-Omega (big-Ω) notation. When we want to give both an upper and a lower bound on the size of a function f(x), relative to a reference function g(x), we use big-Theta (big-Θ) notation.

Θ-notation

For a function g(n), we define Θ(g(n)), big-Theta of g(n), as the set of all functions that have the same rate of growth as g(n):

  Θ(g(n)) = { f(n) : there exist positive constants c_1, c_2, and n_0 such that 0 ≤ c_1·g(n) ≤ f(n) ≤ c_2·g(n) for all n ≥ n_0 }

g(n) is an asymptotically tight bound for f(n). Here f(n) and g(n) are nonnegative for large n.

Example: 10n^2 - 3n = Θ(n^2).

Relations Between Θ, O, and Ω

Theorem: For any two functions g(n) and f(n), f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

Θ(g(n)) = O(g(n)) ∩ Ω(g(n)): asymptotically tight bounds are obtained from asymptotic upper and lower bounds.

Complexity

Decision vs. Optimization Problems

Complexity Classes P and NP