

 O(1) – constant time
   The time is independent of n
 O(log n) – logarithmic time
   Usually the log is to base 2
 O(n) – linear time
 O(n log n)
 O(n^2) – quadratic time
 O(n^k) – polynomial time (where k is some constant)
 O(2^n) – exponential time

// PRE: arr is sorted
int max(int arr[], int size){
    return arr[size-1];
}
O(1)

int max(int arr[], int size){
    int maximum = arr[0];
    for (int i = 0; i < size; ++i){
        if (arr[i] > maximum){
            maximum = arr[i];
        }
    }
    return maximum;
}
O(n)

 What is the difference between the two max functions?
   The first always looks at the last element of the array
     ▪ Arrays support random access, so the time it takes to retrieve this value is not dependent on the array size
   The second contains a for loop
 The for loop in the max function iterates n times
   The loop control variable starts at 0, goes up by 1 for each loop iteration, and the loop ends when it reaches n
 If a function contains a for loop, is it always O(n)?
   Not necessarily

float rough_mean(int arr[], int n){
    float sum = 0;
    for (int i = 0; i < n; i += 10){
        sum += arr[i];
    }
    return sum / (n / 10.0);
}
O(n)
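The rough_mean loop above still iterates about n/10 times, so it remains O(n). For contrast, here is a sketch (my addition, not from the slides) of a loop that is genuinely not O(n): its control variable is halved on each iteration, so it runs only about log2(n) times.

```cpp
#include <cassert>

// Counts how many times a loop that halves its control variable
// iterates; the count grows logarithmically, not linearly, in n.
int halvingIterations(int n){
    int count = 0;
    for (int i = n; i > 0; i /= 2){
        ++count;
    }
    return count;
}
```

Doubling n adds only one iteration, which is the signature of logarithmic growth.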

bool search(int arr[], int size, int target){
    int low = 0;
    int high = size - 1;
    int mid = 0;
    while (low <= high){
        mid = (low + high) / 2;
        if (target == arr[mid]){
            return true;
        } else if (target > arr[mid]){
            low = mid + 1;
        } else { // target < arr[mid]
            high = mid - 1;
        }
    } // while
    return false;
}
O(log(n)) Average and worst case

 It is important to analyze how many times a loop iterates
   By considering how the loop control variable changes through each iteration
 Be careful to ignore constants
   Consider how the running time would change if the input doubled
   In an O(n) algorithm the running time will double
   In an O(log(n)) algorithm it increases by 1

float mean(int arr[], int size){
    float sum = 0;
    for (int i = 0; i < size; ++i){
        sum += arr[i];
    }
    return sum / size;
}
O(n)

float variance(int arr[], int size){
    float result = 0;
    float sqDiff = 0;
    for (int i = 0; i < size; ++i){
        sqDiff = arr[i] - mean(arr, size);
        sqDiff *= sqDiff;
        result += sqDiff;
    }
    return result;
}
How could this be improved? O(n^2)

float variance(int arr[], int size){
    float result = 0;
    float avg = mean(arr, size);
    float sqDiff = 0;
    for (int i = 0; i < size; ++i){
        sqDiff = arr[i] - avg;
        sqDiff *= sqDiff;
        result += sqDiff;
    }
    return result;
}
O(n)

void bubble(int arr[], int size){
    bool swapped = true;
    while (swapped){
        swapped = false;
        for (int i = 0; i < size-1; ++i){
            if (arr[i] > arr[i+1]){
                int temp = arr[i];
                arr[i] = arr[i+1];
                arr[i+1] = temp;
                swapped = true;
            }
        }
    }
}
O(n^2) Average and worst case

 Both the (stupid) variance and bubble functions contain nested loops
   Both inner loops perform O(n) iterations
     ▪ In variance the inner loop is contained in a function call
   And the outer loops also perform O(n) iterations
 The functions are therefore both O(n^2)
   Make sure that you check to see how many times both loops iterate
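As a sanity check on the nested-loop rule, this sketch (my addition, not from the slides) counts how many times the innermost statement of two n-iteration loops actually executes:

```cpp
#include <cassert>

// Counts executions of the innermost statement of two nested loops
// that each run n times: the total is n * n, hence O(n^2).
long long nestedCount(long long n){
    long long count = 0;
    for (long long i = 0; i < n; ++i){
        for (long long j = 0; j < n; ++j){
            ++count;
        }
    }
    return count;
}
```

Note that doubling n quadruples the count, the signature of quadratic growth.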

int foo(int arr[], int size){
    int result = 0;
    int i = 0;
    while (i < size / 2){
        result += arr[i];
        i += 1;
        while (i >= size / 2 && i < size){
            result += arr[i];
            i += 1;
        }
    }
    return result;
}
O(n)

bool alpha(string s){
    int end = s.size() - 1;
    for (int i = 0; i < end; ++i){
        if (s[i] > s[i+1]){
            return false;
        }
    }
    return true;
}
best case - O(1)
average case - ?
worst case - O(n)

 Best case and worst case analysis are often relatively straightforward
   Although they require a solid understanding of the algorithm's behaviour
 Average case analysis can be more difficult
   It may involve a more complex mathematical analysis of the function's behaviour
   But it can sometimes be achieved by considering whether the average is closer to the worst or the best case

int sum(int arr[], int size, int i){
    if (i == size - 1){
        return arr[i];
    } else {
        return arr[i] + sum(arr, size, i + 1);
    }
}
O(n)
Assume there is a calling function that calls sum(arr, size, 0)
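The calling function the slide assumes can be a one-line wrapper; sumArray is a hypothetical name, and sum is restated here so the snippet is self-contained:

```cpp
#include <cassert>

// Recursive sum from the slide, restated for self-containment.
// Assumes size >= 1 (the base case is the last element).
int sum(int arr[], int size, int i){
    if (i == size - 1){
        return arr[i];
    } else {
        return arr[i] + sum(arr, size, i + 1);
    }
}

// Hypothetical wrapper that hides the index parameter from callers.
int sumArray(int arr[], int size){
    return sum(arr, size, 0);
}
```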

 The analysis of a recursive function revolves around the number of recursive calls made
   And the running time of a single recursive call
 In the sum example the work done by a single call is constant
   It is not dependent on the size of the array
   One recursive call is made for each element of the array

 One way of analyzing a recursive algorithm is to draw a tree of the recursive calls
   Determine the depth of the tree
   And the running time of each level of the tree
 In Quicksort the partition algorithm is responsible for partitioning sub-arrays
   That at any level of the recursion tree make up the entire array when aggregated
   Therefore each level of the tree entails O(n) work
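The same conclusion can be reached from the best-case recurrence; this is a standard derivation added here for completeness, where c denotes the per-element cost of partitioning:

```latex
T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = \dots
     = 2^k T(n/2^k) + k \cdot cn
% After k = \log_2 n halvings the sub-arrays have size 1:
T(n) = n \cdot T(1) + cn \log_2 n = O(n \log n)
```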

[Recursion tree, best case: level 0 is one sub-array of size n; level 1 is two sub-arrays of size n/2; level 2 is four sub-arrays of size n/4; ... the last level is n sub-arrays of size 1]
At each level the partition process performs roughly n operations. How many levels are there?
At each level the sub-array size is half the size of the previous level, so there are O(log(n)) levels
Multiply the work at each level by the number of levels: O(n * log(n))

[Recursion tree, worst case: sub-array sizes shrink by one at each level: n, n-1, n-2, ... down to 1]
At each level the partition process performs roughly n operations. How many levels are there?
At each level the sub-array size is one less than the size of the previous level, so there are O(n) levels
Multiply the work at each level by the number of levels: O(n^2)

 The running time of the recursive Fibonacci function we looked at was painfully slow
   But just how bad was it?
 Let's consider a couple of possible running times
     ▪ O(n^2)
     ▪ O(2^n)
 We will use another tool to reason about the running time
   Induction

int fib(int n){
    if (n == 0 || n == 1){
        return n;
    } else {
        return fib(n-1) + fib(n-2);
    }
}

[Recursion tree for fib(5): fib(5) calls fib(4) and fib(3); fib(4) calls fib(3) and fib(2); and so on, with sub-problems such as fib(2), fib(1) and fib(0) computed repeatedly]

Clearly this is not an efficient algorithm but just how bad is it?

 Let's assume that it is O(n^2)
   Although this isn't supported by the recursion tree
 Base case – T(n ≤ 1) = O(1)
   True, since only 2 operations are performed
 Inductive hypothesis – T(n-1) ≤ (n-1)^2
 Inductive proof
   We would need n^2 ≥ (n-1)^2 + (n-2)^2
   (n-1)^2 + (n-2)^2 = (n^2 - 2n + 1) + (n^2 - 4n + 4) = 2n^2 - 6n + 5
   But 2n^2 - 6n + 5 > n^2 for all n > 5, so the inductive hypothesis is not proven

 Let's assume that it is O(2^n)
 Base case – T(n ≤ 1) = O(1)
   True, since only 2 operations are performed
 Inductive hypothesis – T(n-1) ≤ 2^(n-1)
 Inductive proof
   We need 2^n ≥ 2^(n-1) + 2^(n-2)
   Since 2^n = 2^(n-1) + 2^(n-1), and 2^(n-1) ≥ 2^(n-2), 2^n is at least 2^(n-1) + 2^(n-2)
   The inductive hypothesis is proven
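The exponential growth can also be observed directly; this sketch (my addition, not from the slides) instruments the naive Fibonacci function to count the total number of calls it makes:

```cpp
#include <cassert>

// Naive Fibonacci from the slides, instrumented with a call counter.
// The count satisfies calls(n) = calls(n-1) + calls(n-2) + 1, which
// grows exponentially, consistent with the O(2^n) bound above.
int fib(int n, long long &calls){
    ++calls;
    if (n == 0 || n == 1){
        return n;
    }
    return fib(n - 1, calls) + fib(n - 2, calls);
}
```

Computing fib(10) = 55 already takes 177 calls, and the count itself grows like the Fibonacci sequence.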