
Week 12 - Monday

What did we talk about last time?
- Trees
- Graphing functions

- Two vegetarians and two cannibals are on one bank of a river
- They have a boat that can hold at most two people
- Come up with a sequence of boat loads that will convey all four people safely to the other side of the river
- The cannibals on any given bank cannot outnumber the vegetarians… or else!

Which we somehow seemed to skip earlier…

Consider the following graph that shows all the routes an airline has between cities
[Route graph on the vertices Minneapolis, Milwaukee, Chicago, St. Louis, Detroit, Cincinnati, Louisville, and Nashville]

- What if we want to remove routes (to save money)?
- How can we keep all cities connected?
[The same route graph on the eight cities]

- Does this tree have the smallest number of routes?
- Why?
[A spanning tree on the eight cities]

- A spanning tree for a graph G is a subgraph of G that contains every vertex of G and is a tree
- Some properties:
  - Every connected graph has a spanning tree
    - Why?
  - Any two spanning trees for a graph have the same number of edges
    - Why?
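
For reference, the standard answers to the two "Why?" questions: a connected graph has a spanning tree because deleting one edge of any circuit leaves the graph connected, and repeating this must terminate in a connected, circuit-free subgraph containing every vertex; and any tree with n vertices has exactly n – 1 edges, so all spanning trees of the same graph have the same number of edges.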

- In computer science, we often talk about weighted graphs when tackling practical applications
- A weighted graph is a graph for which each edge has a real number weight
- The sum of the weights of all the edges is the total weight of the graph
- Notation: If e is an edge in graph G, then w(e) is the weight of e and w(G) is the total weight of G
- A minimum spanning tree (MST) is a spanning tree of lowest possible total weight

Here is the graph from before, with labeled weights
[The route graph on the eight cities; the edge weights did not survive the transcript]

- Kruskal's algorithm gives an easy-to-follow technique for finding an MST on a weighted, connected graph
- Informally, go through the edges, adding the smallest one, unless it forms a circuit
- Algorithm (a Java sketch follows below):
  - Input: Graph G with n vertices
  - Create a subgraph T with all the vertices of G (but no edges)
  - Let E be the set of all edges in G
  - Set m = 0
  - While m < n – 1
    - Find an edge e in E of least weight
    - Delete e from E
    - If adding e to T doesn't make a circuit
      - Add e to T
      - Set m = m + 1
  - Output: T
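
A minimal Java sketch of the procedure above, assuming vertices numbered 0 to n – 1 and an edge-list input; the Edge class, the union-find circuit test, and all names here are our own illustration, not from the lecture:

import java.util.*;

public class Kruskal {
    static class Edge {
        int u, v;       // endpoint vertices, numbered 0 .. n-1
        double weight;
        Edge( int u, int v, double weight ) { this.u = u; this.v = v; this.weight = weight; }
    }

    static int[] parent;  // union-find forest used for circuit detection

    // Find the representative of x's component (with path compression)
    static int find( int x ) {
        while( parent[x] != x ) {
            parent[x] = parent[parent[x]];
            x = parent[x];
        }
        return x;
    }

    // Returns the edges of an MST of a connected graph with n vertices
    static List<Edge> mst( int n, List<Edge> edges ) {
        edges.sort( Comparator.comparingDouble( e -> e.weight ) );  // least weight first
        parent = new int[n];
        for( int i = 0; i < n; i++ )
            parent[i] = i;                   // T starts with every vertex isolated

        List<Edge> tree = new ArrayList<>();
        for( Edge e : edges ) {
            int ru = find( e.u ), rv = find( e.v );
            if( ru != rv ) {                 // adding e does not make a circuit
                parent[ru] = rv;             // merge the two components
                tree.add( e );
                if( tree.size() == n - 1 )   // m has reached n - 1: tree spans G
                    break;
            }
        }
        return tree;
    }

    public static void main( String[] args ) {
        // Tiny made-up example: 4 vertices, 5 weighted edges
        List<Edge> edges = new ArrayList<>( List.of(
            new Edge( 0, 1, 1.0 ), new Edge( 1, 2, 2.0 ), new Edge( 0, 2, 2.5 ),
            new Edge( 2, 3, 1.5 ), new Edge( 1, 3, 4.0 ) ) );
        for( Edge e : mst( 4, edges ) )
            System.out.println( e.u + " - " + e.v + " (weight " + e.weight + ")" );
    }
}

Sorting the edges once replaces the repeated "find an edge of least weight" step, and the union-find test plays the role of "doesn't make a circuit."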

Run Kruskal's algorithm on the city graph:
[The weighted route graph on the eight cities]

Output:
[The resulting MST on the eight cities]

- Prim's algorithm gives another way to find an MST
- Informally, start at a vertex and add the next closest node not already in the MST
- Algorithm (a Java sketch follows below):
  - Input: Graph G with n vertices
  - Let subgraph T contain a single vertex v from G
  - Let V be the set of all vertices in G except for v
  - For i from 1 to n – 1
    - Find an edge e in G such that:
      - e connects T to one of the vertices in V
      - e has the lowest weight of all such edges
    - Let w be the endpoint of e in V
    - Add e and w to T
    - Delete w from V
  - Output: T
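
A minimal Java sketch of Prim's algorithm, assuming an adjacency-matrix representation (weights[i][j] is the edge weight, or infinity when there is no edge); the representation and all names are our own illustration, not from the lecture:

import java.util.*;

public class Prim {
    static final double INF = Double.POSITIVE_INFINITY;  // marks "no edge"

    // Returns the total weight of an MST of a connected graph
    static double mstWeight( double[][] weights ) {
        int n = weights.length;
        boolean[] inTree = new boolean[n];   // membership in T
        double[] best = new double[n];       // cheapest edge from T to each vertex
        Arrays.fill( best, INF );
        best[0] = 0;                         // start T at vertex 0 (the v in the slide)
        double total = 0;

        for( int i = 0; i < n; i++ ) {
            int w = -1;                      // w: the vertex in V closest to T
            for( int v = 0; v < n; v++ )
                if( !inTree[v] && (w == -1 || best[v] < best[w]) )
                    w = v;
            inTree[w] = true;                // add e and w to T, delete w from V
            total += best[w];
            for( int v = 0; v < n; v++ )     // edges from w may now be cheaper
                if( !inTree[v] && weights[w][v] < best[v] )
                    best[v] = weights[w][v];
        }
        return total;
    }

    public static void main( String[] args ) {
        // Same tiny made-up graph as in the Kruskal sketch
        double[][] g = {
            { INF, 1.0, 2.5, INF },
            { 1.0, INF, 2.0, 4.0 },
            { 2.5, 2.0, INF, 1.5 },
            { INF, 4.0, 1.5, INF } };
        System.out.println( "MST weight: " + mstWeight( g ) );  // 1.0 + 2.0 + 1.5 = 4.5
    }
}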

- Apply Kruskal's algorithm to the graph below
- Now, apply Prim's algorithm to the graph below
- Is there any other MST we could make?

- Recall the definition of the floor of x:
  - ⌊x⌋ = the largest integer that is less than or equal to x
- Graph f(x) = ⌊x⌋
- Defining functions on integers instead of real values affects their graphs a great deal
- Graph p1(x) = x, x ∈ R
- Graph f(n) = n, n ∈ N

- There is a strong visual (and of course mathematical) correlation between a function and a multiple of that function
- Examples:
  - g(x) = x + 2
  - 2g(x) = 2x + 4
- Given f graphed below, sketch 2f

- Consider the absolute value function f(x) = |x|
- Left of the origin it is constantly decreasing
- Right of the origin it is constantly increasing

- We say that f is decreasing on the set S iff for all real numbers x1 and x2 in S, if x1 < x2, then f(x1) > f(x2)
- We say that f is increasing on the set S iff for all real numbers x1 and x2 in S, if x1 < x2, then f(x1) < f(x2)
- We say that f is an increasing (or decreasing) function iff f is increasing (or decreasing) on its entire domain
- Clearly, a positive multiple of an increasing function is increasing
- Virtually all running time functions are increasing functions

Student Lecture

- Mathematicians worry about the growth of various functions
- They usually express such things in terms of limits, maybe with derivatives
- We are focused primarily on functions that bound the running time spent and the memory consumed
- We just need a rough guide
- We want to know the order of the growth

- Let f and g be real-valued functions defined on the same set of nonnegative real numbers
- f is of order at least g, written f(x) is Ω(g(x)), iff there is a positive A ∈ R and a nonnegative a ∈ R such that
  - A|g(x)| ≤ |f(x)| for all x > a
- f is of order at most g, written f(x) is O(g(x)), iff there is a positive B ∈ R and a nonnegative b ∈ R such that
  - |f(x)| ≤ B|g(x)| for all x > b
- f is of order g, written f(x) is Θ(g(x)), iff there are positive A, B ∈ R and a nonnegative k ∈ R such that
  - A|g(x)| ≤ |f(x)| ≤ B|g(x)| for all x > k
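
As a quick check of the definitions (our example, not from the slides): f(x) = 2x^2 + 1 is Θ(x^2), since 2|x^2| ≤ |2x^2 + 1| ≤ 3|x^2| for all x > 1; here A = 2, B = 3, and k = 1 witness the definition.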

- Express the following statements using appropriate notation:
  - 10|x^6| ≤ |17x^6 – 45x^3 + 2x + 8| ≤ 30|x^6|, for x > 2
  - [second statement lost in the transcript]
- Justify the following:
  - [expression lost in the transcript] is Θ(√x)
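
Note that the first inequality exhibits A = 10, B = 30, and k = 2, so the appropriate notation for it is: 17x^6 – 45x^3 + 2x + 8 is Θ(x^6).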

- Let f and g be real-valued functions defined on the same set of nonnegative real numbers
  1. f(x) is Ω(g(x)) and f(x) is O(g(x)) iff f(x) is Θ(g(x))
  2. f(x) is Ω(g(x)) iff g(x) is O(f(x))
  3. f(x) is Ω(f(x)), f(x) is O(f(x)), and f(x) is Θ(f(x))
  4. If f(x) is O(g(x)) and g(x) is O(h(x)), then f(x) is O(h(x))
  5. If f(x) is O(g(x)) and c is a positive real, then c·f(x) is O(g(x))
  6. If f(x) is O(h(x)) and g(x) is O(k(x)), then f(x) + g(x) is O(G(x)), where G(x) = max(|h(x)|, |k(x)|) for all x
  7. If f(x) is O(h(x)) and g(x) is O(k(x)), then f(x)g(x) is O(h(x)k(x))

- If 1 < x, then
  - x < x^2
  - x^2 < x^3
  - …
- So, for r, s ∈ R where r < s and x > 1, x^r < x^s
- By extension, x^r is O(x^s)

- Prove a Θ bound for g(x) = (1/4)(x – 1)(x + 1) for x ∈ R
- Prove that x^2 is not O(x)
  - Hint: Proof by contradiction
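
One possible sketch for the first exercise: g(x) = (1/4)(x – 1)(x + 1) = (x^2 – 1)/4, and (1/8)|x^2| ≤ |(x^2 – 1)/4| ≤ (1/4)|x^2| for all x > 2, so g(x) is Θ(x^2).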

- Let f(x) be a polynomial with degree n
  - f(x) = a_n x^n + a_(n-1) x^(n-1) + a_(n-2) x^(n-2) + … + a_1 x + a_0
- By extension from the previous results, if a_n is a positive real, then
  - f(x) is O(x^s) for all integers s ≥ n
  - f(x) is Ω(x^r) for all integers r ≤ n
  - f(x) is Θ(x^n)
- Furthermore, let g(x) be a polynomial with degree m
  - g(x) = b_m x^m + b_(m-1) x^(m-1) + b_(m-2) x^(m-2) + … + b_1 x + b_0
- If a_n and b_m are positive reals, then
  - f(x)/g(x) is O(x^c) for real numbers c > n – m
  - f(x)/g(x) is not O(x^c) for real numbers c < n – m
  - f(x)/g(x) is Θ(x^(n–m))
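
For instance, 3x^4 – x + 2 is Θ(x^4) by the rule above, while (3x^4 – x + 2)/(x^2 + 1) is Θ(x^(4–2)) = Θ(x^2).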

- We can easily extend our Ω-, O-, and Θ-notations to analyzing the running time of algorithms
- Imagine that an algorithm A is composed of some number of elementary operations (usually arithmetic, storing variables, etc.)
- We can imagine that the running time is tied purely to the number of operations
- This is, of course, a lie
  - Not all operations take the same amount of time
  - Even the same operation takes different amounts of time depending on caching, disk access, etc.

- First, assume that the number of operations performed by A on an input of size n depends only on n, not on the values of the data
  - If f(n) is Θ(g(n)), we say that A is Θ(g(n)) or that A is of order g(n)
- If the number of operations depends not only on n but also on the values of the data:
  - Let b(n) be the minimum number of operations; if b(n) is Θ(g(n)), we say that in the best case, A is Θ(g(n)), or that A has a best case order of g(n)
  - Let w(n) be the maximum number of operations; if w(n) is Θ(g(n)), we say that in the worst case, A is Θ(g(n)), or that A has a worst case order of g(n)

Approximate time for f(n) operations, assuming one operation per nanosecond:

f(n)     | n = 10        | n = 1,000          | n = 100,000             | n = 10,000,000
---------|---------------|--------------------|-------------------------|---------------------------
log2 n   | 3.3 x 10^-9 s | 10^-8 s            | 1.7 x 10^-8 s           | 2.3 x 10^-8 s
n        | 10^-8 s       | 10^-6 s            | 0.0001 s                | 0.01 s
n log2 n | 3.3 x 10^-8 s | 10^-5 s            | 0.0017 s                | 0.23 s
n^2      | 10^-7 s       | 0.001 s            | 10 s                    | 27.8 hours
n^3      | 10^-6 s       | 1 s                | 11.6 days               | 31,668 years
2^n      | 10^-6 s       | 3.4 x 10^284 years | 3.1 x 10^30086 years    | 2.9 x 10^3010283 years

- With a single for (or other) loop, we simply count the number of operations that must be performed:

int p = 0;
int x = 2;
for( int i = 2; i <= n; i++ )
    p = (p + i)*x;

- Counting multiplies and adds, (n – 1) iterations times 2 operations = 2n – 2
- As a polynomial, 2n – 2 is Θ(n)

- When loops do not depend on each other, we can simply multiply their iterations (and asymptotic bounds)

int p = 0;
for( int i = 2; i <= n; i++ )
    for( int j = 3; j <= n; j++ )
        p++;

- Clearly (n – 1)(n – 2) is Θ(n^2)

- When loops depend on each other, we have to do more analysis

int s = 0;
for( int i = 1; i <= n; i++ )
    for( int j = 1; j <= i; j++ )
        s = s + j*(i - j + 1);

- What's the running time here?
- Arithmetic sequence saves the day (for the millionth time)
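
For reference, the inner loop body runs i times on the i-th pass of the outer loop, so it executes 1 + 2 + … + n = n(n + 1)/2 times in total, which is Θ(n^2).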

- When loops depend on floor, what happens to the running time?

int a = 0;
for( int i = n/2; i <= n; i++ )
    a = n - i;

- Floor is used implicitly here, because we are using integer division
- What's the running time? Hint: Consider n as odd or as even separately
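
For reference: the loop runs from ⌊n/2⌋ up to n, which is n – ⌊n/2⌋ + 1 iterations; that is n/2 + 1 when n is even and (n + 3)/2 when n is odd, so either way the running time is Θ(n).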

- Consider a basic sequential search algorithm:

int search( int[] array, int n, int value ) {
    for( int i = 0; i < n; i++ )
        if( array[i] == value )
            return i;
    return -1;
}

- What's its best case running time?
- What's its worst case running time?
- What's its average case running time?
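
Standard answers, for reference: the best case is Θ(1) (the value sits in the first cell); the worst case is Θ(n) (the value is in the last cell or absent); and if the value is equally likely to be in any of the n cells, the average is about n/2 comparisons, which is still Θ(n).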

- Insertion sort is a common introductory sort
- It is suboptimal, but it is one of the fastest ways to sort a small list (10 elements or fewer)
- The idea is to sort initial segments of the array, inserting each new element in the right place as it is encountered
- So, for each new item, keep moving it up until the element above it is too small (or we hit the top)

public static void sort( int[] array, int n ) {
    for( int i = 1; i < n; i++ ) {
        int next = array[i];
        int j = i - 1;
        // Shift larger elements one cell up until next's spot is found
        while( j >= 0 && array[j] > next ) {
            array[j + 1] = array[j];
            j--;
        }
        array[j + 1] = next;
    }
}

- What is the best case analysis of insertion sort?
- Hint: Imagine the array is already sorted
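
For reference: on an already sorted array, the while condition fails immediately on every pass, so insertion sort does only n – 1 comparisons, and the best case is Θ(n).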

- What is the worst case analysis of insertion sort?
- Hint: Imagine the array is sorted in reverse order
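
For reference: on a reverse-sorted array, the i-th pass shifts all i earlier elements, so the total work is proportional to 1 + 2 + … + (n – 1) = n(n – 1)/2, and the worst case is Θ(n^2).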

- What is the average case analysis of insertion sort?
- Much harder than the previous two!
- Let's look at it recursively
- Let E_k be the average number of comparisons needed to sort k elements
- E_k can be computed as the sum of the average number of comparisons needed to sort k – 1 elements plus the average number of comparisons (x) needed to insert the k-th element in the right place
- E_k = E_(k-1) + x

- We can employ the idea of expected value from probability
- There are k possible locations for the element to go
- We assume that any of these k locations is equally likely
- For each turn of the loop, there are 2 comparisons to do
- There could be 1, 2, 3, … up to k turns of the loop
- Thus, weighting each possible number of iterations evenly gives us
  - x = (1/k)(2)(1 + 2 + … + k) = (2/k) · k(k + 1)/2 = k + 1

- Having found x, our recurrence relation is:
  - E_k = E_(k-1) + k + 1
- Sorting one element takes no time, so E_1 = 0
- Solve this recurrence relation!
- Well, if you really banged away at it, you might find:
  - E_n = (1/2)(n^2 + 3n – 4)
- By the polynomial rules, this is Θ(n^2), and so the average case running time is the same as the worst case
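
One way to bang away at it: iterating the recurrence from E_1 = 0 gives E_n = 3 + 4 + … + (n + 1) = (n + 1)(n + 2)/2 – 3 = (1/2)(n^2 + 3n – 4).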

- Finish algorithmic efficiency
- Exponential and logarithmic functions

- Keep reading Chapter 11
- Keep working on Assignment 9
  - Due Friday before midnight
- Exam 3 is next Monday
  - Review on Friday