Introduction to Algorithms. Rabie A. Ramadan, rabieramadan.org. Some of the slides are exported from different sources to clarify the topic.

Importance of algorithms. Algorithms are used in every aspect of our lives. Let's take an example.

Example: Suppose you are implementing a spreadsheet program in which you must maintain a grid of cells. Some cells of the spreadsheet contain numbers, but other cells contain expressions that depend on other cells for their value. However, the expressions are not allowed to form a cycle of dependencies: for example, if the expression in cell E1 depends on the value of cell A5, and the expression in cell A5 depends on the value of cell C2, then C2 must not depend on E1.

Example: Describe an algorithm for making sure that no cycle of dependencies exists (or for finding one and reporting it to the spreadsheet user if it does exist). If the spreadsheet changes, all its expressions may need to be recalculated. Describe an efficient method for ordering the expression evaluations, so that each cell is recalculated only after the cells it depends on have been recalculated.
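The slides do not include code for this example. As a rough sketch of one standard way to answer the first question (the function name `has_dependency_cycle` and the dictionary representation of dependencies are assumptions, not from the slides), a depth-first search that tracks which cells lie on the current path can detect a cycle:

```python
# A rough sketch, not from the slides: dependencies are modeled as a dict that
# maps each cell name to the list of cells its expression reads from.
def has_dependency_cycle(deps):
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on the current DFS path / finished
    color = {}

    def dfs(cell):
        color[cell] = GRAY
        for d in deps.get(cell, ()):
            state = color.get(d, WHITE)
            if state == GRAY:           # back edge: d is already on the path, so there is a cycle
                return True
            if state == WHITE and dfs(d):
                return True
        color[cell] = BLACK
        return False

    return any(color.get(c, WHITE) == WHITE and dfs(c) for c in deps)

# Example from the slide: E1 depends on A5 and A5 on C2; C2 depending on E1 closes a cycle.
print(has_dependency_cycle({"E1": ["A5"], "A5": ["C2"], "C2": ["E1"]}))  # True
```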

Another example: Order the following items into a food chain: fish, human, shrimp, sheep, wheat, plankton, tiger.

Solving the Topological Sorting Problem. Solution: verify whether a given digraph is a DAG and, if it is, produce an ordering of its vertices. There are two algorithms for solving the problem; they may give different (alternative) solutions. DFS-based algorithm: perform a DFS traversal and note the order in which vertices become dead ends (that is, are popped off the traversal stack). Reversing this order yields the desired solution, provided that no back edge has been encountered during the traversal.
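A minimal sketch of the DFS-based algorithm just described, under the assumption that the digraph is given as an adjacency dictionary (the representation and names are not from the slides); raising an error on a back edge implements the "provided that no back edge has been encountered" condition:

```python
# A sketch, assuming the digraph is an adjacency dict: vertex -> list of successors.
def topological_sort_dfs(graph):
    visited, on_stack, order = set(), set(), []

    def dfs(v):
        visited.add(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w in on_stack:              # back edge encountered: not a DAG
                raise ValueError("the digraph has a cycle, no topological order exists")
            if w not in visited:
                dfs(w)
        on_stack.remove(v)
        order.append(v)                    # v becomes a dead end: popped off the traversal stack

    for v in graph:
        if v not in visited:
            dfs(v)
    return order[::-1]                     # reversing the dead-end order gives the answer

print(topological_sort_dfs({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))
# ['a', 'c', 'b', 'd'] (one valid topological order)
```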

Example. Complexity: the same as DFS, i.e., Θ(|V| + |E|) with an adjacency-list representation.

Solving the Topological Sorting Problem. Source-removal algorithm: identify a source, that is, a vertex with no incoming edges, and delete it along with all edges outgoing from it (there must be at least one source for the problem to be solvable). Repeat this process on the remaining digraph. The order in which the vertices are deleted yields the desired solution.
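A hedged sketch of the source-removal idea (often called Kahn's algorithm); maintaining in-degree counts is one way to find new sources without rescanning the digraph, and the adjacency-dictionary representation is again an assumption:

```python
from collections import deque

def topological_sort_source_removal(graph):
    # graph: dict mapping each vertex to the list of vertices it points to
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] = indegree.get(w, 0) + 1

    sources = deque(v for v, d in indegree.items() if d == 0)   # vertices with no incoming edges
    order = []
    while sources:
        v = sources.popleft()
        order.append(v)                       # "delete" the source
        for w in graph.get(v, ()):
            indegree[w] -= 1                  # remove its outgoing edges
            if indegree[w] == 0:
                sources.append(w)             # w has just become a source

    if len(order) != len(indegree):
        raise ValueError("no source left but vertices remain: the digraph has a cycle")
    return order

print(topological_sort_source_removal({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))
# ['a', 'b', 'c', 'd']
```

Keeping a queue of current sources means each vertex and edge is handled once, which also bears on the efficiency question raised on the next slide.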

Example

Source-removal algorithm efficiency: the same as the efficiency of the DFS-based algorithm, but how would you identify a source efficiently? That is the main difficulty.

Analysis of algorithms. Issues: correctness, space efficiency, time efficiency, optimality. Approaches: theoretical analysis, empirical analysis.

Space Analysis. When considering space complexity, algorithms are divided into those that need extra space to do their work and those that work in place. Space analysis examines all of the data being stored to see if there are more efficient ways to store it. Example: as a developer, how do you store real numbers? Suppose we are storing a real number that has only one place of precision after the decimal point and ranges between -10 and +10. How many bytes do you need?

Space Analysis. Example: as a developer, how do you store real numbers? Suppose we are storing a real number that has only one place of precision after the decimal point and ranges between -10 and +10. How many bytes do you need? Most computers will use between 4 and 8 bytes of memory. If we first multiply the value by 10, we can then store it as an integer between -100 and +100, which needs only 1 byte, a saving of 3 to 7 bytes. A program that stores 1,000 of these values can save 3,000 to 7,000 bytes. It makes a big difference when programming mobile devices or PDAs, or when you have large inputs.
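A small sketch of the trick described above; using Python's array module with the "b" (signed byte) type code is one way to actually get one byte per value (the variable names and sample values are placeholders):

```python
from array import array

# Placeholder values: reals in [-10, 10] with one digit after the decimal point.
values = [3.7, -9.9, 0.1, 8.4]

scaled = array("b", (round(v * 10) for v in values))   # signed bytes in [-100, 100]: 1 byte each
restored = [x / 10 for x in scaled]                    # decode on the way back out

print(scaled.itemsize, restored)                       # 1 [3.7, -9.9, 0.1, 8.4]
```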

Theoretical analysis of time efficiency. Time efficiency is analyzed by determining the number of repetitions of the basic operation as a function of input size. Basic operation: the operation that contributes the most towards the running time of the algorithm. T(n) ≈ c_op · C(n), where T(n) is the running time, c_op is the execution time (cost) of the basic operation, C(n) is the number of times the basic operation is executed, and n is the input size. Note: different basic operations may cost differently!

Why are input classes important? The input determines what the path of execution through an algorithm will be. If we are interested in finding the largest value in a list of N numbers, we can use the algorithm sketched below:
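The transcript omits the algorithm the slide refers to; the sketch below shows the usual find-the-largest loop that the next slide's assignment counts appear to describe (a reconstruction, not the original slide code):

```python
def find_largest(numbers):
    largest = numbers[0]          # one assignment before the loop starts
    for x in numbers[1:]:
        if x > largest:
            largest = x           # an extra assignment only when a larger value appears
    return largest

print(find_largest([7, 3, 9, 1]))   # 9
```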

Why are input classes important? If the list is in decreasing order, there will be only one assignment, done before the loop starts. If the list is in increasing order, there will be N assignments (one before the loop starts and N-1 inside the loop). Our analysis must consider more than one possible set of inputs, because if we look at only one set of inputs, it may be the set that is solved the fastest (or slowest).

Input size and basic operation examples:
Problem | Input size measure | Basic operation
Searching for a key in a list of n items | number of the list's items, i.e., n | key comparison
Multiplication of two matrices | matrix dimensions or total number of elements | multiplication of two numbers
Checking primality of a given integer n | n's size = number of digits (in binary representation) | division
Typical graph problem | number of vertices and/or edges | visiting a vertex or traversing an edge

Importance of the analysis. It gives an idea of how fast the algorithm is. Suppose the number of basic operations is C(n) = ½ n(n-1) = ½ n² - ½ n ≈ ½ n². How much longer does the algorithm run if its input size doubles? Increasing the input size increases the complexity. We tend to omit the constants because they have little effect with large inputs; everything is based on the estimation T(n) ≈ c_op · C(n).
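As a worked step under the same approximation, doubling the input size roughly quadruples the running time, since the constants cancel:

```latex
\[
\frac{T(2n)}{T(n)} \approx \frac{c_{op}\,\tfrac12 (2n)^2}{c_{op}\,\tfrac12 n^2} = 4
\]
```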

Empirical analysis of time efficiency: select a specific (typical) sample of inputs; use a physical unit of time (e.g., milliseconds) or count the actual number of basic-operation executions; analyze the empirical/experimental data.
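A tiny sketch of the empirical approach using Python's standard timeit module; the measured function, input sizes, and repetition count are arbitrary choices for illustration:

```python
import random
import timeit

for n in (10_000, 100_000, 1_000_000):                   # a (typical) sample of input sizes
    data = [random.random() for _ in range(n)]
    t = timeit.timeit(lambda: sum(data), number=100)      # physical time for 100 repetitions
    print(f"n={n:>9}: {t:.4f} s")                         # the growth should look roughly linear
```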

Cases to consider in the analysis: best case, average case, worst case. For some algorithms, efficiency depends on the form of the input. Worst case: C_worst(n), the maximum over inputs of size n. Best case: C_best(n), the minimum over inputs of size n. Average case: C_avg(n), the "average" over inputs of size n; this is the toughest to determine.

Best case, average case, worst case. Average case: C_avg(n), the "average" over inputs of size n. Determine the number of different groups into which all possible input sets can be divided, the probability that the input will come from each of these groups, and how long the algorithm will run for each of these groups. Here n is the size of the input, m is the number of groups, p_i is the probability that the input will be from group i, and t_i is the time that the algorithm takes for input from group i.
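Written out, the description above amounts to a weighted sum over the groups (calling the average-case time A(n) is my notation, not the slide's):

```latex
\[
A(n) \;=\; \sum_{i=1}^{m} p_i \, t_i
\]
```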

Example: sequential search. Worst case: n key comparisons. Best case: 1 comparison. Average case: (n+1)/2 comparisons, assuming the key K is in the list A.
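For reference, a minimal sequential search sketch in which the counted basic operation is the key comparison:

```python
def sequential_search(a, key):
    # Returns the index of key in a, or -1 if it is absent.
    for i in range(len(a)):
        if a[i] == key:           # basic operation: one key comparison per iteration
            return i
    return -1

print(sequential_search([31, 41, 59, 26], 59))   # 2  (found after 3 comparisons)
print(sequential_search([31, 41, 59, 26], 99))   # -1 (worst case: n = 4 comparisons)
```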

Computing the average case for sequential search. Neither the worst nor the best case reflects the actual performance of an algorithm on random input; the average case does. Assume that: the probability of a successful search equals p (0 ≤ p ≤ 1); the probability of the first match occurring in the i-th position is the same for every i, so the probability that the match occurs at the i-th position is p/n for every i; and in the case of an unsuccessful search, the number of comparisons is n, with probability (1 - p).
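Under these assumptions the average count can be computed directly; the final simplification matches the p = 1 and p = 0 special cases discussed on the next slide:

```latex
\[
C_{avg}(n)
 = \Bigl[\,1\cdot\tfrac{p}{n} + 2\cdot\tfrac{p}{n} + \cdots + n\cdot\tfrac{p}{n}\Bigr] + n\,(1-p)
 = \frac{p}{n}\cdot\frac{n(n+1)}{2} + n\,(1-p)
 = \frac{p\,(n+1)}{2} + n\,(1-p)
\]
```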

Computing the average case for sequential search. If p = 1 (the key is always found), the average number of comparisons is (n+1)/2. If p = 0 (the key is never found), the average number of key comparisons is n. The average case is more difficult to analyze than the best and worst cases.

Mathematical Background

Logarithms

Which base? log_a n = (log_a b)(log_b n), i.e., log_a n = c · log_b n, where c = log_a b is a constant. In terms of complexity we tend to ignore the constant, so the base of the logarithm does not matter.

Mathematical Background

Types of formulas for the basic operation's count: an exact formula, e.g., C(n) = n(n-1)/2; a formula indicating the order of growth with a specific multiplicative constant, e.g., C(n) ≈ 0.5 n²; a formula indicating the order of growth with an unknown multiplicative constant, e.g., C(n) ≈ cn².

Order of growth

Of greater concern is the rate of increase in operations for an algorithm to solve a problem as the size of the problem increases. This is referred to as the rate of growth of the algorithm.

Order of growth. The function based on x² increases slowly at first, but as the problem size gets larger, it begins to grow at a rapid rate. The functions that are based on x both grow at a steady rate for the entire length of the graph. The function based on log x seems not to grow at all, but this is because it is actually growing at a very slow rate.

Values of some important functions as n → ∞.
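The transcript omits the table of values this slide showed; the short sketch below tabulates a few of these functions for sample sizes (the chosen sizes and formatting are arbitrary):

```python
import math

# Arbitrary sample sizes; the point is how quickly the last columns blow up.
functions = [("log n", lambda n: math.log2(n)),
             ("n", lambda n: n),
             ("n log n", lambda n: n * math.log2(n)),
             ("n^2", lambda n: n ** 2),
             ("n^3", lambda n: n ** 3),
             ("2^n", lambda n: 2.0 ** n)]

for n in (10, 100, 1000):
    row = "  ".join(f"{name}={f(n):.3g}" for name, f in functions)
    print(f"n={n:>5}: {row}")
```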

Order of growth, second point to consider: because the faster-growing functions increase at such a significant rate, they quickly dominate the slower-growing functions. This means that if we determine that an algorithm's complexity is a combination of two of these classes, we will frequently ignore all but the fastest-growing term. Example: if the complexity is, say, x² + 30x, we tend to ignore the 30x term.

Classification of Growth. Asymptotic order of growth: a way of comparing functions that ignores constant factors and small input sizes. O(g(n)): the class of functions f(n) that grow no faster than g(n), i.e., all functions with a smaller or the same order of growth as g(n). Ω(g(n)): the class of functions f(n) that grow at least as fast as g(n), i.e., all functions with a larger or the same order of growth as g(n). Θ(g(n)): the class of functions f(n) that grow at the same rate as g(n), i.e., the set of functions that have the same order of growth as g(n).

Big-oh. O(g(n)): the class of functions t(n) that grow no faster than g(n); t(n) ∈ O(g(n)) if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0. You may come up with different values of c and n0; for example, 100n + 5 ∈ O(n²) with c = 105 and n0 = 1, since 100n + 5 ≤ 105n ≤ 105n² for all n ≥ 1.

Big-omega. Ω(g(n)): the class of functions t(n) that grow at least as fast as g(n); t(n) ∈ Ω(g(n)) if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0.

Big-theta. Θ(g(n)): the class of functions t(n) that grow at the same rate as g(n); t(n) ∈ Θ(g(n)) if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.

Summary: Ω(g(n)) is the class of functions that grow at least as fast as g(n) (≥); Θ(g(n)) is the class of functions that grow at the same rate as g(n) (=); O(g(n)) is the class of functions that grow no faster than g(n) (≤).

Theorem: if t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}). The analogous assertions are true for the Ω-notation and Θ-notation. Implication: the algorithm's overall efficiency is determined by the part with the larger order of growth, i.e., its least efficient part. For example, 5n² + 3n log n ∈ O(n²). Proof: there exist constants c1, c2, n1, n2 such that t1(n) ≤ c1·g1(n) for all n ≥ n1 and t2(n) ≤ c2·g2(n) for all n ≥ n2. Define c3 = c1 + c2 and n3 = max{n1, n2}. Then t1(n) + t2(n) ≤ c3·max{g1(n), g2(n)} for all n ≥ n3.

Some properties of asymptotic order of growth: f(n) ∈ O(f(n)); f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n)); if f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n)); if f1(n) ∈ O(g1(n)) and f2(n) ∈ O(g2(n)), then f1(n) + f2(n) ∈ O(max{g1(n), g2(n)}). Also, Σ_{1≤i≤n} Θ(f(i)) = Θ(Σ_{1≤i≤n} f(i)).

Orders of growth of some important functions. All logarithmic functions log_a n belong to the same class Θ(log n), no matter what the logarithm's base a > 1 is, because log_a n = (log_a b) log_b n, i.e., the bases differ only by a constant factor. All polynomials of the same degree k belong to the same class: a_k n^k + a_{k-1} n^{k-1} + … + a_0 ∈ Θ(n^k). Exponential functions a^n have different orders of growth for different values of a.

Basic asymptotic efficiency classes:
1 | constant
log n | logarithmic
n | linear
n log n | n-log-n
n² | quadratic
n³ | cubic
2^n | exponential
n! | factorial