Analyzing Algorithms and Problems Prof. Sin-Min Lee Department of Computer Science
Euclid's Algorithm In Book VII of the Elements (written about 300 BC), Euclid gives an algorithm to calculate the greatest common divisor (highest common factor) of two numbers x and y, where x < y. It can be stated as: 1. Divide y by x, giving remainder r. 2. Replace y by x, and x by r. 3. Repeat from step 1 until r is zero. When the algorithm terminates, y is the greatest common divisor.
Euclid's Algorithm Euclid's Algorithm determines the greatest common divisor of two natural numbers a, b: the largest natural number d such that d | a and d | b. For example, GCD(33, 21) = 3: 33 = 1*21 + 12; 21 = 1*12 + 9; 12 = 1*9 + 3; 9 = 3*3 + 0.
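The steps above can be sketched in Python (a minimal illustration, not part of the original slides; the function name `gcd` is our own choice):

```python
def gcd(x, y):
    """Greatest common divisor via Euclid's algorithm."""
    while x != 0:
        # Divide y by x with remainder, then replace y by x and x by r.
        x, y = y % x, x
    return y

# The worked example from the slide: gcd(33, 21) == 3
```

The loop mirrors the three steps on the slide: each iteration performs one division with remainder and one replacement, and the loop stops when the remainder reaches zero.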
How do you know if you have the best algorithm? A lower bound can be established by arguing that a certain minimum number of basic operations must be performed by any correct algorithm. Any algorithm that performs exactly that number of operations is optimal, even if no algorithm using that number of steps has yet been found.
Tower of Hanoi Long ago, there were three towers in Hanoi. In the beginning, 64 disks of different sizes were stacked on tower A, with smaller disks always resting on larger ones. A monk had to move all the disks to tower C, following two rules: 1. Only one disk may be moved at a time, from one tower to any other. 2. At all times, a smaller disk may rest on a larger one; a larger disk may never be placed on a smaller one.
The Tower of Hanoi (sometimes referred to as the Tower of Brahma or the End of the World Puzzle) was invented by the French mathematician, Edouard Lucas, in 1883. He was inspired by a legend that tells of a Hindu temple where the pyramid puzzle might have been used for the mental discipline of young priests. Legend says that at the beginning of time the priests in the temple were given a stack of 64 gold disks, each one a little smaller than the one beneath it. Their assignment was to transfer the 64 disks from one of the three poles to another, with one important proviso—a large disk could never be placed on top of a smaller one. The priests worked very efficiently, day and night. When they finished their work, the myth said, the temple would crumble into dust and the world would vanish.
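The classic recursive solution to the puzzle can be sketched as follows (a minimal Python sketch, not part of the original slides; the function name `hanoi` and the tower labels are our own):

```python
def hanoi(n, source, target, spare, moves=None):
    """Recursively move n disks from source to target, using spare."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
    else:
        hanoi(n - 1, source, spare, target, moves)  # clear the way
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # restack on top of it
    return moves

# Moving n disks takes 2**n - 1 moves, so 64 disks take 2**64 - 1 moves:
# even at one move per second, far longer than the age of the universe.
```

The 2**n - 1 move count explains the legend: the priests' task is astronomically long even though each individual move is trivial.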
Count the number of basic operations performed by the algorithm on the worst-case input. A basic operation could be: an assignment; a comparison between two variables; an arithmetic operation between two variables. The worst-case input is the input for which the most basic operations are performed.
Algorithm Efficiency The efficiency of an algorithm is determined by counting the number of basic operations carried out when the algorithm is executed. A basic operation is a language-independent operation carried out by the algorithm, such as an addition, a subtraction, or a comparison of two values.
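As a concrete illustration of counting basic operations on a worst-case input, consider linear search with comparisons as the basic operation (a hypothetical sketch, not from the original slides):

```python
def linear_search(items, key):
    """Return (index, comparison count); comparison is the basic operation."""
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == key:
            return i, comparisons
    return -1, comparisons

# Worst case: the key is absent, so all n items are compared -> n operations.
```

Searching a list of 4 items for a missing key performs 4 comparisons, which is the worst case for input size n = 4.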
Determination of Efficiency How do we determine the number of basic operations for every possible input? That is impossible. Instead, we devise a way to compare algorithms to each other in a relative way, to get a "feel" for their efficiency. We do this by gauging each algorithm's asymptotic growth rate.
Asymptotic Notation for Big O Big O (omicron) can be defined as: f is in O(g) if lim(n→∞) f(n)/g(n) = c. Note: the following two conditions must be met: the limit must exist, and the limit c < ∞.
Comparison of Ω, Θ, and Ο Ο(g): functions that grow no faster than g. Θ(g): functions that grow at the same rate as g. Ω(g): functions that grow at least as fast as g. Together, Big O and Ω establish the upper and lower boundaries of asymptotic order: Θ(g) = Ο(g) ∩ Ω(g).
Notes about Asymptotic Notation A Big O bound should be stated with the smallest order possible. Lower-order terms and constant factors can be dropped as long as a higher-order term with a nonzero coefficient remains.
The Importance of Asymptotic Order An algorithm that runs in O(n) time is generally better than an algorithm that runs in O(n²) time.
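The gap between O(n) and O(n²) can be made concrete by counting basic operations directly (a minimal sketch, not from the original slides; the counting functions are our own illustration):

```python
def count_linear(n):
    """Basic-operation count for a single loop over n items: n operations."""
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def count_quadratic(n):
    """Basic-operation count for a nested loop over n items: n*n operations."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops
```

At n = 1,000 the linear loop performs 1,000 operations while the nested loop performs 1,000,000; the gap keeps widening as n grows, which is why asymptotic order matters more than constant factors.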
Clarification When talking about Big O, Ω, Θ, and little o, ω, we do not talk about specific functions (that is, there is no function that is exclusively Big O of something else). When we say that a function f is big O of a function g, we mean that f is a member of the set of functions that satisfy the given criteria for big O. The proper notation is to say that f is an element of the set of functions identified as O(g), or more explicitly: f ∈ O(g). Remember: f ∈ O(g).
Big O Notation Big O (omicron) notation is used to define a set of functions which grow no faster than a given function Let g be a function from the nonnegative integers to the positive real numbers
Definition of big O O(g) is the set of all functions f from the nonnegative integers into the positive real numbers such that the limit of f(n)/g(n) as n approaches ∞ is equal to a constant c with c < ∞.
Example of big O (1) You have three functions: g: y = 0.5x; f1: y = 0.2x; f2: y = 2x. You want to check whether f1 and f2 are O(g).
Example of big O (2) lim(n→∞) f1(n)/g(n) = 0.4 and lim(n→∞) f2(n)/g(n) = 4. Both values are less than ∞, so both f1 and f2 are elements of O(g).
Example of big O (3) In both cases, as n increased, the ratio between f1 (resp. f2) and g remained less than ∞, so f1 and f2 are O(g), or "f1 and f2 are O of g". In practice, this means that the functions in question (in our case f1 and f2) grow at a rate that is no faster than g.
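The ratio test from the slides can be checked numerically (a hypothetical sketch, not from the original slides; the helper `ratio` is our own):

```python
# The three functions from the example slides.
g  = lambda x: 0.5 * x
f1 = lambda x: 0.2 * x
f2 = lambda x: 2.0 * x

def ratio(f, g, n):
    """Ratio f(n)/g(n); its limit as n grows decides O(g) membership."""
    return f(n) / g(n)

for n in (10, 1_000, 100_000):
    r1, r2 = ratio(f1, g, n), ratio(f2, g, n)
    # Both ratios are constant (0.4 and 4.0), hence bounded as n grows,
    # so f1 and f2 are both elements of O(g).
    assert r1 < float("inf") and r2 < float("inf")
```

Because both ratios settle at finite constants rather than growing without bound, the limits exist and are less than ∞, which is exactly the two-condition test the slides state.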
When Big O shows a difference [Plot on the original slide: pink curve g = 1,000,000n; yellow curve f = eⁿ. Despite the huge constant factor, eⁿ eventually overtakes 1,000,000n.]
Summing up Big O In order for f ∈ O(g), the following two conditions must be met: the limit lim(n→∞) f(n)/g(n) must exist, and the limit c < ∞.
The Omega set The Omega set, Ω(g), is the "big brother" of Big O. Ω(g) is the set of functions f such that lim(n→∞) f(n)/g(n) = c, where c > 0 (the limit may be infinite).