3 Parallel Algorithm Complexity

Review algorithm complexity and various complexity classes:
Introduce the notions of time and time/cost optimality
Derive tools for analysis, comparison, and fine-tuning

Topics in This Chapter
3.1 Asymptotic Complexity
3.2 Algorithm Optimality and Efficiency
3.3 Complexity Classes
3.4 Parallelizable Tasks and the NC Class
3.5 Parallel Programming Paradigms
3.6 Solving Recurrences

3.1 Asymptotic Complexity

Fig. 3.1 Graphical representation of the notions of asymptotic complexity: f(n) = O(g(n)), f(n) = Ω(g(n)), f(n) = Θ(g(n)).

Examples:
3n log n = O(n²)
½ n log₂ n = Ω(n)
3n² + n = Θ(n²)
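
For reference, these notations have standard constant-based definitions (textbook forms, not spelled out on the slide), written here in LaTeX:

\[
\begin{aligned}
f(n) = O(g(n)) &\iff \exists\, c > 0,\ n_0 : f(n) \le c\, g(n) \ \text{for all } n \ge n_0 \\
f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 : f(n) \ge c\, g(n) \ \text{for all } n \ge n_0 \\
f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n))
\end{aligned}
\]

For instance, 3n log n = O(n²) holds with c = 3 and n₀ = 2, since log n ≤ n for n ≥ 2.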

Little Oh, Big Oh, and Their Buddies

Notation          Growth rate            Example of use
f(n) = o(g(n))    strictly less than     T(n) = cn² + o(n²)
f(n) = O(g(n))    no greater than        T(n, m) = O(n log n + m)
f(n) = Θ(g(n))    the same as            T(n) = Θ(n log n)
f(n) = Ω(g(n))    no less than           T(n, m) = Ω(√n + m^(3/2))
f(n) = ω(g(n))    strictly greater than  T(n) = ω(log n)

Some Commonly Encountered Growth Rates

Notation          Class name          Notes
O(1)              Constant            Rarely practical
O(log log n)      Double-logarithmic  Sublogarithmic
O(log n)          Logarithmic
O(logᵏ n)         Polylogarithmic     k is a constant
O(nᵃ), a < 1                          e.g., O(n^(1/2)) or O(n^(1−ε))
O(n / logᵏ n)                         Still sublinear
O(n)              Linear
O(n logᵏ n)                           Superlinear
O(nᶜ), c > 1      Polynomial          e.g., O(n^(1+ε)) or O(n^(3/2))
O(2ⁿ)             Exponential         Generally intractable
O(2^(2ⁿ))         Double-exponential  Hopeless!
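
To make the gaps between these classes concrete, here is a small tabulation sketch (the function list and sample sizes are my own illustration):

    import math

    # Representative growth functions, evaluated at a few problem sizes
    # to show how quickly the complexity classes separate.
    growth = {
        "log n":   lambda n: math.log2(n),
        "sqrt n":  lambda n: math.sqrt(n),
        "n":       lambda n: float(n),
        "n log n": lambda n: n * math.log2(n),
        "n^2":     lambda n: float(n ** 2),
        "2^n":     lambda n: 2.0 ** n if n <= 64 else float("inf"),
    }
    for n in (16, 256, 4096):
        row = ", ".join(f"{name}={f(n):.3g}" for name, f in growth.items())
        print(f"n={n}: {row}")

Even at n = 4096, the polynomial entries remain tiny compared with 2ⁿ, which already exceeds any realistic budget.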

Complexity History of Some Real Problems

Examples from the book Algorithmic Graph Theory and Perfect Graphs [GOLU04]:

Complexity of determining whether an n-vertex graph is planar:
Exponential   Kuratowski                   1930
O(n³)         Auslander and Parter         1961
              Goldstein                    1963
              Shirey                       1969
O(n²)         Lempel, Even, and Cederbaum  1967
O(n log n)    Hopcroft and Tarjan          1972
O(n)          Hopcroft and Tarjan          1974
              Booth and Lueker             1976

A second, more complex example is max network flow with n vertices and e edges, whose successive upper bounds have been:
ne² → n²e → n³ → n²e^(1/2) → n^(5/3)e^(2/3) → ne log² n → ne log(n²/e) → ne + n^(2+ε) → ne log_(e/(n log n)) n → ne log_(e/n) n + n² log^(2+ε) n

3.2 Algorithm Optimality and Efficiency

Suppose that we have constructed a valid algorithm to solve a given problem of size n in g(n) time, where g(n) is a known function such as n log₂ n or n², obtained through exact or asymptotic analysis. A question of interest is whether or not the algorithm at hand is the best algorithm for solving the problem.

What is the running time f(n) of the fastest algorithm for solving this problem? Of course, algorithm quality can be judged in many different ways, such as:
running time
resource requirements
simplicity (which affects the cost of development, debugging, and maintenance)
portability

If we are interested in asymptotic comparison, then because an algorithm with running time g(n) is already known, f(n) = O(g(n)); i.e., for large n, the running time of the best algorithm is upper bounded by c g(n) for some constant c. If, subsequently, someone develops an asymptotically faster algorithm for solving the same problem, say in time h(n), we conclude that f(n) = O(h(n)). The process of constructing and improving algorithms thus contributes to the establishment of tighter upper bounds for the complexity of the best algorithm.

Concurrently with the establishment of upper bounds as discussed above, we might work on determining lower bounds on a problem's time complexity. A lower bound is useful because it tells us how much room for improvement there might be in existing algorithms. Lower bounds can be established by several methods:

1. Showing that, in the worst case, solution of the problem requires data to travel a certain distance, or that a certain volume of data must pass through a limited-bandwidth interface. An example of the first method is the observation that any sorting algorithm on a p-processor square mesh needs at least 2√p − 2 communication steps in the worst case (diameter-based lower bound). The second method is exemplified by the worst-case linear time required by any sorting algorithm on a binary tree architecture (bisection-based lower bound).

2. Showing that, in the worst case, solution of the problem requires that a certain number of elementary operations be performed. This is the method used for establishing the Ω(n log n) lower bound for comparison-based sequential sorting algorithms.

3. Showing that any instance of a previously analyzed problem can be converted to an instance of the problem under study, so that an algorithm for solving our problem can also be used, with simple pre- and postprocessing steps, to solve the previous problem. Any lower bound established for the previous problem then carries over to the problem under study.
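
As a concrete sketch of method 3 (the element-uniqueness example is mine, not from the slide): element uniqueness has a known Ω(n log n) comparison-based lower bound, and any sorting algorithm solves it with O(n) pre- and postprocessing, so comparison-based sorting inherits the same bound.

    # Method 3: solve a previously analyzed problem (element uniqueness)
    # using an algorithm for the problem under study (sorting).
    # If sorting ran in o(n log n) comparisons, so would uniqueness testing,
    # contradicting its known Omega(n log n) lower bound.
    def all_distinct(values, sort=sorted):
        s = sort(values)                     # problem under study
        # postprocessing: after sorting, duplicates are adjacent
        return all(s[i] != s[i + 1] for i in range(len(s) - 1))

    print(all_distinct([3, 1, 4, 1, 5]))     # False (1 repeats)
    print(all_distinct([3, 1, 4, 5]))        # True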

Fig. 3.2 Upper and lower bounds may tighten over time.
Upper bounds: deriving/analyzing algorithms and proving them correct.
Lower bounds: theoretical arguments based on bisection width, and the like.

Some Notions of Algorithm Optimality

In the following, n is the problem size and p the number of processors.

Time optimality (optimal algorithm, for short):
T(n, p) = g(n, p), where g(n, p) is an established lower bound

Cost-time optimality (cost-optimal algorithm, for short):
pT(n, p) = T(n, 1); i.e., redundancy = utilization = 1

Cost-time efficiency (efficient algorithm, for short):
pT(n, p) = Θ(T(n, 1)); i.e., redundancy = utilization = Θ(1)
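
A worked example (the parallel-sum instance is my own, not from the slide): adding n numbers on p processors with local sums followed by a log-depth combine takes T(n, p) = n/p + log₂ p steps, so pT(n, p) = n + p log₂ p, which is Θ(T(n, 1)) = Θ(n) whenever p log₂ p = O(n).

    import math

    # Cost-time analysis of a simple parallel sum: T(n, p) = n/p + log2(p).
    def parallel_sum_time(n, p):
        return n / p + math.log2(p)

    n = 1 << 20
    for p in (1, 64, 4096, n):
        cost_ratio = p * parallel_sum_time(n, p) / n   # pT(n, p) / T(n, 1)
        print(f"p={p:>8}: cost ratio = {cost_ratio:6.2f}")

With p = n the cost ratio blows up to about log₂ n, so using the maximum number of processors is fast but not cost-optimal.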

3.3 Complexity Classes

In complexity theory, problems are divided into several complexity classes according to their running times on a single-processor system (or a deterministic Turing machine, to be more exact). Problems whose running times are upper bounded by polynomials in n are said to belong to the class P and are generally considered to be tractable, even if the polynomial is of a high degree, such that a large problem requires years of computation on the fastest available supercomputer.

For example, if solving a problem of size n requires the execution of 2ⁿ machine instructions, the running time for n = 100 on a GIPS (giga IPS) processor will be around 400 billion centuries! Problems for which the best known deterministic algorithm runs in exponential time are intractable. A problem of this kind for which, given a candidate solution, the correctness of the solution can be verified in polynomial time is said to belong to the NP (nondeterministic polynomial) class.
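
A quick back-of-the-envelope check of this figure (my own arithmetic, not on the slide):

    # 2^100 instructions at 10^9 instructions per second, in centuries.
    instructions = 2 ** 100
    seconds = instructions / 1e9
    centuries = seconds / (3600 * 24 * 365.25 * 100)
    print(f"{centuries:.3g} centuries")   # ~4.02e+11, i.e., about 400 billion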

Figure 3.4 A conceptual view of complexity classes and their relationships.

3.4 Parallelizable Tasks and the NC Class

A problem that takes 400 billion centuries to solve on a uniprocessor would still take 400 centuries even if it could be perfectly parallelized over 1 billion processors. Again, this statement does not refer to specific instances of the problem but to a general solution for all instances. Parallel processing is therefore generally of no avail for solving NP problems; it is primarily useful for speeding up the execution time of the problems in P.

Efficiently parallelizable problems in P might be defined as those that can be solved in a time period that is at most polylogarithmic in the problem size n, i.e., T(p) = O(logᵏ n) for some constant k, using no more than a polynomial number p = O(nˡ) of processors. This class of problems was later named Nick's Class (NC) in honor of Nick Pippenger. The class NC has been extensively studied and forms a foundation for parallel complexity theory.
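
As an illustrative NC-style example (my choice, not from the slide), the data-parallel prefix-sum pattern takes O(log n) rounds using one virtual processor per element, which fits the definition above; the rounds are simulated sequentially here:

    # Simulated data-parallel inclusive prefix sums (Hillis-Steele style):
    # O(log n) rounds, each doing n independent updates "in parallel".
    def prefix_sums(values):
        x = list(values)
        n = len(x)
        stride = 1
        while stride < n:                              # O(log n) rounds
            x = [x[i] + (x[i - stride] if i >= stride else 0)
                 for i in range(n)]                    # one parallel step
            stride *= 2
        return x

    print(prefix_sums([1, 2, 3, 4, 5]))  # [1, 3, 6, 10, 15]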

3.5 Parallel Programming Paradigms

Divide and conquer
Decompose the problem of size n into smaller problems; solve the subproblems independently; combine the subproblem results into the final answer (see the sketch after this slide). With decomposition time T_d(n), subproblem solution time T_s, and combining time T_c(n):
T(n) = T_d(n) + T_s + T_c(n)

Randomization
When it is impossible or difficult to decompose a large problem into subproblems with equal solution times, one might use random decisions that lead to good results with very high probability. Example: sorting with random sampling.

Approximation
Iterative numerical methods may use approximation to arrive at solution(s). Example: solving linear systems using Jacobi relaxation. Under proper conditions, the iterations converge to the correct solution(s); more iterations → greater accuracy.
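
A minimal divide-and-conquer sketch (merge sort, my choice of illustration) with the three cost terms of the paradigm marked:

    # Divide and conquer: T(n) = T_d(n) + T_s + T_c(n)
    def merge_sort(a):
        if len(a) <= 1:
            return a
        mid = len(a) // 2                  # T_d: O(1) decomposition
        left = merge_sort(a[:mid])         # T_s: subproblems could be
        right = merge_sort(a[mid:])        #      solved in parallel
        merged, i, j = [], 0, 0            # T_c: O(n) combining (merge)
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]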

The other randomization methods are:

1. Random search: When a large space must be searched for an element with certain desired properties, and it is known that such elements are abundant, random search can lead to very good average-case performance (see the sketch after this list).

2. Control randomization: To avoid consistently experiencing close to worst-case performance with one algorithm, related to some unfortunate distribution of inputs, the algorithm to be applied for solving a problem, or an algorithm parameter, can be chosen at random.

3. Symmetry breaking: Interacting deterministic processes may exhibit a cyclic behavior that leads to deadlock (akin to two people colliding when they try to exit a room through a narrow door, backing up, and then colliding again). Randomization can be used to break the symmetry and thus the deadlock.
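
A minimal sketch of method 1 above, random search (the "abundant good elements" setting is my own illustration): when roughly half of all candidates qualify, the expected number of random probes is about 2, independent of the size of the search space.

    import random

    # Random search: probe candidates at random; with abundant "good"
    # elements (here, odd numbers), few probes are needed on average.
    def random_search(is_good, domain_size, rng=random.Random(42)):
        probes = 0
        while True:
            probes += 1
            x = rng.randrange(domain_size)
            if is_good(x):
                return x, probes

    x, probes = random_search(lambda v: v % 2 == 1, 10**9)
    print(f"found {x} after {probes} probe(s)")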

3.6 Solving Recurrences

In all examples below, f(1) = 0 is assumed. The method used here is known as unrolling.

f(n) = f(n/2) + 1    {rewrite f(n/2) as f((n/2)/2) + 1}
     = f(n/4) + 1 + 1
     = f(n/8) + 1 + 1 + 1
     ...
     = f(n/n) + 1 + 1 + ... + 1    (log₂ n times)
     = log₂ n = Θ(log n)

f(n) = f(n − 1) + n    {rewrite f(n − 1) as f((n − 1) − 1) + n − 1}
     = f(n − 2) + n − 1 + n
     = f(n − 3) + n − 2 + n − 1 + n
     ...
     = f(1) + 2 + 3 + ... + n − 1 + n
     = n(n + 1)/2 − 1 = Θ(n²)

More Examples of Recurrence Unrolling

f(n) = f(n/2) + n
     = f(n/4) + n/2 + n
     = f(n/8) + n/4 + n/2 + n
     ...
     = f(n/n) + 2 + 4 + ... + n/4 + n/2 + n
     = 2n − 2 = Θ(n)

f(n) = 2f(n/2) + 1
     = 4f(n/4) + 2 + 1
     = 8f(n/8) + 4 + 2 + 1
     ...
     = n f(n/n) + n/2 + ... + 4 + 2 + 1
     = n − 1 = Θ(n)

Still More Examples of Unrolling

f(n) = f(n/2) + log₂ n
     = f(n/4) + log₂(n/2) + log₂ n
     = f(n/8) + log₂(n/4) + log₂(n/2) + log₂ n
     ...
     = f(n/n) + log₂ 2 + log₂ 4 + ... + log₂(n/2) + log₂ n
     = 1 + 2 + 3 + ... + log₂ n
     = log₂ n (log₂ n + 1)/2 = Θ(log² n)

f(n) = 2f(n/2) + n
     = 4f(n/4) + n + n
     = 8f(n/8) + n + n + n
     ...
     = n f(n/n) + n + n + ... + n    (log₂ n times)
     = n log₂ n = Θ(n log n)
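
A quick numeric sanity check of the unrolled solutions on these slides (my own verification code):

    import math

    # Evaluate each recurrence bottom-up with f(1) = 0 and compare the
    # result with the closed form obtained by unrolling (n a power of 2).
    def solve(step, n):
        f, m = {1: 0}, 2
        while m <= n:
            f[m] = step(f, m)
            m *= 2
        return f[n]

    n = 1 << 10   # 1024
    k = math.log2(n)
    checks = [
        (lambda f, m: f[m // 2] + 1,             k),              # log2 n
        (lambda f, m: f[m // 2] + m,             2 * n - 2),      # 2n - 2
        (lambda f, m: 2 * f[m // 2] + 1,         n - 1),          # n - 1
        (lambda f, m: f[m // 2] + math.log2(m),  k * (k + 1) / 2),
        (lambda f, m: 2 * f[m // 2] + m,         n * k),          # n log2 n
    ]
    for step, closed in checks:
        assert solve(step, n) == closed
    print("all closed forms confirmed for n =", n)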

Master Theorem for Recurrences

Theorem 3.1: Given f(n) = a f(n/b) + h(n), with a and b constants and h an arbitrary function, the asymptotic solution to the recurrence is (with c = log_b a):

f(n) = Θ(nᶜ)         if h(n) = O(n^(c − ε)) for some ε > 0
f(n) = Θ(nᶜ log n)   if h(n) = Θ(nᶜ)
f(n) = Θ(h(n))       if h(n) = Ω(n^(c + ε)) for some ε > 0

Example: f(n) = 2 f(n/2) + 1
a = b = 2; c = log_b a = 1
h(n) = 1 = O(n^(1 − ε))
f(n) = Θ(nᶜ) = Θ(n)
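
A small helper applying Theorem 3.1 (my own sketch, restricted to the common case h(n) = nᵈ with no logarithmic factors):

    import math

    # Master theorem for f(n) = a f(n/b) + n^d, with c = log_b(a).
    def master(a, b, d):
        c = math.log(a, b)
        if d < c:
            return f"Theta(n^{c:g})"            # h(n) = O(n^(c - eps))
        if d == c:
            return f"Theta(n^{c:g} log n)"      # h(n) = Theta(n^c)
        return f"Theta(n^{d:g})"                # h(n) = Omega(n^(c + eps))

    print(master(2, 2, 0))  # f(n) = 2f(n/2) + 1  ->  Theta(n^1)
    print(master(2, 2, 1))  # f(n) = 2f(n/2) + n  ->  Theta(n^1 log n)
    print(master(1, 2, 1))  # f(n) = f(n/2) + n   ->  Theta(n^1)

The last two calls match the unrolled results Θ(n log n) and Θ(n) from the preceding slides.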

The End