CEG 221 Lesson 5: Algorithm Development II Mr. David Lippa.


1 CEG 221 Lesson 5: Algorithm Development II Mr. David Lippa

2 Overview
– Algorithm Development II
  – Review of basic algorithm development
  – Advanced algorithm development
    – Optimization of the algorithm
    – Optimization of the code
– Questions

3 What is an Algorithm?
An algorithm is a high-level set of clear, step-by-step actions taken in order to solve a problem, frequently expressed in English or pseudo code.
Examples of algorithms:
– Computing the remaining angles and side in an SAS triangle
– Computing an integral using the rectangle approximation method (RAM) or the Trapezoidal Rule

4 Example: Triangulation with SAS
If we return to the SAS triangle, there is not much to be done to improve speed or efficiency, as the computation is very straightforward.
[Diagram: triangle ABC with side a = 60, side b = 100, and an included angle of 42°]
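A small C sketch of the SAS computation, using the law of cosines for the missing side and the angle sum for the remaining angles. It assumes the 42° on the slide is the included angle between sides a and b; the function name and interface are my own.

#include <math.h>
#include <stdio.h>

/* SAS triangle: given sides a and b and the included angle C in degrees,
   compute the remaining side c and the angles A and B (also in degrees).
   The interface is illustrative; the values in main match the slide. */
void solve_sas(double a, double b, double C_deg,
               double *c, double *A_deg, double *B_deg)
{
    const double PI = acos(-1.0);
    double C = C_deg * PI / 180.0;

    /* Law of cosines gives the side opposite the known angle. */
    *c = sqrt(a * a + b * b - 2.0 * a * b * cos(C));

    /* Law of cosines again for angle A (unambiguous, unlike asin). */
    *A_deg = acos((b * b + (*c) * (*c) - a * a) / (2.0 * b * (*c))) * 180.0 / PI;

    /* The angles of a triangle sum to 180 degrees. */
    *B_deg = 180.0 - C_deg - *A_deg;
}

int main(void)
{
    double c, A, B;
    solve_sas(60.0, 100.0, 42.0, &c, &A, &B);   /* a = 60, b = 100, C = 42 deg */
    printf("c = %.2f, A = %.2f deg, B = %.2f deg\n", c, A, B);
    return 0;
}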

5 Example: Trapezoidal Rule
If the interval is [0, 1] with 4 trapezoids (width h = 0.25), the straightforward approach computes:
SUM = ( f(0) + f(0.25) ) + ( f(0.25) + f(0.5) ) + ( f(0.5) + f(0.75) ) + ( f(0.75) + f(1.0) )
AREA = (h / 2) * SUM
4 trapezoids → 8 evaluations of f; 8 trapezoids → 16 evaluations; 1024 trapezoids → 2048 evaluations
You notice that we compute f(x) more times than necessary: every interior value is evaluated twice. Instead, let's evaluate each point once:
SUM2 = 2 * ( f(0) + f(0.25) + f(0.5) + f(0.75) + f(1.0) ) – f(0) – f(1.0)
AREA2 = (h / 2) * SUM2
4 trapezoids → 5 evaluations of f; 8 trapezoids → 9 evaluations; 1024 trapezoids → 1025 evaluations
Notice a pattern? The straightforward approach needs 2n evaluations for n trapezoids; the improved one needs only n + 1.
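A minimal C sketch of the improved approach. The integrand f(x) = x*x, the function names, and the test interval are placeholders for illustration only.

#include <stdio.h>

/* Example integrand -- a placeholder; any f(x) could be substituted. */
static double f(double x)
{
    return x * x;
}

/* Trapezoidal rule that evaluates f exactly once per sample point:
   n + 1 evaluations for n trapezoids instead of 2 * n. */
static double trapezoid(double a, double b, int n)
{
    double h = (b - a) / n;
    double sum = f(a) + f(b);          /* endpoints counted once each   */
    for (int i = 1; i < n; i++)        /* interior points counted twice */
        sum += 2.0 * f(a + i * h);
    return (h / 2.0) * sum;
}

int main(void)
{
    /* Integral of x^2 over [0, 1] is 1/3; 1024 trapezoids gets close. */
    printf("%f\n", trapezoid(0.0, 1.0, 1024));
    return 0;
}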

6 Trapezoidal Rule Improvements
For a small number of trapezoids, this improvement makes little difference. For, say, 1024 trapezoids, it is significantly more efficient in terms of the number of mathematical calculations.
CONCLUSION: Given that greater accuracy comes with more trapezoids, this optimization is worthwhile, since the algorithm will rarely be used with only a few trapezoids.

7 Optimizing Implemented Code
There are other ways to speed up code:
– Sacrifice memory for improved speed (i.e., always try to work from memory, not from disk)
– Avoid algorithms where the ratio of work to the number of elements processed is n, i.e., an O(n²) algorithm
– Pass by reference or pointer where appropriate to prevent unnecessary memory copies of large structures (see the sketch below)
– Use algorithm analysis to try to find the cause of the lack of speed
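A minimal C sketch of the pass-by-pointer point above; the structure name and size are made up for illustration.

/* A hypothetical large structure -- the name and size are invented. */
typedef struct {
    double samples[100000];
} BigData;

/* Pass by value: the entire structure is copied for each call. */
double sum_by_value(BigData d)
{
    double total = 0.0;
    for (int i = 0; i < 100000; i++)
        total += d.samples[i];
    return total;
}

/* Pass by pointer: only an address is copied; const signals that the
   function does not modify the caller's data. */
double sum_by_pointer(const BigData *d)
{
    double total = 0.0;
    for (int i = 0; i < 100000; i++)
        total += d->samples[i];
    return total;
}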

8 Algorithm Analysis
Big-Oh notation – how much work is required to process n inputs, expressed in terms of n
– Constants are less important for Big-Oh notation
– Common classes: O(1), O(log₂ n), O(n), O(n log₂ n), O(n²), O(n³), O(2ⁿ), O(n!)
– Associate an algorithm with each: matrix multiplication, integration, SAS, factorial
– Formal definition (reproduced below)
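The formal definition referenced on the slide is the standard one: f(n) is O(g(n)) if there exist constants c > 0 and n₀ such that f(n) ≤ c · g(n) for all n ≥ n₀. Informally, some constant multiple of g eventually bounds f from above, which is why constant factors are dropped.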

9 Using Algorithm Analysis
– Analyze an algorithm by computing the number of operations performed per unit of input
– Avoid O(n²) or worse algorithms
– Convert code to pseudo code if needed to do a theoretical analysis
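A small illustration of counting operations per unit of input (my own example, not from the slides): the first function does work proportional to n, the second roughly n * n.

/* O(n): one addition per element -> n operations total. */
long sum_elements(const int *a, int n)
{
    long total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}

/* O(n^2): the inner loop runs up to n times for each of the n outer
   iterations, so roughly n * n comparisons in total. */
int count_duplicate_pairs(const int *a, int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] == a[j])
                count++;
    return count;
}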

10 Algorithm Analysis: Example
Matrix multiplication
– Pseudo code: to multiply an m x n matrix [A] by an n x p matrix [B], dot product each row of [A] with each column of [B]
– Results in m * p dot products (see previous notes for pseudo code details)
Dot product
– Pseudo code: to dot product two vectors, multiply the first elements of each, then the second, the third, and so on, and add the products together
– Results in n multiplications and n additions (see previous notes for pseudo code details)
RESULT: Matrix multiplication is an O(m * n * p) operation. With square matrices, it is O(n³).
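A C sketch of the pseudo code above; the row-major flat-array layout and the function name are my own choices.

/* Multiply an m x n matrix A by an n x p matrix B into an m x p matrix C.
   Matrices are stored row-major as flat arrays. */
void matrix_multiply(const double *A, const double *B, double *C,
                     int m, int n, int p)
{
    for (int i = 0; i < m; i++) {              /* each row of A ...            */
        for (int j = 0; j < p; j++) {          /* ... dotted with each column  */
            double dot = 0.0;                  /*     of B: m * p dot products */
            for (int k = 0; k < n; k++)        /* each dot product costs n     */
                dot += A[i * n + k] * B[k * p + j];  /* multiply-adds          */
            C[i * p + j] = dot;
        }
    }
    /* Total work: m * p dot products of length n -> O(m * n * p). */
}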

11 Next Time
– Building libraries
– Using libraries
– Tradeoffs
– Questions

12 Questions?

