
1 Computer Science 320 Measuring Sizeup

2 Speedup vs Sizeup
If we add more processors, we should be able to solve a problem of a given size faster.
If we add more processors, we should be able to increase the size of the problem that we can solve in a given amount of time.

3 Speedup vs Sizeup
T(N, K) says that the running time T is a function of the problem size N and the number of processors K.
N(T, K) says that the problem size N is a function of the running time T and the number of processors K.

4 What Is Sizeup?
Sizeup is the problem size a parallel version running on K processors can solve, relative to the problem size a sequential version running on one processor can solve in the same time:
Sizeup(T, K) = N_par(T, K) / N_seq(T, 1)
Ideally, sizeup is linear in K.

5 What Is Sizeup Efficiency?
SizeupEff(T, K) = Sizeup(T, K) / K
Usually a fraction < 1.
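To make the two formulas concrete, here is a minimal Java sketch that computes sizeup and sizeup efficiency from measured problem sizes; the class name and the sample measurements are hypothetical, not taken from the slides.

```java
// Minimal sketch: sizeup and sizeup efficiency from measured problem sizes.
// The sample numbers below are hypothetical illustrations.
public class SizeupCalc {
    public static void main(String[] args) {
        // nSeq: largest problem size the sequential version solves in time T.
        // nPar[i]: largest problem size the parallel version solves in the
        //          same time T on procs[i] processors.
        double nSeq = 1.00e6;
        double[] nPar = {1.00e6, 1.96e6, 3.85e6, 7.40e6};
        int[] procs = {1, 2, 4, 8};
        for (int i = 0; i < procs.length; ++i) {
            double sizeup = nPar[i] / nSeq;       // Sizeup(T, K) = N_par(T, K) / N_seq(T, 1)
            double sizeupEff = sizeup / procs[i]; // SizeupEff(T, K) = Sizeup(T, K) / K
            System.out.printf("K=%d  sizeup=%.2f  sizeup eff=%.2f%n",
                              procs[i], sizeup, sizeupEff);
        }
    }
}
```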

6 Gustafson’s Law
The sequential portion of a parallel program puts an upper bound on the efficiency it can achieve.
Don’t run a problem of the same size on more and more processors.
Instead, scale up the problem size so that the running time stays the same.

7 Gustafson’s Law
Determine what the running time would be on a single processor for the larger problem size attained by using K processors, holding T(N, K) constant:
T(N, 1) = F * T(N, K) + K * (1 – F) * T(N, K)

8 Speedup and Efficiency
T(N, 1) = F * T(N, K) + K * (1 – F) * T(N, K)
Speedup(N, K) = F + K – K * F
Eff(N, K) = F / K + 1 – F
As K increases, speedup keeps increasing without limit, and efficiency approaches 1 – F as K goes to infinity.
Unlike Amdahl’s Law, which says speedup approaches 1 / F and efficiency approaches 0 as K increases.
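The speedup and efficiency expressions follow directly from the scaled-time equation, assuming the usual definitions Speedup(N, K) = T(N, 1) / T(N, K) and Eff(N, K) = Speedup(N, K) / K; a short derivation:

```latex
\begin{align*}
T(N,1) &= F\,T(N,K) + K\,(1-F)\,T(N,K) \\
\mathrm{Speedup}(N,K) &= \frac{T(N,1)}{T(N,K)} = F + K\,(1-F) = F + K - K F \\
\mathrm{Eff}(N,K) &= \frac{\mathrm{Speedup}(N,K)}{K} = \frac{F}{K} + 1 - F
  \;\longrightarrow\; 1 - F \quad\text{as } K \to \infty
\end{align*}
```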

9 Different Assumptions
Amdahl: defines the sequential fraction F with respect to the running time on one processor.
Gustafson: defines the sequential fraction F with respect to the running time on K processors.

10 Problem Size Laws
Running time is held constant, but N varies; the running time model with model parameters a and d is
T(N, K) = a + 1 / K * d * N
Solve for N to get the problem size model:
N(T, K) = 1 / d * K * (T – a)
This is the First Problem Size Law.
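The problem size model is just the running time model rearranged to isolate N:

```latex
T = a + \frac{d\,N}{K}
\quad\Longrightarrow\quad
\frac{d\,N}{K} = T - a
\quad\Longrightarrow\quad
N(T,K) = \frac{1}{d}\,K\,(T - a)
```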

11 Ideal Sizeup and Efficiency
N(T, K) = 1 / d * K * (T – a)
Using the First Problem Size Law to determine sizeup and efficiency, we get
Sizeup(T, K) = K
SizeupEff(T, K) = 1
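These ideal results follow by plugging the First Problem Size Law into the sizeup definition, assuming the sequential version obeys the same model with K = 1:

```latex
\mathrm{Sizeup}(T,K)
  = \frac{N_{\mathrm{par}}(T,K)}{N_{\mathrm{seq}}(T,1)}
  = \frac{\tfrac{1}{d}\,K\,(T-a)}{\tfrac{1}{d}\,(T-a)}
  = K,
\qquad
\mathrm{SizeupEff}(T,K) = \frac{\mathrm{Sizeup}(T,K)}{K} = 1
```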

12 Realistic Sizeup and Efficiency
The sequential portion’s running time does increase as N goes up:
T(N, K) = (a + b * N) + 1 / K * (c + d * N)
N(T, K) = (K * T – K * a – c) / (K * b + d)
This is the Second Problem Size Law.
Then, for large T, Sizeup(T, K) = (K * G + K) / (K * G + 1), where G = b / d.
As K goes to infinity, Sizeup(T, K) approaches 1 + 1 / G.
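Below is a minimal Java sketch of the Second Problem Size Law. The model parameters a, b, c, d and the time budget are hypothetical numbers made up for illustration, and the sequential version is assumed to follow the same model with K = 1; the sketch compares the model's sizeup with the (K * G + K) / (K * G + 1) approximation and prints the 1 + 1 / G limit.

```java
// Minimal sketch of the Second Problem Size Law; all parameters are hypothetical.
public class SecondSizeLaw {
    // N(T, K) = (K*T - K*a - c) / (K*b + d)
    static double problemSize(double t, int k,
                              double a, double b, double c, double d) {
        return (k * t - k * a - c) / (k * b + d);
    }

    public static void main(String[] args) {
        double a = 0.5, b = 1e-6, c = 2.0, d = 1e-4;  // hypothetical fitted parameters
        double g = b / d;                             // G = b / d
        double t = 600.0;                             // fixed running time budget
        double nSeq = problemSize(t, 1, a, b, c, d);  // sequential version, K = 1
        for (int k : new int[] {1, 2, 4, 8, 16, 1000}) {
            double sizeup = problemSize(t, k, a, b, c, d) / nSeq;
            double largeT = (k * g + k) / (k * g + 1);  // large-T approximation
            System.out.printf("K=%-4d sizeup=%.3f  (K*G+K)/(K*G+1)=%.3f%n",
                              k, sizeup, largeT);
        }
        System.out.printf("Limit as K goes to infinity: 1 + 1/G = %.1f%n", 1.0 + 1.0 / g);
    }
}
```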

13 Sizeup or Speedup?
Fine-tune and test speedup during development.
Focus on sizeup during operation.

