1 CODE TUNING AND OPTIMIZATION Kadin Tseng Boston University Scientific Computing and Visualization

2 Outline
  Introduction
  Timing
  Example Code
  Profiling
  Cache
  Tuning Tips
  Parallel Performance

3 Introduction
  Timing: where is most time being used?
  Tuning: how to speed it up; often as much art as science
  Parallel Performance: how to assess how well parallelization is working

4 Timing

5 Timing
  When tuning/parallelizing a code, you need to assess the effectiveness of your efforts.
  You can time the whole code and/or specific sections.
  Some types of timers:
    unix time command
    function/subroutine calls
    profiler

6 CPU Time or Wall-Clock Time?
  CPU time: how much time the CPU is actually crunching away
    User CPU time: time spent executing your source code
    System CPU time: time spent in system calls such as I/O
  Wall-clock time: what you would measure with a stopwatch

7 CPU Time or Wall-Clock Time? (cont'd)
  Both are useful.
  For serial runs without interaction from the keyboard, CPU and wall-clock times are usually close.
  If you prompt for keyboard input, wall-clock time will accumulate while you get a cup of coffee, but CPU time will not.

8 CPU Time or Wall-Clock Time? (3)
  For parallel runs you want wall-clock time, since total CPU time will stay about the same or even increase as the number of processors is increased.
  Wall-clock time may not be accurate if you are sharing processors.
  Wall-clock timings should always be performed in batch mode.

9 Unix Time Command
  Easiest way to time code: simply type time before your run command.
  Output differs between C-type shells (csh, tcsh) and Bourne-type shells (sh, bash, ksh).

10 Unix Time Command (cont'd)
  katana:~ % time mycode
  1.570u 0.010s 0:01.62 97.5% 0+0k 0+0io 64pf+0w
  Fields: user CPU time (s), system CPU time (s), wall-clock time, (u+s)/wc as a percentage, average shared + unshared text space, input + output operations, page faults + number of times the process was swapped.

11 Unix Time Command (3)
  Bourne shell results:
  $ time mycode
  real 0m1.62s
  user 0m1.57s
  sys  0m0.03s
  real = wall-clock time, user = user CPU time, sys = system CPU time.

12 Example Code

13 Example Code
  Simulation of the response of the eye to stimuli (CNS Dept.), based on the Grossberg & Todorovic paper.
  The model contains 6 levels of response; our code only contains levels 1 through 5.
  Level 6 takes a long time to compute and would skew our timings!

14 Example Code (cont'd)
  All calculations are done on a square array.
  Array size and other constants are defined in gt.h (C) or in the mods module at the top of the code (Fortran).

15 Level 1 Equations
  Computational domain is a square.
  Defines square array I over the domain (initial condition).
  [Figure: bright and dark regions of the initial condition.]

16 Level 2 Equations
  I_pq = initial condition

17 Level 3 Equations

18 Level 4 Equations

19 Level 5 Equation

20 Exercise 1
  Copy files from the /scratch disc:
  katana% cp /scratch/kadin/tuning/* .
  Choose C (gt.c and gt.h) or Fortran (gt.f90).
  Compile with no optimization (-O0 is capital O, zero; -o is lowercase o):
  pgcc -O0 -o gt gt.c
  pgf90 -O0 -o gt gt.f90
  Submit the rungt script to the batch queue:
  katana% qsub -b y rungt

21 Exercise 1 (cont'd)
  Check status: qstat -u username
  After the run has completed, a file will appear named rungt.o??????, where ?????? represents the process number.
  The file contains the result of the time command. Write down the wall-clock time.
  Re-compile using -O3, re-run, and check the time again.

22 Function/Subroutine Calls
  You often need to time only part of the code.
  Timers can be inserted in the source code.
  The calls are language-dependent.

23 cpu_time
  Intrinsic subroutine in Fortran; returns user CPU time (in seconds); no system time is included.
  real :: t1, t2
  call cpu_time(t1) ! start timer
  ... perform computation here ...
  call cpu_time(t2) ! stop timer
  print*, 'CPU time = ', t2-t1, ' sec.'

24 system_clock
  Intrinsic subroutine in Fortran; good for measuring wall-clock time.

25 system_clock (cont'd)
  t1 and t2 are tic counts; count_rate is an optional argument containing tics/sec.
  integer :: t1, t2, count_rate
  call system_clock(t1, count_rate) ! start clock
  ... perform computation here ...
  call system_clock(t2) ! stop clock
  print*, 'wall-clock time = ', &
      real(t2-t1)/real(count_rate), ' sec.'

26 times
  Can be called from C to obtain CPU time.
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/times.h>
  void main(){
      int tics_per_sec;
      float tic1, tic2;
      struct tms timedat;
      tics_per_sec = sysconf(_SC_CLK_TCK);
      times(&timedat);  // start clock
      tic1 = timedat.tms_utime;
      ... perform computation here ...
      times(&timedat);  // stop clock
      tic2 = timedat.tms_utime;
      printf("CPU time = %5.2f\n",
             (float)(tic2-tic1)/(float)tics_per_sec);
  }
  Can also get system time with tms_stime.

27 gettimeofday
  Can be called from C to obtain wall-clock time.
  #include <stdio.h>
  #include <sys/time.h>
  void main(){
      struct timeval t;
      double t1, t2;
      gettimeofday(&t, NULL);  // start clock
      t1 = t.tv_sec + 1.0e-6*t.tv_usec;
      ... perform computation here ...
      gettimeofday(&t, NULL);  // stop clock
      t2 = t.tv_sec + 1.0e-6*t.tv_usec;
      printf("wall-clock time = %5.3f\n", t2-t1);
  }

28 MPI_Wtime
  Convenient wall-clock timer for MPI codes.

29 MPI_Wtime (cont'd)
  Fortran:
  double precision t1, t2
  t1 = mpi_wtime() ! start clock
  ... perform computation here ...
  t2 = mpi_wtime() ! stop clock
  print*, 'wall-clock time = ', t2-t1
  C:
  double t1, t2;
  t1 = MPI_Wtime();  // start clock
  ... perform computation here ...
  t2 = MPI_Wtime();  // stop clock
  printf("wall-clock time = %5.3f\n", t2-t1);

30 omp_get_wtime
  Convenient wall-clock timer for OpenMP codes.
  Resolution is available by calling omp_get_wtick().

31 omp_get_wtime (cont'd)
  Fortran:
  double precision t1, t2, omp_get_wtime
  t1 = omp_get_wtime() ! start clock
  ... perform computation here ...
  t2 = omp_get_wtime() ! stop clock
  print*, 'wall-clock time = ', t2-t1
  C:
  double t1, t2;
  t1 = omp_get_wtime();  // start clock
  ... perform computation here ...
  t2 = omp_get_wtime();  // stop clock
  printf("wall-clock time = %5.3f\n", t2-t1);

32 Timer Summary
           CPU       Wall
  Fortran  cpu_time  system_clock
  C        times     gettimeofday
  MPI                MPI_Wtime
  OpenMP             omp_get_wtime

33 Exercise 2
  Put a wall-clock timer around each level in the example code.
  Print the time for each level.
  Compile and run.

34 PROFILING

35 Profilers
  A profile tells you how much time is spent in each routine.
  It gives a level of granularity not available with the previous timers; e.g., a function may be called from many places.
  Various profilers are available, e.g.:
    gprof (GNU) -- function-level profiling
    pgprof (Portland Group) -- function- and line-level profiling

36 gprof
  Compile with -pg.
  When you run the executable, a file gmon.out will be created.
  gprof executable > myprof
  This processes gmon.out into myprof.
  For multiple processes (MPI), copy or link gmon.out.n to gmon.out, then run gprof.

37 gprof (cont'd)
  Flat profile. ngranularity: Each sample hit covers 4 bytes.
  Columns: % time, cumulative seconds, self seconds, calls, self ms/call, total ms/call, name.
  Functions shown include: conduct [5], getxyz [8], __mcount [9], btri [10], kickpipes [12], rmnmod [16], getq [24].

38 gprof (3)
  Call-graph profile. ngranularity: Each sample hit covers 4 bytes.
  Columns: index, % time, self, descendents, called+self (called/total parents and children), name.
  Entries shown include: __start [2], main [1], contrl [3], force [34], initia [40], plot3da [49], data [73].

39 pgprof
  Compile with a Portland Group compiler (pgf90, pgf95, pgcc, etc.):
  pgcc -Mprof=func  (similar to -pg)
  Run the code, then:
  pgprof -exe executable
  This pops up a window with the flat profile.

40 pgprof (cont'd)

41 pgprof (3)
  To save profile data to a file:
    re-run pgprof using the -text flag
    at the command prompt, type p > filename (filename is the name you want to give the profile file)
    type quit to get out of the profiler
  Close pgprof as soon as you're through; leaving the window open ties up a license (only a few are available).

42 Line-Level Profiling
  Times individual lines.
  For pgprof, compile with the flag -Mprof=line.
  The optimizer will re-order lines, so the profiler will lump lines in some loops or other constructs; you may want to compile without optimization, or you may not.
  In the flat profile, double-click on a function to get line-level data.

43 Line-Level Profiling (cont'd)

44 CACHE

45 Cache
  Cache is a small chunk of fast memory between the main memory and the registers.
  Hierarchy: registers <- primary cache <- secondary cache <- main memory.

46 Cache (cont'd)
  If variables are used repeatedly, code will run faster, since cache memory is much faster than main memory.
  Variables are moved from main memory to cache in lines.
  L1 cache line sizes on our machines:
    Opteron (katana cluster): 64 bytes
    Xeon (katana cluster): 64 bytes
    Power4 (p-series): 128 bytes
    PPC440 (Blue Gene): 32 bytes
    Pentium III (linux cluster): 32 bytes

47 Cache (3)
  Why not just make the main memory out of the same stuff as cache?
    Expensive
    Runs hot
  This was actually done in Cray computers, which used a liquid cooling system.

48 Cache (4)
  Cache hit: the required variable is in cache.
  Cache miss: the required variable is not in cache.
  If the cache is full, something else must be thrown out (sent back to main memory) to make room.
  We want to minimize the number of cache misses.

49 Cache (5)
  for(i=0; i<10; i++) x[i] = i;
  Main memory holds x[0] ... x[9]; a mini cache holds 2 lines (a and b) of 4 words each.

50 Cache (6)
  (We will ignore i for simplicity.)
  Need x[0]; not in cache: cache miss. Load the line x[0]..x[3] from memory into cache.
  The next 3 loop indices result in cache hits.

51 Cache (7)
  Need x[4]; not in cache: cache miss. Load the line x[4]..x[7] from memory into cache.
  The next 3 loop indices result in cache hits.

52 Cache (8)
  Need x[8]; not in cache: cache miss. Load the line containing x[8] and x[9] from memory into cache.
  No room in cache! Replace the old line x[0]..x[3].

53 Cache (9)
  Contiguous access is important.
  In C, a multidimensional array is stored in memory as a[0][0] a[0][1] a[0][2] ...

54 Cache (10)
  In Fortran and Matlab, a multidimensional array is stored the opposite way: a(1,1) a(2,1) a(3,1) ...

55 Cache (11)
  Rule: always order your loops so that array access is contiguous.
  This will usually be taken care of by the optimizer; suggestion: don't rely on the optimizer.
  Bad (C):
  for(j=0; j<n; j++)
      for(i=0; i<n; i++)
          a[i][j] = ...
  Good (C):
  for(i=0; i<n; i++)
      for(j=0; j<n; j++)
          a[i][j] = ...

56 TUNING TIPS

57 Tuning Tips
  Some of these tips will be taken care of by compiler optimization, but it's best to do them yourself, since compilers vary.
  Two important rules:
    minimize the number of operations
    access cache contiguously

58 Tuning Tips (cont'd)
  Access arrays in contiguous order. For multi-dimensional arrays, the rightmost index varies fastest for C and C++, the leftmost for Fortran and Matlab.
  Bad (C):
  for(j=0; j<n; j++)
      for(i=0; i<n; i++)
          a[i][j] = b[i][j] + c[i][j];
  Good (C):
  for(i=0; i<n; i++)
      for(j=0; j<n; j++)
          a[i][j] = b[i][j] + c[i][j];

59 Tuning Tips (3)
  Eliminate redundant operations in loops; hoist loop-invariant work out of the loop.
  Bad:
  for(i=0; i<n; i++)
      x[i] = c*sin(theta)*a[i];
  Good:
  tmp = c*sin(theta);
  for(i=0; i<n; i++)
      x[i] = tmp*a[i];

60 Tuning Tips (4)
  Eliminate or minimize if statements within loops; an if may inhibit pipelining.
  Bad:
  for(i=0; i<n; i++){
      if(i == 0)
          perform i=0 calculations
      else
          perform i>0 calculations
  }
  Good:
  perform i=0 calculations
  for(i=1; i<n; i++){
      perform i>0 calculations
  }

61 Tuning Tips (5)
  Divides are expensive. Intel x86 clock cycles per operation:
    add: 3-6
    multiply: 4-8
    divide: considerably more
  Bad:
  for(i=0; i<n; i++)
      x[i] = y[i]/c;
  Good:
  tmp = 1.0/c;
  for(i=0; i<n; i++)
      x[i] = y[i]*tmp;

62 Tuning Tips (6)
  There is overhead associated with a function call.
  Bad:
  for(i=0; i<n; i++)
      myfunc(i);
  Good:
  myfunc(n);   /* move the loop inside the function */

63 Tuning Tips (7)
  Minimize calls to math functions.
  Bad:
  for(i=0; i<n; i++)
      x[i] = pow(y[i], 2.0);
  Good:
  for(i=0; i<n; i++)
      x[i] = y[i]*y[i];

64 Tuning Tips (8)
  Recasting may be costlier than you think.
  Bad:
  sum = 0.0;
  for(i=0; i<n; i++)
      sum += (float) iarray[i];
  Good:
  isum = 0;
  for(i=0; i<n; i++)
      isum += iarray[i];
  sum = (float) isum;

65 Exercise 3 (not in class)
  The example code provided is written in a clear, readable style that also happens to violate lots of the tuning tips we have just reviewed.
  Examine the line-level profile. Which lines are using the most time? Is there anything we might be able to do to make them run faster?
  We will discuss options as a group:
    come up with a strategy
    modify the code
    re-compile and run
    compare timings
  Then re-examine the line-level profile, come up with another strategy, repeat the procedure, etc.

66 Speedup Ratio and Parallel Efficiency
  S is the ratio of T_1 over T_N, the elapsed times with 1 and N workers; f is the fraction of T_1 due to code sections that are not parallelizable:
  S = 1 / (f + (1 - f)/N)
  Amdahl's Law above states that a code whose parallelizable component comprises 90% of the total computation time can at best achieve a 10x speedup with lots of workers. A code that is 50% parallelizable speeds up at most two-fold with lots of workers.
  The parallel efficiency is E = S / N. A program that scales linearly (S = N) has parallel efficiency 1.
  A task-parallel program is usually more efficient than a data-parallel program.
  Parallel codes can sometimes achieve super-linear behavior due to efficient cache usage per worker.

67 Example of Speedup Ratio & Parallel Efficiency

