
1 Using Criticality to Attack Performance Bottlenecks Brian Fields UC-Berkeley (Collaborators: Rastislav Bodik, Mark Hill, Chris Newburn)

2 Bottleneck Analysis Bottleneck Analysis: determining the performance effect of an event on execution time. An event could be: an instruction’s execution, an instruction-window-full stall, a branch mispredict, a network request, inter-processor communication, etc.

3 Why is Bottleneck Analysis Important?

4 Bottleneck Analysis Applications Run-time Optimization: Resource arbitration, e.g., how to schedule memory accesses? Effective speculation, e.g., which branches to predicate? Dynamic reconfiguration, e.g., when to enable hyperthreading? Energy efficiency, e.g., when to throttle frequency? Design Decisions: Overcoming technology constraints, e.g., how to mitigate the effect of long wire latencies? Programmer Performance Tuning: Where have the cycles gone? e.g., which cache misses should be prefetched?

5 Why is Bottleneck Analysis Hard?

6 Current state of the art Event counts: Exe. time = (CPU cycles + Mem. cycles) * Clock cycle time, where Mem. cycles = Number of cache misses * Miss penalty. [Figure: two overlapping 100-cycle misses on a timeline: 2 misses, but only 1 miss penalty]
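To make the slide’s point concrete, here is a minimal sketch (ours, not from the talk) contrasting the additive event-count model with the actual time when miss intervals overlap; all numbers are illustrative.

```python
# Minimal sketch: why additive event counts mislead. Two 100-cycle misses
# that overlap contribute only ~one miss penalty to execution time, but an
# event-count model charges both.

MISS_PENALTY = 100

def additive_model(cpu_cycles, n_misses):
    # Exe. time = CPU cycles + Mem. cycles; Mem. cycles = misses * penalty
    return cpu_cycles + n_misses * MISS_PENALTY

def overlapped_time(cpu_cycles, miss_intervals):
    # Actual time: overlapping miss intervals are only paid once.
    merged = []
    for start, end in sorted(miss_intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)   # extend overlapping stall
        else:
            merged.append([start, end])
    stall = sum(end - start for start, end in merged)
    return cpu_cycles + stall

print(additive_model(50, 2))                        # 250: charges both misses
print(overlapped_time(50, [(0, 100), (10, 110)]))   # 160: the misses overlap
```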

7 Parallelism in systems complicates performance understanding Parallelism: a branch mispredict and a full-store-buffer stall occur in the same cycle that three loads are waiting on the memory system and two floating-point multiplies are executing. Two parallel cache misses. Two parallel threads.

8 Criticality Challenges Cost How much speedup possible from optimizing an event? Slack How much can an event be “slowed down” before increasing execution time? Interactions When do multiple events need to be optimized simultaneously? When do we have a choice? Exploit in Hardware

9 Our Approach

10 Our Approach: Criticality Critical events affect execution time, non-critical do not Bottleneck Analysis: Determining the performance effect of an event on execution time

11 Defining criticality Need performance sensitivity: slowing down a “critical” event should slow down the entire program; speeding up a “noncritical” event should leave execution time unchanged.

12 Standard Waterfall Diagram [Figure: waterfall diagram over cycles 1-15 showing fetch (F), execute (E), and commit (C) stages for the instruction sequence R5 = 0; R3 = 0; R1 = #array + R3; R6 = ld[R1]; R3 = R3 + 1; R5 = R6 + R5; cmp R6, 0; bf L1; R5 = R5 + 100; R0 = R5; Ret R0]

13 Annotated with Dependence Edges [Figure: the same waterfall diagram, now annotated with dependence edges, including a branch misprediction (MISP) edge]

14 Annotated with Dependence Edges [Figure: the waterfall diagram annotated with the full set of dependence edge types: fetch BW, ROB, data dependence, branch misprediction]

15 Edge Weights Added [Figure: the annotated waterfall with a latency weight on each dependence edge]

16 Convert to Graph [Figure: the waterfall redrawn as a dependence graph: one F, E, and C node per instruction, connected by the weighted dependence edges]

17 Convert to Graph [Figure: the same dependence graph, repeated as an animation step]
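A sketch of how the F/E/C dependence-graph model from these slides might be encoded; the instruction count, latencies, and window size below are illustrative, not the slides’ exact example.

```python
from collections import defaultdict

class DepGraph:
    """One F (fetch), E (execute), C (commit) node per dynamic instruction,
    connected by weighted dependence edges as in the slides."""
    def __init__(self):
        self.succ = defaultdict(list)        # node -> [(successor, latency)]

    def edge(self, src, dst, weight):
        self.succ[src].append((dst, weight))

g = DepGraph()
n, W = 4, 2                                  # 4 instructions, 2-entry window
for i in range(n):
    g.edge(('F', i), ('E', i), 1)            # fetch feeds execute
    g.edge(('E', i), ('C', i), 1)            # execute feeds commit
    if i > 0:
        g.edge(('F', i - 1), ('F', i), 1)    # in-order fetch (fetch BW)
        g.edge(('C', i - 1), ('C', i), 1)    # in-order commit
for i in range(W, n):
    g.edge(('C', i - W), ('F', i), 0)        # ROB/window: fetch waits on commit
g.edge(('E', 1), ('E', 2), 2)                # a data dependence (2-cycle producer)
# a mispredicted branch at i would add g.edge(('E', i), ('F', i + 1), 1)
```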

18 Smaller graph instance [Figure: a small five-instruction F/E/C graph; one edge is a critical icache miss (but how costly?), while a parallel path is non-critical (but how much slack?)]

19 Add “hidden” constraints [Figure: the same graph with additional “hidden” constraint edges added; the icache miss is still critical (but how costly?), the other path still non-critical (but how much slack?)]

20 Add “hidden” constraints [Figure: same graph] Cost = 13 – 7 = 6 cycles. Slack = 13 – 7 = 6 cycles.

21 Slack “sharing” [Figure: same graph] Slack = 6 cycles. Can delay one edge by 6 cycles, but not both!
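The cost and slack arithmetic on these slides is ordinary longest-path analysis on the dependence DAG. A minimal sketch in edge-list form, assuming nodes are listed in topological order and the edge list follows that order:

```python
def longest_paths(nodes, edges):
    # nodes: topologically ordered; edges: (src, dst, weight), topo-ordered
    earliest = {v: 0 for v in nodes}          # longest path from source to v
    for src, dst, w in edges:
        earliest[dst] = max(earliest[dst], earliest[src] + w)
    total = max(earliest.values())            # critical-path length
    latest = {v: total for v in nodes}        # latest start without delaying end
    for src, dst, w in reversed(edges):
        latest[src] = min(latest[src], latest[dst] - w)
    return earliest, latest, total

def edge_slack(earliest, latest, src, dst, w):
    # cycles this edge can stretch before the program gets longer;
    # note slack is SHARED: two edges with slack 6 can't both take it
    return latest[dst] - earliest[src] - w

# Tiny demo: diamond DAG where the lower path has slack.
nodes = ['s', 'a', 'b', 't']
edges = [('s', 'a', 10), ('s', 'b', 3), ('a', 't', 10), ('b', 't', 4)]
earliest, latest, total = longest_paths(nodes, edges)
print(total)                                      # 20 (critical path s-a-t)
print(edge_slack(earliest, latest, 's', 'b', 3))  # 13, shared with b->t
```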

22 Machine Imbalance [Plot: cumulative distributions of apportioned and global slack] ~80% of insts have at least 5 cycles of apportioned slack.

23 Criticality Challenges Cost How much speedup possible from optimizing an event? Slack How much can an event be “slowed down” before increasing execution time? Interactions When do multiple events need to be optimized simultaneously? When do we have a choice? Exploit in Hardware

24 Simple criticality not always enough Sometimes events have nearly equal criticality: miss #1 (99 cycles) and miss #2 (100 cycles) in parallel. Want to know: how critical is each event? how far from critical is each event? Actually, even that is not enough.

25 Our solution: measure interactions Two parallel cache misses: miss #1 (99 cycles) and miss #2 (100 cycles). Cost(miss #1) = 0, Cost(miss #2) = 1, Cost({miss #1, miss #2}) = 100. Aggregate cost > sum of individual costs ⇒ parallel interaction. icost = aggregate cost – sum of individual costs = 100 – 0 – 1 = 99

26 Interaction cost (icost) icost = aggregate cost – sum of individual costs. 1. Positive icost ⇒ parallel interaction (two overlapping misses). 2. Zero icost ?

27 Interaction cost (icost) icost = aggregate cost – sum of individual costs. 1. Positive icost ⇒ parallel interaction (two overlapping misses). 2. Zero icost ⇒ independent (misses far apart). 3. Negative icost ?

28 Negative icost Two serial cache misses (data dependent), miss #1 (100 cycles) then miss #2 (100 cycles), with a parallel ALU latency chain (110 cycles). Cost(miss #1) = ?

29 Negative icost Two serial cache misses (data dependent), miss #1 (100 cycles) then miss #2 (100 cycles), with a parallel ALU latency chain (110 cycles). Cost(miss #1) = 90, Cost(miss #2) = 90, Cost({miss #1, miss #2}) = 90. icost = aggregate cost – sum of individual costs = 90 – 90 – 90 = –90. Negative icost ⇒ serial interaction.

30 Interaction cost (icost) icost = aggregate cost – sum of individual costs. 1. Positive icost ⇒ parallel interaction (two overlapping misses). 2. Zero icost ⇒ independent (misses far apart). 3. Negative icost ⇒ serial interaction (e.g., two misses bridged by an ALU latency chain). Other sources of serial interactions: branch mispredict, fetch BW, load-replay trap, LSQ stall.
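A small sketch of the icost definition, treating the timing model as a black box; the two model functions below simply encode the slides’ parallel and serial examples (latencies 100, 100, and a 110-cycle ALU chain).

```python
# icost from a black-box timing model: time(zeroed) is execution time when
# the events in `zeroed` are made free; cost(S) = time({}) - time(S).

def icost(time, e1, e2):
    cost = lambda *events: time(frozenset()) - time(frozenset(events))
    return cost(e1, e2) - cost(e1) - cost(e2)

# Two parallel 100-cycle misses: time stays 100 unless BOTH are removed.
def parallel(zeroed):
    return 0 if {'m1', 'm2'} <= zeroed else 100

# Two dependent misses bridged by a 110-cycle ALU chain:
# time = max(m1 + m2, 110); removing either miss leaves max(100, 110) = 110.
def serial(zeroed):
    lat = lambda m: 0 if m in zeroed else 100
    return max(lat('m1') + lat('m2'), 110)

print(icost(parallel, 'm1', 'm2'))  # +100: parallel interaction
print(icost(serial, 'm1', 'm2'))    # -90: serial interaction
```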

31 Why care about serial interactions? (Two serial misses, 100 cycles each, bridged by a 110-cycle ALU chain.) Reason #1: We are over-optimizing! Prefetching miss #2 doesn’t help if miss #1 is already prefetched (but the overhead still costs us). Reason #2: We have a choice of what to optimize. Prefetching miss #2 has the same effect as prefetching miss #1.

32 Icost Case Study: Deep pipelines Looking for serial interactions! [Figure: pipeline with a multi-cycle Dcache (DL1) access]

33-36 Icost Breakdown (6 wide, 64-entry window) [Table built up across four slides; final values, as % of execution time:]

              gcc      gzip     vortex
DL1           18.3 %   30.5 %   25.8 %
DL1+window    -4.2     -15.3    -24.5
DL1+bw        10.0      6.0      15.5
DL1+bmisp     -7.0     -3.4     -0.3
DL1+dmiss     -1.4     -0.4     -1.4
DL1+alu       -1.6     -8.2     -4.7
DL1+imiss      0.1      0.0      0.4
...
Total        100.0    100.0    100.0

37-42 Icost Case Study: Deep pipelines [Figure, stepped through over six slides: an F/E/C dependence graph for instructions i1-i6, highlighting the 4-cycle DL1 access edges and the window edge]

43 Criticality Challenges Cost How much speedup possible from optimizing an event? Slack How much can an event be “slowed down” before increasing execution time? Interactions When do multiple events need to be optimized simultaneously? When do we have a choice? Exploit in Hardware

44 Criticality Analyzer: online, fast feedback; limited to critical/not-critical. Replacement for Performance Counters: requires offline analysis; constructs entire graph.

45 Only last-arriving edges can be critical Observation: for R1 = R2 + R3, if the dependence into R2 is on the critical path, then the value of R2 arrived last. critical ⇒ arrives last, but arrives last does not imply critical. A dependence resolved early cannot be critical.

46 Determining last-arrive edges Observe events within the machine: last_arrive[F] = E→F if branch misp., C→F if ROB stall, F→F otherwise. last_arrive[E] = F→E if data ready on fetch, E→E otherwise (observe arrival order of operands). last_arrive[C] = E→C if commit pointer is delayed, C→C otherwise.
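In code form, the slide’s rules might look like the following sketch; the observation inputs (branch_mispredicted, rob_stalled, and so on) are our names for signals the core already has.

```python
def last_arrive_F(branch_mispredicted, rob_stalled):
    # which edge was last to constrain this instruction's fetch
    if branch_mispredicted:
        return 'E->F'    # redirected by the mispredicted branch's execute
    if rob_stalled:
        return 'C->F'    # fetch waited for window space (a commit)
    return 'F->F'        # otherwise, in-order fetch bandwidth

def last_arrive_E(data_ready_on_fetch, last_operand_producer):
    # last_operand_producer: E node of whichever operand arrived last
    if data_ready_on_fetch:
        return ('F->E', None)          # no operand arrived after fetch
    return ('E->E', last_operand_producer)

def last_arrive_C(commit_pointer_delayed):
    # commit waited on this instruction's own execute, or on program order
    return 'E->C' if commit_pointer_delayed else 'C->C'
```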

47 Last-arrive edges The last-arrive rule ⇒ the CP consists only of “last-arrive” edges. [Figure: F/E/C graph showing only last-arrive edges]

48 Prune the graph Only need to put last-arrive edges in the graph; no other edges could be on the CP. [Figure: pruned graph, newest instruction at the right]

49 …and we’ve found the critical path! Backward propagate along last-arrive edges from the newest node. Found the CP by observing only last-arrive edges; but this still requires constructing the entire graph.

50 Step 2. Reducing storage reqs The CP is a “long” chain of last-arrive edges ⇒ the longer a given chain of last-arrive edges, the more likely it is part of the CP. Algorithm: find sufficiently long last-arrive chains. 1. Plant token into a node n. 2. Propagate forward, only along last-arrive edges. 3. Check for token after several hundred cycles. 4. If token alive, n is assumed critical.
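A software sketch of this algorithm (the hardware version is a small bit array indexed by ROB entry; the tolerance parameter below is our simplification of “check for token after several hundred cycles”).

```python
def is_critical(last_arrive_edges, start_node, check_after, tolerance=8):
    # last_arrive_edges: time-ordered (cycle, src, dst) tuples observed by
    # the core, where src/dst are (stage, inst) nodes, e.g. ('E', 41).
    holders = {start_node}     # nodes currently holding the token
    last_hop = 0               # cycle of the most recent token movement
    for cycle, src, dst in last_arrive_edges:
        if cycle > check_after:
            break
        if src in holders:
            holders.add(dst)   # token rides a last-arrive edge forward
            last_hop = cycle
    # If nothing extended the chain recently, the token has "died" and the
    # planted node is assumed non-critical.
    return last_hop >= check_after - tolerance
```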

51 Online Criticality Detection [Figure: a token is planted at a node and forward-propagated along last-arrive edges toward the newest instructions]

52 Online Criticality Detection [Figure: same propagation; tokens “die” when no last-arrive edge carries them forward]

53 Online Criticality Detection [Figure: same propagation; this token survives ⇒ the planted node is deemed critical]

54 Putting it all together [Diagram: on the training path, the OOO core feeds last-arrive edges (producer → retired instr) to the token-passing analyzer, which trains a PC-indexed CP prediction table; on the prediction path, the table answers “E-critical?” queries]

55 Results Performance (speed): scheduling in clustered machines: 10% speedup; selective value prediction; deferred scheduling (Crowe, et al.): 11% speedup; heterogeneous cache (Rakvic, et al.): 17% speedup. Energy: non-uniform machine (fast and slow pipelines): ~25% less energy; instruction queue resizing (Sasanka, et al.); multiple frequency scaling (Semeraro, et al.): 19% less energy with 3% less performance; selective pre-execution (Petric, et al.).

56 Exploit in Hardware Criticality Analyzer: online, fast feedback; limited to critical/not-critical. Replacement for Performance Counters: requires offline analysis; constructs entire graph.

57 Profiling goal Goal: construct the graph over many dynamic instructions. Constraint: can only sample sparsely.

58 Profiling goal Goal: construct the graph. Constraint: can only sample sparsely. [Figure: analogy to genome sequencing of a DNA strand]

59-62 “Shotgun” genome sequencing [Figure, over four slides: a DNA strand is sampled as many short random fragments; overlaps among the samples are found to reassemble the whole strand]

63 Mapping “shotgun” to our situation [Figure: a long stream of dynamic instructions sampled in fragments, each annotated with events: icache miss, dcache miss, branch misp., no event]

64 Profiler hardware requirements [Figure: sampled graph fragments]

65 Profiler hardware requirements [Figure: two sampled fragments whose contexts overlap: match!]

66 Sources of error

Error Source                          Gcc      Parser   Twolf
Modeling execution as a graph         2.1 %    6.0 %    0.1 %
Errors in graph construction          5.3 %    1.5 %    1.6 %
Sampling only a few graph fragments   4.8 %    6.5 %    7.2 %
Total                                12.2 %   14.0 %    8.9 %

67 Conclusion: Grand Challenges Cost: how much speedup is possible from optimizing an event? Slack: how much can an event be “slowed down” before increasing execution time? Interactions: when do multiple events need to be optimized simultaneously? When do we have a choice? Contributions: modeling; token-passing analyzer; parallel interactions; serial interactions; shotgun profiling.

68 Conclusion: Bottleneck Analysis Applications Run-time Optimization: effective speculation, resource arbitration, dynamic reconfiguration, energy efficiency. Design Decisions: overcoming technology constraints. Programmer Performance Tuning: where have the cycles gone? Demonstrated: selective value prediction; scheduling and steering in clustered processors; resizing the instruction window; non-uniform machines; coping with a high-latency dcache; measuring the cost of cache misses/branch mispredicts.

69 Outline Simple Criticality Definition (ISCA ’01) Detection (ISCA ’01) Application (ISCA ’01-’02) Advanced Criticality Interpretation (MICRO ’03) What types of interactions are possible? Hardware Support (MICRO ’03, TACO ’04) Enhancement to performance counters

70 Simple criticality not always enough Sometimes events have nearly equal criticality: miss #1 (99 cycles) and miss #2 (100 cycles) in parallel. Want to know: how critical is each event? how far from critical is each event? Actually, even that is not enough.

71 Our solution: measure interactions Two parallel cache misses: miss #1 (99 cycles) and miss #2 (100 cycles). Cost(miss #1) = 0, Cost(miss #2) = 1, Cost({miss #1, miss #2}) = 100. Aggregate cost > sum of individual costs ⇒ parallel interaction. icost = aggregate cost – sum of individual costs = 100 – 0 – 1 = 99

72 Interaction cost (icost) icost = aggregate cost – sum of individual costs. 1. Positive icost ⇒ parallel interaction (two overlapping misses). 2. Zero icost ?

73 Interaction cost (icost) icost = aggregate cost – sum of individual costs. 1. Positive icost ⇒ parallel interaction (two overlapping misses). 2. Zero icost ⇒ independent (misses far apart). 3. Negative icost ?

74 Negative icost Two serial cache misses (data dependent), miss #1 (100 cycles) then miss #2 (100 cycles), with a parallel ALU latency chain (110 cycles). Cost(miss #1) = ?

75 Negative icost Two serial cache misses (data dependent), miss #1 (100 cycles) then miss #2 (100 cycles), with a parallel ALU latency chain (110 cycles). Cost(miss #1) = 90, Cost(miss #2) = 90, Cost({miss #1, miss #2}) = 90. icost = aggregate cost – sum of individual costs = 90 – 90 – 90 = –90. Negative icost ⇒ serial interaction.

76 Interaction cost (icost) icost = aggregate cost – sum of individual costs. 1. Positive icost ⇒ parallel interaction (two overlapping misses). 2. Zero icost ⇒ independent (misses far apart). 3. Negative icost ⇒ serial interaction (e.g., two misses bridged by an ALU latency chain). Other sources of serial interactions: branch mispredict, fetch BW, load-replay trap, LSQ stall.

77 Why care about serial interactions? (Two serial misses, 100 cycles each, bridged by a 110-cycle ALU chain.) Reason #1: We are over-optimizing! Prefetching miss #2 doesn’t help if miss #1 is already prefetched (but the overhead still costs us). Reason #2: We have a choice of what to optimize. Prefetching miss #2 has the same effect as prefetching miss #1.

78 Outline Simple Criticality Definition (ISCA ’01) Detection (ISCA ’01) Application (ISCA ’01-’02) Advanced Criticality Interpretation (MICRO ’03) What types of interactions are possible? Hardware Support (MICRO ’03, TACO ’04) Enhancement to performance counters

79 Profiling goal Goal: construct the graph over many dynamic instructions. Constraint: can only sample sparsely.

80 Profiling goal Goal: construct the graph. Constraint: can only sample sparsely. [Figure: analogy to genome sequencing of a DNA strand]

81-84 “Shotgun” genome sequencing [Figure, over four slides: a DNA strand is sampled as many short random fragments; overlaps among the samples are found to reassemble the whole strand]

85 Mapping “shotgun” to our situation [Figure: a long stream of dynamic instructions sampled in fragments, each annotated with events: icache miss, dcache miss, branch misp., no event]

86 Profiler hardware requirements [Figure: sampled graph fragments]

87 Profiler hardware requirements [Figure: two sampled fragments whose contexts overlap: match!]

88-91 Sources of error [Table built up across four slides; final values:]

Error Source                          Gcc      Parser   Twolf
Modeling execution as a graph         2.1 %    6.0 %    0.1 %
Errors in graph construction          5.3 %    1.5 %    1.6 %
Sampling only a few graph fragments   4.8 %    6.5 %    7.2 %
Total                                12.2 %   14.0 %    8.9 %

92 Conclusion: Bottleneck Analysis Applications Run-time Optimization: effective speculation, resource arbitration, dynamic reconfiguration, energy efficiency. Design Decisions: overcoming technology constraints. Programmer Performance Tuning: where have the cycles gone? Demonstrated: selective value prediction; scheduling and steering in clustered processors; resizing the instruction window; non-uniform machines; coping with a high-latency dcache; measuring the cost of cache misses/branch mispredicts.

93 Conclusion: Grand Challenges Cost: how much speedup is possible from optimizing an event? Slack: how much can an event be “slowed down” before increasing execution time? Interactions: when do multiple events need to be optimized simultaneously? When do we have a choice? Contributions: modeling; token-passing analyzer; parallel interactions; serial interactions; shotgun profiling.

94 Backup Slides

95 Related Work

96 Criticality Prior Work Critical-Path Method, PERT charts: developed for the Navy’s “Polaris” project, 1957; used as a project management tool; simple critical-path and slack concepts. “Attribution” heuristics: Rosenblum et al.: SOSP-1995, and many others; mark the instruction at the head of the ROB as critical, etc.; empirically, limited accuracy; do not account for interactions between events.

97 Related Work: Microprocessor Criticality Latency tolerance analysis Srinivasan and Lebeck: MICRO-1998 Heuristics-driven criticality predictors Tune et al.: HPCA-2001 Srinivasan et al.: ISCA-2001 “Local” slack detector Casmira and Grunwald: Kool Chips Workshop-2000 ProfileMe with pair-wise sampling Dean, et al.: MICRO-1997

98 Unresolved Issues

99 Alternative I: Addressing Unresolved Issues Modeling and Measurement: What resources can we model effectively? (difficulty with mutual-exclusion-type resources, e.g. ALUs); efficient algorithms; release a tool for measuring cost/slack. Hardware: detailed design for the criticality analyzer; shotgun profiler simplifications (gradual path from counters). Optimization: explore heuristics for exploiting interactions.

100 Alternative II: Chip-Multiprocessors Design Decisions Should each core support out-of-order execution? Should SMT be supported? How many processors are useful? What is the effect of inter-processor latency? Programmer Performance Tuning Parallelizing applications What makes a good division into threads? How can we find them automatically, or at least help programmers to find them?

101 Unresolved issues Modeling and Measurement: What resources can we model effectively? Difficulty with mutual-exclusion-type resources (ALUs); in other words, unanticipated side effects. [Figure: a four-instruction example: 1. ld r2, [Mem]; 2. add r3 ← r2 + 1; 3. ld r4, [Mem]; 4. add r6 ← r4 + 1. In the original execution, inst #3’s load misses in the cache and there is no adder contention. In the altered execution used to compute the cost of inst #3’s cache miss, the miss becomes a hit, adder contention appears, and the resulting contention edge yields an incorrect critical path]

102 Unresolved issues Modeling and Measurement (cont.): How should processor policies be modeled? (relationship to the icost definition) Efficient algorithms for measuring icosts: pairs of events, etc. Release a tool for measuring cost/slack.

103 Unresolved issues Hardware: detailed design for the criticality analyzer (help convince industry types to build it); shotgun profiler simplifications (gradual path from counters). Optimization: explore icost optimization heuristics (icosts are difficult to interpret).

104 Validation

105 Validation: can we trust our model? Run two simulations: reduce CP latencies ⇒ expect a “big” speedup; reduce non-CP latencies ⇒ expect no speedup.

106 Validation: can we trust our model?

107 Validation Two steps: 1. Increase the latencies of insts. by their apportioned slack, for three apportioning strategies: (1) latency+1; (2) 5 cycles to as many instructions as possible; (3) 12 cycles to as many loads as possible. 2. Compare to baseline (no delays inserted).

108 Validation Worst case: Inaccuracy of 0.6%

109 Slack Measurements

110 Three slack variants Local slack: # cycles latency can be increased without delaying any subsequent instructions Global slack: # cycles latency can be increased without delaying the last instruction in the program Apportioned slack: Distribute global slack among instructions using an apportioning strategy
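In terms of the longest-path times from the earlier sketch (earliest/latest per node, with succ mapping a node to its weighted successors as in the DepGraph sketch), the first two variants are one-liners; apportioned slack is a policy rather than a formula.

```python
def local_slack(earliest, succ, v):
    # cycles v can slip without delaying ANY immediate successor
    return min(earliest[d] - (earliest[v] + w) for d, w in succ[v])

def global_slack(earliest, latest, v):
    # cycles v can slip without delaying the last instruction
    return latest[v] - earliest[v]

# Apportioned slack: global slack is shared along a path, so an
# apportioning strategy divides it among instructions, e.g. giving
# ALL of it to loads (slide 115) or latency+1 to each inst (slide 117).
```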

111 Slack measurements [Plot: local slack distribution] ~21% of insts have at least 5 cycles of local slack.

112 Slack measurements [Plot: local and global slack distributions] ~90% of insts have at least 5 cycles of global slack.

113 Slack measurements [Plot: local, apportioned, and global slack distributions] ~80% of insts have at least 5 cycles of apportioned slack. A large amount of exploitable slack exists.

114 Application-centered Slack Measurements

115 Load slack Can we tolerate a long-latency L1 hit? Design: wire-constrained machine, e.g. Grid. Non-uniformity: multi-latency L1. Apportioning strategy: apportion ALL slack to load instructions.

116 Apportion all slack to loads Most loads can tolerate an L2 cache hit

117 Multi-speed ALUs Can we tolerate ALUs running at half frequency? Design: fast/slow ALUs. Non-uniformity: multi-latency execution, bypass. Apportioning strategy: give slack equal to original latency + 1.

118 Latency+1 apportioning Most instructions can tolerate doubling their latency

119 Slack Locality and Prediction

120 Predicting slack Two steps to PC-indexed, history-based prediction: 1. Measure slack of a dynamic instruction 2. Store in array indexed by PC of static instruction Two requirements: 1. Locality of slack 2. Ability to measure slack of a dynamic instruction

121 Locality of slack

122 [Plot: locality of slack]

123 PC-indexed, history-based predictor can capture most of the available slack

124 Slack Detector Problem #1: iterating repeatedly over the same dynamic instruction. Solution: only sample each dynamic instruction once. Problem #2: determining if overall execution time increased. Solution: check if the delay made the instruction critical (“delay and observe”); effective for a hardware predictor.

125 Slack Detector Goal: determine whether an instruction has n cycles of slack (“delay and observe”). 1. Delay the instruction by n cycles. 2. Check if critical (via the critical-path analyzer). If no, the instruction has n cycles of slack; if yes, it does not.
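A sketch combining this detector with the PC-indexed, history-based table from slide 120; the table layout and update policy are our own guesses.

```python
class SlackPredictor:
    """PC-indexed slack table: a software sketch of the hardware structure."""
    def __init__(self):
        self.table = {}                    # static PC -> observed slack (cycles)

    def observe(self, pc, delayed_by, became_critical):
        # Delay-and-observe: one sampled dynamic instance was delayed by
        # `delayed_by` cycles; the criticality analyzer reports whether the
        # delay made it critical.
        if became_critical:
            self.table[pc] = 0             # conservative: no usable slack
        else:
            self.table[pc] = max(self.table.get(pc, 0), delayed_by)

    def predict(self, pc):
        return self.table.get(pc, 0)       # predicted slack for this static inst
```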

126 Slack Application

127 Fast/slow cluster microarchitecture [Diagram: Fetch + Rename steers instructions to a fast, 3-wide cluster or a slow, 3-wide cluster (each with its own window, registers, and ALUs), sharing the data cache over a bypass bus] Aggressive non-uniform design: higher execution latencies; increased (cross-domain) bypass latency; decreased effective issue bandwidth. P ∝ F²: the slow cluster saves ~37% core power.

128 Picking bins for the slack predictor Two decisions: 1. steer to fast/slow cluster; 2. schedule with high/low priority within a cluster. Use an implicit slack predictor with four bins: 1. steer to fast cluster + schedule with high priority; 2. steer to fast cluster + schedule with low priority; 3. steer to slow cluster + schedule with high priority; 4. steer to slow cluster + schedule with low priority.

129 Slack-based policies [Plot: 2 fast, high-power clusters vs. slack-based policy vs. reg-dep steering] 10% better performance from hiding non-uniformities.

130 CMP case study

131 Multithreaded Execution Case Study Two questions: How should a program be divided into threads? what makes a good cutpoint? how can we find them automatically, or at least help programmers find them? What should a multiple-core design look like? should each core support out-of-order execution? should SMT be supported? how many processors are useful? what is the effect of inter-processor latency?

132 Parallelizing an application Why parallelize a single-thread application? Legacy code, large code bases; difficult-to-parallelize apps; interpreted code, kernels of operating systems; we would like to use better programming languages: Scheme or Java instead of C/C++.

133 Parallelizing an application Simplifying assumption Program binary unchanged Simplified problem statement Given a program of length L, find a cutpoint that divides the program into two threads that provides maximum speedup Must consider: data dependences, execution latencies, control dependences, proper load balancing

134 Parallelizing an application Naive solution: try every possible cutpoint. Our solution: efficiently determine the effect of every possible cutpoint; model execution before and after every cut.

135 Solution [Figure: the program’s dependence graph from first instruction to last; a candidate cutpoint (“start”) splits it, and execution before and after the cut is modeled on the graph]
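A sketch of the cutpoint search under a simplified two-thread timing model; SYNC_OVERHEAD and the critical_path_of interface are hypothetical, not the talk’s exact formulation.

```python
# Score every cutpoint k of an n-instruction trace on the graph model.
# Splitting at k runs insts [0, k) and [k, n) as two threads; as a first
# approximation, the two-thread time is the longer of the two subgraphs'
# critical paths plus a fixed synchronization overhead.

SYNC_OVERHEAD = 10   # cycles; stands in for latency added at the cut

def best_cutpoint(n_insts, critical_path_of):
    # critical_path_of(lo, hi): critical-path length of insts [lo, hi).
    # With prefix/suffix longest-path arrays this scan is linear, which is
    # what makes evaluating every possible cutpoint efficient.
    best_k, best_time = None, float('inf')
    for k in range(1, n_insts):
        t = max(critical_path_of(0, k),
                critical_path_of(k, n_insts)) + SYNC_OVERHEAD
        if t < best_time:
            best_k, best_time = k, t
    return best_k, best_time
```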

136 Parallelizing an application Considerations: synchronization overhead (add latency to E→E edges); synchronization may involve turning E→E edges into E→F edges; scheduling of threads (additional C→F edges). Challenges: state behavior (one thread spread across multiple processors): caches, branch predictor; control behavior limits where cutpoints can be made.

137 Parallelizing an application More general problem: divide a program into N threads (NP-complete). Icost can help: icost(p1, p2) << 0 implies p1 and p2 are redundant; action: move p1 and p2 further apart.

138 Preliminary Results Experimental Setup Simulator, based loosely on SimpleScalar Alpha SpecInt binaries Procedure 1. Assume execution trace is known 2. Look at each 1k run of instructions 3. Test every possible cutpoint using 1k graphs

139 Dynamic Cutpoints Only 20% of cuts yield benefits of > 20 cycles

140 Usefulness of cost-based policy

141 Static Cutpoints Up to 60% of cuts yield benefits of > 20 cycles

142 Future Avenues of Research Map cutpoints back to actual code: compare automatically generated cutpoints to human-generated ones; see what the performance gains are in a simulator, as opposed to just on the graph. Look at the effect of synchronization operations: what additional overhead do they introduce? Deal with state and control problems: might need some technique outside of the graph.

143 Multithreaded Execution Case Study Two possible questions: How should a program be divided into threads? what makes a good cutpoint? how can we find them automatically, or at least help programmers find them? What should a multiple-core design look like? should each core support out-of-order execution? should SMT be supported? how many processors are useful? what is the effect of inter-processor latency?

144 CMP design study What we can do: try out many configurations quickly (dramatic changes in architecture are often only small changes in the graph); identify bottlenecks, especially interactions.

145 CMP design study: Out-of-orderness Is out-of-order execution necessary in a CMP? Procedure model execution with different configurations adjust CD edges compute breakdowns notice resource/events interacting with CD edges

146 CMP design study: Out-of-orderness [Figure: the whole-program dependence graph from slide 135, used to model the different configurations]

147 CMP design study: Out-of-orderness Results summary: single-core performance taps out at 256 entries; CMP performance gains continue up through 1024 entries (some benchmarks see gains up to 16k entries). Why more beneficial? Use breakdowns to find out…

148 CMP design study: Out-of-orderness Components of window cost cache misses holding up retirement? long strands of data dependencies? predictable control flow? Icost breakdowns give quantitative and qualitative answers

149 CMP design study: Out-of-orderness cost(window) + icost(window, A) + icost(window, B) + icost(window, AB) = 0 [Chart: window cost (100% to 0%) decomposed into components: ALU and cache misses independent; ALU/cache-miss parallel interaction; ALU/cache-miss serial interaction]

150 Summary of Preliminary Results icost(window, ALU operations) << 0: primarily communication between processors; the window is often stalled waiting for data. Implications: a larger window may be overkill; need a cheap non-blocking solution, e.g., continual-flow pipelines.

151 CMP design study: SMT? Benefits reduced thread start-up latency reduced communication costs How we could help distribution of thread lengths breakdowns to understand effect of communication

152 CMP design study: How many processors? [Figure: a program split into threads #1 and #2 across processors, with the thread start points marked]

153 CMP design study: Other Questions What is the effect of inter-processor communication latency? understand hidden vs. exposed communication Allocating processors to programs methodology for O/S to better assign programs to processors

154 Waterfall To Graph Story

155 Standard Waterfall Diagram [Figure: waterfall diagram over cycles 1-15 showing F/E/C stages for the 11-instruction loop-body sequence from slide 12]

156 Annotated with Dependence Edges [Figure: the waterfall annotated with dependence edges]

157 Annotated with Dependence Edges [Figure: the waterfall annotated with edge types: fetch BW, data dependence, ROB, branch misprediction]

158 Edge Weights Added [Figure: the annotated waterfall with a latency weight on each edge]

159 Convert to Graph [Figure: the waterfall redrawn as the F/E/C dependence graph with weighted edges]

160 Find Critical Path [Figure: the dependence graph with the critical path highlighted]

161 Add Non-last-arriving Edges [Figure: the graph with the non-last-arriving edges added back]

162 Graph Alterations [Figure: the graph altered so that the branch misprediction is made correct]

163 Token-passing analyzer

164 Step 1. Observing Observation: for R1 = R2 + R3, if the dependence into R2 is on the critical path, then the value of R2 arrived last. critical ⇒ arrives last, but arrives last does not imply critical. A dependence resolved early cannot be critical.

165 Determining last-arrive edges Observe events within the machine: last_arrive[F] = E→F if branch misp., C→F if ROB stall, F→F otherwise. last_arrive[E] = F→E if data ready on fetch, E→E otherwise (observe arrival order of operands). last_arrive[C] = E→C if commit pointer is delayed, C→C otherwise.

166 Last-arrive edges: a CPU stethoscope [Diagram: the CPU emits its last-arrive edges: E→C, E→E, F→E, C→F, F→F, E→F, C→C]

167 Last-arrive edges [Figure: the graph showing only last-arrive edges, with their weights]

168 Remove latencies [Figure: the same graph with weights removed] Do not need explicit weights.

169 Last-arrive edges The last-arrive rule ⇒ the CP consists only of “last-arrive” edges. [Figure: F/E/C graph of last-arrive edges]

170 Prune the graph Only need to put last-arrive edges in the graph; no other edges could be on the CP. [Figure: pruned graph, newest instruction at the right]

171 …and we’ve found the critical path! Backward propagate along last-arrive edges from the newest node. Found the CP by observing only last-arrive edges; but this still requires constructing the entire graph.

172 Step 2. Efficient analysis The CP is a “long” chain of last-arrive edges ⇒ the longer a given chain of last-arrive edges, the more likely it is part of the CP. Algorithm: find sufficiently long last-arrive chains. 1. Plant token into a node n. 2. Propagate forward, only along last-arrive edges. 3. Check for token after several hundred cycles. 4. If token alive, n is assumed critical.

173 Token-passing example 1. Plant token. 2. Propagate token. 3. Is token alive? 4. Yes: train critical. ⇒ Found the CP without constructing the entire graph. [Figure: token propagation across a ROB-sized window of the graph]

174 Implementation: a small SRAM array [Diagram: a token queue SRAM, written with the last-arrive producer node (inst id, type) and read at commit (inst id, type)] Size of SRAM: 3 bits × ROB size < 200 bytes. Simply replicate for additional tokens.

175 Putting it all together [Diagram: on the training path, the OOO core feeds last-arrive edges (producer → retired instr) to the token-passing analyzer, which trains a PC-indexed CP prediction table; on the prediction path, the table answers “E-critical?” queries]

176 Scheduling and Steering

177 Case Study #1: Clustered architectures [Diagram: steering into per-cluster issue windows, then scheduling] 1. Current state of the art (Base). 2. Base + CP Scheduling. 3. Base + CP Scheduling + CP Steering.

178 Current State of the Art [Plot: unclustered vs. 2-cluster vs. 4-cluster, at constant issue width and clock frequency] Avg. clustering penalty for 4 clusters: 19%.

179 CP Optimizations [Plot: same configurations] Base + CP Scheduling.

180 CP Optimizations [Plot: same configurations] Base + CP Scheduling + CP Steering. Avg. clustering penalty reduced from 19% to 6%.

181 Token-passing Vs. Heuristics

182 Local Vs. Global Analysis [Plot: oldest-uncommitted and oldest-unissued heuristics vs. token-passing] Previous CP predictors made local, resource-sensitive predictions (HPCA 01, ISCA 01). CP exploitation seems to require global analysis.

183 Icost case study

184 Icost Case Study: Deep pipelines Deep pipelines cause long latency loops: level-one (DL1) cache access, issue-wakeup, branch misprediction, … But we can often mitigate them indirectly. Assume a 4-cycle DL1 access; how to mitigate? Increase cache ports? Increase window size? Increase fetch BW? Reduce cache misses? Really, we are looking for serial interactions!

185-191 Icost Case Study: Deep pipelines [Figure, stepped through over seven slides: an F/E/C dependence graph for instructions i1-i6, highlighting the 4-cycle DL1 access edges and the window edge]

192-195 Icost Breakdown (6 wide, 64-entry window) [Table built up across four slides; final values, as % of execution time:]

              gcc      gzip     vortex
DL1           18.3 %   30.5 %   25.8 %
DL1+window    -4.2     -15.3    -24.5
DL1+bw        10.0      6.0      15.5
DL1+bmisp     -7.0     -3.4     -0.3
DL1+dmiss     -1.4     -0.4     -1.4
DL1+alu       -1.6     -8.2     -4.7
DL1+imiss      0.1      0.0      0.4
...
Total        100.0    100.0    100.0

196-197 Vortex Breakdowns, enlarging the window

              64       128      256
DL1           25.8      8.9      3.9
DL1+window   -24.5     -7.7     -2.6
DL1+bw        15.5     16.7     13.2
DL1+bmisp     -0.3     -0.6     -0.8
DL1+dmiss     -1.4     -2.1     -2.8
DL1+alu       -4.7     -2.5     -0.4
DL1+imiss      0.4      0.5      0.3
...
Total        100.0     80.8     75.0

198 Shotgun Profiling

199 Profiling goal Goal: construct the graph over many dynamic instructions. Constraint: can only sample sparsely.

200 Profiling goal Goal: construct the graph. Constraint: can only sample sparsely. [Figure: analogy to genome sequencing of a DNA strand]

201-204 “Shotgun” genome sequencing [Figure, over four slides: a DNA strand is sampled as many short random fragments; overlaps among the samples are found to reassemble the whole strand]

205 Mapping “shotgun” to our situation [Figure: a long stream of dynamic instructions sampled in fragments, each annotated with events: icache miss, dcache miss, branch misp., no event]

206 Profiler hardware requirements [Figure: sampled graph fragments]

207 Profiler hardware requirements [Figure: two sampled fragments whose contexts overlap: match!]

208 Offline Profiler Algorithm [Figure: one long sample plus many detailed samples, pieced together offline]

209 Design issues Identify the microexecution context: choosing signature bits; determining PCs (for better detailed-sample matching). [Figure: a long sample with a start PC and signature bit positions 12, 16, 20, 24, 56, 60, …; each branch encodes a taken/not-taken bit in the signature, and a detailed sample matches where PCs and signatures are equal]
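A sketch of how such signature matching might work; the bit encoding and the uniqueness rule below are our assumptions, not the paper’s exact scheme.

```python
def make_signature(branch_outcomes):
    # pack taken/not-taken bits, oldest outcome first, into an integer
    sig = 0
    for taken in branch_outcomes:
        sig = (sig << 1) | int(taken)
    return sig

def place_detailed_sample(long_sample_windows, detail_pc, detail_sig):
    # long_sample_windows: list of (start_pc, signature) cut from the
    # long sample; a detailed sample is placed only where both match.
    hits = [i for i, (pc, sig) in enumerate(long_sample_windows)
            if pc == detail_pc and sig == detail_sig]
    return hits[0] if len(hits) == 1 else None   # ambiguous/absent: discard
```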

210-217 Sources of error [Table built up across eight slides; final values:]

Error Source                          Gcc      Parser   Twolf
Building graph fragments              5.3 %    1.5 %    1.6 %
Sampling only a few graph fragments   4.8 %    6.5 %    7.2 %
Modeling execution as a graph         2.1 %    6.0 %    0.1 %
Total                                12.2 %   14.0 %    8.9 %

218 Icost vs. Sensitivity Study

219 Compare Icost and Sensitivity Study Corollary to the DL1 and ROB serial interaction: as load latency increases, the benefit from enlarging the ROB increases. [Figure: F/E/C dependence-graph fragment for instructions i1-i6 with DL1 access edges]

220 Compare Icost and Sensitivity Study

221 Sensitivity Study Advantages: more information, e.g., concave or convex curves. Interaction Cost Advantages: easy (automatic) interpretation; sign and magnitude have well-defined meanings; concise communication (“DL1 and ROB interact serially”).

222 Outline Definition (ISCA ’01) what does it mean for an event to be critical? Detection (ISCA ’01) how can we determine what events are critical? Interpretation (MICRO ’04, TACO ’04) what does it mean for two events to interact? Application (ISCA ’01-’02, TACO ’04) how can we exploit criticality in hardware?

223 Our solution: measure interactions Two parallel cache misses (each 100 cycles). Cost(miss #1) = 0, Cost(miss #2) = 0, Cost({miss #1, miss #2}) = 100. Aggregate cost > sum of individual costs ⇒ parallel interaction. icost = aggregate cost – sum of individual costs = 100 – 0 – 0 = 100

224 Interaction cost (icost) icost = aggregate cost – sum of individual costs. 1. Positive icost ⇒ parallel interaction (two overlapping misses). 2. Zero icost ?

225 Interaction cost (icost) icost = aggregate cost – sum of individual costs. 1. Positive icost ⇒ parallel interaction (two overlapping misses). 2. Zero icost ⇒ independent (misses far apart). 3. Negative icost ?

226 Negative icost Two serial cache misses (data dependent), miss #1 (100 cycles) then miss #2 (100 cycles), with a parallel ALU latency chain (110 cycles). Cost(miss #1) = ?

227 Negative icost Two serial cache misses (data dependent), miss #1 (100 cycles) then miss #2 (100 cycles), with a parallel ALU latency chain (110 cycles). Cost(miss #1) = 90, Cost(miss #2) = 90, Cost({miss #1, miss #2}) = 90. icost = aggregate cost – sum of individual costs = 90 – 90 – 90 = –90. Negative icost ⇒ serial interaction.

228 Interaction cost (icost) icost = aggregate cost – sum of individual costs. 1. Positive icost ⇒ parallel interaction (two overlapping misses). 2. Zero icost ⇒ independent (misses far apart). 3. Negative icost ⇒ serial interaction (e.g., two misses bridged by an ALU latency chain). Other sources of serial interactions: branch mispredict, fetch BW, load-replay trap, LSQ stall.

229 Why care about serial interactions? (Two serial misses, 100 cycles each, bridged by a 110-cycle ALU chain.) Reason #1: We are over-optimizing! Prefetching miss #2 doesn’t help if miss #1 is already prefetched (but the overhead still costs us). Reason #2: We have a choice of what to optimize. Prefetching miss #2 has the same effect as prefetching miss #1.

230 Outline Definition (ISCA ’01) what does it mean for an event to be critical? Detection (ISCA ’01) how can we determine what events are critical? Interpretation (MICRO ’04, TACO ’04) what does it mean for two events to interact? Application (ISCA ’01-’02, TACO ’04) how can we exploit criticality in hardware?

231 Criticality Analyzer (ISCA ‘01) Goal: detect criticality of dynamic instructions. Procedure: 1. Observe last-arriving edges (uses simple rules). 2. Propagate a token forward along last-arriving edges (at worst, a read-modify-write sequence on a small array). 3. If the token dies, non-critical; otherwise, critical.

232 Slack Analyzer (ISCA ‘02) Goal: detect the likely slack of static instructions. Procedure: 1. Delay the instruction by n cycles. 2. Check if critical (via the critical-path analyzer): if no, the instruction has n cycles of slack; if yes, it does not.

233 Shotgun Profiling (TACO ‘04) Goal: create representative graph fragments. Procedure: enhance ProfileMe counters with context; use the context to piece together counter samples.

