
CPUs CPU performance CPU power consumption Elements of CPU performance Cycle time CPU pipeline Memory system.



2 CPUs
- CPU performance
- CPU power consumption

3 Elements of CPU performance
- Cycle time
- CPU pipeline
- Memory system

4 Pipelining
Several instructions are executed simultaneously, each at a different stage of completion. Various conditions can cause pipeline bubbles that reduce utilization:
- branches
- memory system delays
- etc.

5 Pipeline structures
The ARM7 has a 3-stage pipeline:
- fetch instruction from memory
- decode opcode and operands
- execute
The ARM9 has a 5-stage pipeline:
- instruction fetch
- decode
- execute
- data memory access
- register write

6 ARM9 core instruction pipeline

7 ARM7 pipeline execution

    time →        1        2        3        4        5
    add r0,r1,#5  fetch    decode   execute
    sub r2,r3,r6           fetch    decode   execute
    cmp r2,#3                       fetch    decode   execute

8 Performance measures
- Latency: the time it takes one instruction to get through the pipeline.
- Throughput: the number of instructions executed per unit time.
- Pipelining increases throughput without reducing latency.

9 Pipeline stalls
If a stage cannot complete its step in the same amount of time as the others, the pipeline stalls. The bubbles introduced by a stall increase latency and reduce throughput.

10 ARM multi-cycle LDMIA instruction

    time →            1      2       3         4         5       6
    ldmia r0,{r2,r3}  fetch  decode  ex ld r2  ex ld r3
    sub r2,r3,r6             fetch   decode    (wait)    ex sub
    cmp r2,#3                        fetch     (wait)    decode  ex cmp

11 Control stalls
- Branches often introduce stalls (branch penalty).
- Stall time may depend on whether the branch is taken.
- Instructions that already started executing may have to be squashed.
- The pipeline doesn't know what to fetch until the branch condition is evaluated.

12 ARM pipelined branch

    time →            1      2       3       4       5       6
    bne foo           fetch  decode  ex bne  ex bne  ex bne
    sub r2,r3,r6             fetch   decode  (squashed)
    foo add r0,r1,r2                         fetch   decode  ex add

13 Delayed branch
To increase pipeline efficiency, a delayed branch mechanism requires that the n instructions after a branch always be executed, whether or not the branch is taken.

14 Example: ARM7 execution time
Determine the execution time of an FIR filter:
    for (i = 0; i < N; i++)
        f = f + c[i] * x[i];

15 ARM10 processor execution time
It is impossible to describe briefly the exact behavior of all instructions in all circumstances. Execution time depends on:
- branch prediction
- the prefetch buffer
- branch folding
- the independent load/store unit
- data alignment
- how many accesses hit in the cache and TLB

16 ARM10 integer core (3 instrs)

17 Integer core
Prefetch unit:
- fetches instructions from the I-cache or external memory
- predicts the outcome of branches whenever it can
Integer unit:
- decode
- barrel shifter, ALU, multiplier
- main instruction sequencer
Load/store unit:
- loads or stores two registers (64 bits) per cycle
- decouples from the integer unit after the first access of an LDM or STM instruction
- supports Hit-Under-Miss (HUM) operation

18 Pipeline
- Fetch: I-cache access, branch prediction
- Issue: initial instruction decode
- Decode: final instruction decode, register read for ALU ops, forwarding, and initial interlock resolution
- Execute: data address calculation, shift, flag setting, condition-code check, branch mispredict detection, and store data register read
- Memory: data cache access
- Write: register writes, instruction retirement

19 Typical operations

20 Load or store operation

21 LDR operation that misses

22 Interlocks
The integer core uses forwarding to resolve data dependencies between instructions. Pipeline interlocks:
- Data dependency interlocks: instructions whose source register is loaded from memory by the previous instruction.
- Hardware dependency interlocks:
  - a new load waiting for the LSU to finish an existing LDM or STM
  - a load that misses when the HUM slot is already occupied
  - a new multiply waiting for a previous multiply to free up the first stage of the multiplier

23 Pipeline forwarding paths

24 Example of interlocking and forwarding
Execute-to-execute:
    mov r0, #1
    add r1, r0, #1
Memory-to-execute:
    mov r0, #1
    sub r1, r2, #2
    add r2, r0, #1

25 Example of interlocking and forwarding, cont'd
Single-cycle interlock:
    ldr r0, [r1, r2]
    str r3, [r0, r4]
The ldr computes r1+r2 in execute and loads r0 in the memory stage. The str needs r0 for its address, so it stalls one cycle, then reads the forwarded r0 and computes r0+r4; r3 is read and written to memory.

26 Superscalar execution
A superscalar processor can execute several instructions per cycle, using multiple pipelined data paths. Programs execute faster, but it is harder to determine how much faster.

27 Data dependencies
Execution time depends on the operands, not just the opcode. A superscalar CPU checks data dependencies dynamically:
    add r2, r0, r1
    add r3, r2, r5
The second add reads r2, which the first add writes: a data dependency that prevents the two from issuing together.

28 Memory system performance
- Caches introduce indeterminacy in execution time, which depends on the order of execution.
- Cache miss penalty: the added time due to a cache miss.
- A miss can occur for several reasons: compulsory, conflict, capacity.
