
CSci4203/ECE4363 Review Quiz

1) Why can a multicycle implementation be better than a single-cycle implementation?
1) It is less expensive
2) It is usually faster
3) Its average CPI is smaller
4) It allows a faster clock rate
5) It has a simpler design
Answer: 1, 2, 4

2) A pipelined implementation can:
1) Increase throughput
2) Decrease the latency of each operation
3) Decrease cache misses
4) Increase TLB misses
5) Decrease clock cycle time
Answer: 1, 5
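A worked illustration for question 2, using made-up stage latencies rather than figures from the slides: suppose a single-cycle datapath needs an 800 ps clock and is split into five pipeline stages of 200 ps each (the slowest stage plus register overhead sets the cycle time). The clock cycle shrinks from 800 ps to 200 ps and, once the pipeline is full, throughput approaches one instruction per 200 ps; but each individual instruction now spends 5 x 200 ps = 1000 ps in the pipeline, so per-instruction latency does not decrease.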

3) Which of the following techniques can reduce the penalty of data dependencies?
1) Data forwarding
2) Instruction scheduling
3) Out-of-order superscalar implementation
4) Deeper pipelined implementation
Answer: 1, 2, 3

4) What are the advantages of loop unrolling?
1) It increases ILP for more effective scheduling
2) It reduces branch instructions
3) It decreases instruction cache misses
4) It reduces memory operations
5) It reduces compile time
Answer: 1, 2
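A minimal sketch of loop unrolling for question 4; the saxpy-style loop, the array names, and the unroll factor of 4 are illustrative assumptions rather than anything from the slides:

void saxpy(float *x, float *y, float a, int n) {
    int i;
    /* Original loop: one multiply-add plus one loop test and branch per element.
     *   for (i = 0; i < n; i++) y[i] = a * x[i] + y[i];
     * Unrolled by 4: one loop test and branch per four elements (fewer branch
     * instructions), and the four independent statements expose more ILP for
     * the scheduler. */
    for (i = 0; i + 3 < n; i += 4) {
        y[i]     = a * x[i]     + y[i];
        y[i + 1] = a * x[i + 1] + y[i + 1];
        y[i + 2] = a * x[i + 2] + y[i + 2];
        y[i + 3] = a * x[i + 3] + y[i + 3];
    }
    /* Cleanup loop for the leftover iterations when n is not a multiple of 4. */
    for (; i < n; i++)
        y[i] = a * x[i] + y[i];
}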

5) What are the advantages of a split cache?
1) It decreases the cache miss rate
2) It doubles cache access bandwidth
3) It is less expensive
4) It is easier to design
Answer: 2

6) Which of the following micro-architecture features are likely to boost the performance of the loop below?
1) Deep pipelining
2) Accurate branch prediction
3) An efficient cache hierarchy
4) Speculative execution support

while (p != NULL) {
    m = p->data;
    p = p->next;
}

Answer: 2, 3, 4

7) Using a 2-bit branch prediction scheme, what will the branch misprediction rate be for the following loop?
1) About 10%
2) About 50%
3) About 66%
4) About 33%
5) About 1%

for (i = 1; i < 10000; i++) {
    for (j = 1; j < 3; j++)
        statements;
}

Answer: 4
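A minimal simulation sketch of why the answer to question 7 is about 33%, assuming the inner-loop test j < 3 compiles to a single conditional branch evaluated three times per outer iteration (taken, taken, not taken); the counter encoding and starting state are illustrative assumptions:

#include <stdio.h>

int main(void) {
    /* 2-bit saturating counter: 0-1 predict not taken, 2-3 predict taken. */
    int counter = 3;                 /* start strongly taken; the start state barely matters */
    long predictions = 0, mispredictions = 0;

    for (int i = 1; i < 10000; i++) {        /* outer loop from the quiz code */
        for (int j = 1; j <= 3; j++) {       /* three evaluations of the inner-loop test */
            int taken = (j < 3);             /* actual branch outcome */
            int predicted_taken = (counter >= 2);

            predictions++;
            if (predicted_taken != taken)
                mispredictions++;

            /* move the counter toward the actual outcome */
            if (taken && counter < 3) counter++;
            if (!taken && counter > 0) counter--;
        }
    }
    /* Prints about 33.3%: one misprediction (the loop exit) per three branch executions. */
    printf("misprediction rate = %.1f%%\n", 100.0 * mispredictions / predictions);
    return 0;
}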

8) You have decided to design a cache hierarchy with two levels of caching, L1 and L2. Which of the following configurations are likely to be used?
1) L1 write-through and L2 write-back
2) L1 is unified and L2 is split
3) L1 has a line size larger than the L2 line size
4) L1 is private and L2 is shared in a CMP
Answer: 1, 4

9) It takes a long time (millions of cycles) to get the first byte from a disk. What should be done to reduce this cost?
1) Use larger pages
2) Increase the size of the TLB
3) Two levels of TLB
4) Disk caching: keep frequently used files in memory
Answer: 1, 4

10) The page table is large and often space-inefficient. What techniques can be used to deal with this?
1) Two-level page tables
2) Hashed page tables
3) Linked lists
4) Sequential search tables
Answer: 1, 2
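A minimal sketch of the two-level page table idea from question 10, assuming a 32-bit virtual address with 4 KB pages split into a 10-bit first-level index, a 10-bit second-level index, and a 12-bit offset; the types and field names are illustrative. Space is saved because a second-level table is only allocated for the 4 MB regions a process actually uses:

#include <stdint.h>
#include <stddef.h>

#define OFFSET_BITS 12
#define LEVEL_BITS  10

typedef struct {
    uint32_t *tables[1 << LEVEL_BITS];   /* second-level tables, allocated lazily */
} level1_table;

/* Walk the two levels; returns 0 if the region has no second-level table. */
uint32_t translate(level1_table *l1, uint32_t vaddr) {
    uint32_t i1     = vaddr >> (OFFSET_BITS + LEVEL_BITS);                /* top 10 bits */
    uint32_t i2     = (vaddr >> OFFSET_BITS) & ((1u << LEVEL_BITS) - 1);  /* next 10 bits */
    uint32_t offset = vaddr & ((1u << OFFSET_BITS) - 1);                  /* low 12 bits */

    uint32_t *l2 = l1->tables[i1];
    if (l2 == NULL)                      /* unused region: no table allocated, no space spent */
        return 0;
    return (l2[i2] << OFFSET_BITS) | offset;   /* frame number plus page offset */
}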

11) Which of the following techniques may reduce the cache miss penalty?
1) Requested word first (or critical word first)
2) Multi-level caches
3) Increase the size of main memory without interleaving
4) Have a faster memory bus
Answer: 1, 2, 4

12) Which of the following techniques can usually help reduce the total penalty of capacity misses?
1) Split the cache into two
2) Increase associativity
3) Increase line size
4) Cache prefetching
Answer: 3, 4

13) Cache performance is more important under which of the following conditions?
1) When the bus bandwidth is insufficient
2) When the perfect-cache CPI is low and the clock rate is high
3) When the perfect-cache CPI is high and the clock rate is low
4) When the main memory is not large enough
Answer: 1, 2

14) Which of the following statements are true for the TLB?
1) The TLB caches frequently used virtual-to-physical translations
2) The TLB is usually smaller than the caches
3) The TLB uses a write-through policy
4) TLB misses can be handled by either software or hardware
Answer: 1, 2, 4

15) Which of the following designs will see a greater impact when we move from the 32-bit MIPS architecture to 64-bit MIPS?
1) Virtual memory support
2) Datapath design
3) Control path design
4) Floating-point functional unit
5) Cache design
Answer: 1, 2, 5

16) Which of the following statements are true for microprogramming?
1) Microprogramming can be used to implement structured control design
2) Microprogramming simplifies control design and allows for a faster, more reliable design
3) Microprogrammed control yields a faster processor
4) Microprogramming is used in recent Intel Pentium processors
Answer: 1, 2, 4

17) In a pipelined implementation, which hazards may often occur?
1) Control hazards
2) Data hazards
3) Floating-point exceptions
4) Structural hazards
Answer: 1, 2, 4

18) My program has a very high cache miss rate (> 50%). I traced it down to the following function. Which type of cache miss is it? Assume a typical cache with a 32 B line size.
1) Capacity miss
2) Compulsory miss
3) Conflict miss
4) Cold miss

void functionA(float *a, float *b, int n) {
    for (int i = 1; i < n; i++)
        *a++ += *b++;    /* a[i] = a[i] + b[i] */
}

Answer: 3
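A minimal sketch of why question 18 points at conflict misses, using an assumed 32 KB direct-mapped cache: if a and b sit exactly a multiple of the cache size apart, a[i] and b[i] always index the same set, so each access evicts the line the other just brought in; padding one allocation by a line breaks the pattern. The sizes and helper names here are illustrative assumptions, not from the slides:

#include <stdlib.h>

#define CACHE_SIZE (32 * 1024)      /* assumed cache capacity */
#define LINE_SIZE  32               /* line size given in the question */

/* Lay a and b out back to back: when n * sizeof(float) is a multiple of
 * CACHE_SIZE, a[i] and b[i] map to the same set and thrash each other. */
float *alloc_conflicting(float **a, float **b, int n) {
    float *block = malloc(2 * (size_t)n * sizeof(float));
    *a = block;
    *b = block + n;
    return block;
}

/* One line of padding shifts b[i] into a different set than a[i]. */
float *alloc_padded(float **a, float **b, int n) {
    float *block = malloc(2 * (size_t)n * sizeof(float) + LINE_SIZE);
    *a = block;
    *b = (float *)((char *)(block + n) + LINE_SIZE);
    return block;
}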

19) What are the major motivations for virtual memory?
1) To support multiprogramming
2) To increase system throughput
3) To allow efficient and safe sharing
4) To remove the programming burden of a small physical memory
Answer: 3, 4

20) Among page faults, TLB misses, branch mispredictions, and cache misses, which of the following statements are true?
1) A page fault is the most expensive
2) An L1 cache miss may cost less than a branch misprediction
3) TLB misses usually cost more than L1 misses
4) A TLB miss will always cause a corresponding L1 miss
Answer: 1, 2, 3

21) Assume we have a four-line, fully associative instruction cache. Which replacement algorithm works best for the following loop?
1) LRU
2) Random
3) MRU
4) NRU
5) FIFO

for (i = 1; i < n; i++) {
    line1; line2; line3; line4; line5;
}

Answer: 3
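A minimal simulation sketch for question 21: five distinct instruction lines fetched round-robin through a 4-entry fully associative cache, comparing LRU and MRU victim selection. The cache model and iteration count are illustrative assumptions:

#include <stdio.h>

#define WAYS  4
#define LINES 5

static double simulate(int evict_mru, int iterations) {
    int tag[WAYS], age[WAYS];              /* age: larger value = more recently used */
    int hits = 0, refs = 0, clock = 0;
    for (int w = 0; w < WAYS; w++) { tag[w] = -1; age[w] = 0; }

    for (int i = 0; i < iterations; i++) {
        for (int line = 0; line < LINES; line++) {   /* line1 .. line5 of the loop body */
            refs++;
            int hit_way = -1, victim = 0;
            for (int w = 0; w < WAYS; w++) {
                if (tag[w] == line) hit_way = w;
                if (evict_mru ? age[w] > age[victim] : age[w] < age[victim])
                    victim = w;            /* MRU evicts the newest entry, LRU the oldest */
            }
            if (hit_way >= 0) {
                hits++;
                age[hit_way] = ++clock;
            } else {
                for (int w = 0; w < WAYS; w++)       /* prefer an empty way before evicting */
                    if (tag[w] == -1) { victim = w; break; }
                tag[victim] = line;
                age[victim] = ++clock;
            }
        }
    }
    return 100.0 * hits / refs;
}

int main(void) {
    printf("LRU hit rate: %.1f%%\n", simulate(0, 10000));  /* 0%: the loop thrashes LRU */
    printf("MRU hit rate: %.1f%%\n", simulate(1, 10000));  /* roughly 80% in steady state */
    return 0;
}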

22) Which of the following techniques can reduce control hazards in a pipelined processor?
1) Branch prediction
2) Loop unrolling
3) Procedure in-lining
4) Predicated execution
Answer: 1, 2, 3, 4

23) Which techniques can be used to reduce conflict misses?
1) Increase associativity
2) Adding a victim cache
3) Using large lines
4) Use shared caches
Answer: 1, 2

24) Which of the following statements are true for RAID (Redundant Array of Inexpensive Disks)?
1) RAID 0 has no redundancy
2) RAID 1 is the most expensive
3) RAID 5 has the best reliability
4) RAID 4 is better than RAID 3 because it supports efficient small reads and writes
Answer: 1, 2, 4

25) Adding new features to a machine may require changes to the ISA. Which of the following features can be added without changing the ISA?
1) Predicated instructions
2) Software-controlled data speculation
3) Static branch prediction
4) Software-controlled cache prefetching
Answer: 3, 4

26) A branch predictor is similar to a cache in many aspects. Which of the following cache parameters can be avoided in a simple branch predictor?
1) Associativity
2) Line size
3) Replacement algorithms
4) Write policy
5) Tag size
Answer: 1, 2, 3, 4, 5

27) Assume cache size = 32 KB and line size = 32 B. How many bits are used for the index and the tag in a 4-way set-associative cache? Assume 16 GB of physical memory.
1) Tag = 19, Index = 8
2) Tag = 20, Index = 10
3) Tag = 21, Index = 8
4) Tag = 17, Index = 10
Answer: 3
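A worked check of question 27 using standard cache-geometry arithmetic (not shown on the slides): a 32 B line needs 5 offset bits (2^5 = 32); the number of sets is 32 KB / (32 B x 4 ways) = 256, so the index needs 8 bits (2^8 = 256); 16 GB of physical memory means a 34-bit physical address (2^34 = 16 GB), leaving 34 - 5 - 8 = 21 tag bits, which matches choice 3.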

