Architecture Background

1 Architecture Background
Von Neumann architecture:
- CPU = ALU + control unit
- memory
- devices
The above are connected with buses.
ALU – carries out instructions; may have more than one execution unit.
Program counter – holds the memory address of the next instruction to fetch.

2 Hardware instructions
Program counter – holds the memory address of the next instruction to fetch.
One high-level-language instruction, e.g. i = i + 1;, can become many assembly instructions (Load, Add, Store), as sketched below. What about programs running in parallel?
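A minimal sketch of that expansion, assuming a generic load/store machine; the pseudo-assembly in the comments is illustrative, not any real instruction set:

```c
/* One C statement, several hardware steps. The pseudo-assembly in
 * the comments is generic, not a real ISA. */
#include <stdio.h>

int main(void) {
    int i = 41;

    i = i + 1;   /* LOAD  R1, [i]    fetch i from memory into a register
                    ADD   R1, 1      add one in the ALU
                    STORE [i], R1    write the result back to memory */

    printf("%d\n", i);   /* prints 42 */
    return 0;
}
```

If two programs interleave those three steps on a shared i, updates can be lost, which is one reason running programs in parallel complicates this picture.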

3 More executing code
Memory – programs (code and data) must be in memory to run --> a program in memory and executing is a process.
The processor repeats Fetch and Execute cycles: Start -> Fetch -> Execute -> ... -> Halt. A sketch of the loop follows.
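A minimal sketch of the fetch-execute cycle as software, assuming a toy machine whose two opcodes (ADDI, HALT) are invented for illustration; a real CPU does this cycle in hardware:

```c
/* Toy fetch-execute loop over a tiny program held in memory. */
#include <stdio.h>

enum { HALT = 0, ADDI = 1 };            /* invented toy instruction set */

int main(void) {
    int memory[] = { ADDI, 5, ADDI, 7, HALT };  /* a tiny program in memory */
    int pc  = 0;                        /* program counter */
    int acc = 0;                        /* accumulator register */

    for (;;) {
        int opcode = memory[pc++];      /* Fetch: read instruction, advance PC */
        switch (opcode) {               /* Execute */
        case ADDI: acc += memory[pc++]; break;
        case HALT: printf("acc = %d\n", acc); return 0;   /* prints acc = 12 */
        }
    }
}
```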

4 Interrupts
Interrupts – a mechanism by which other modules may interrupt the normal processing of the processor. Used to improve processor efficiency: I/O is slow, so overlap I/O with CPU operations. How do interrupts affect the Fetch and Execute cycle picture?

5 Interrupt cycle
Add an interrupt cycle: Start -> Fetch -> Execute -> Check interrupts (if interrupts are enabled) -> Fetch ... -> Halt.
If there is a pending interrupt, the processor suspends execution of the running program and executes the interrupt handler. If interrupts are disabled, the check is skipped. A sketch follows.
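Extending the toy loop from slide 3 with the interrupt check; pending_interrupt and handle_interrupt() are invented stand-ins for a hardware interrupt line and a vectored handler:

```c
/* The toy fetch-execute loop with an interrupt cycle added. */
#include <stdbool.h>
#include <stdio.h>

enum { HALT = 0, ADDI = 1 };

static volatile bool interrupts_enabled = true;
static volatile bool pending_interrupt  = false;

static void handle_interrupt(void) {
    printf("servicing interrupt\n");   /* save state, run handler, restore */
    pending_interrupt = false;
}

void cpu_loop(const int *memory) {
    int pc = 0, acc = 0;
    for (;;) {
        int opcode = memory[pc++];                 /* Fetch */
        switch (opcode) {                          /* Execute */
        case ADDI: acc += memory[pc++]; break;
        case HALT: printf("acc = %d\n", acc); return;
        }
        if (interrupts_enabled && pending_interrupt)
            handle_interrupt();                    /* Check interrupts */
    }
}

int main(void) {
    int program[] = { ADDI, 5, ADDI, 7, HALT };
    pending_interrupt = true;          /* simulate a device raising an interrupt */
    cpu_loop(program);                 /* prints: servicing interrupt, acc = 12 */
    return 0;
}
```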

6 Memory Hierarchy
From top (fastest, most expensive per byte, smallest) to bottom (slowest, cheapest, largest):
- CPU registers (volatile)
- L1 cache (volatile)
- L2 cache (volatile)
- RAM / main memory (volatile)
- disks, optical media, tapes, compact flash (NON-volatile)
Speed and cost per byte fall, and size grows, moving down the hierarchy.

7 Memory
Load a word from memory into a register; store a register back to memory.
RAM is usually too small, so we need swap (virtual memory). Disks are used for swap space as well as to store programs and data. Unused memory is wasted memory!

8 Cache Memory
Cache is faster, but more expensive, than RAM; it is organized into cache lines.
Memory is accessed during every fetch-execute cycle, so if we can keep data in faster cache memory we achieve better performance. Accesses are hits or misses (like hits and outs), as the sketch below makes concrete.
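A minimal sketch of a direct-mapped cache lookup; the line size and line count are invented just to make "line", "hit", and "miss" concrete:

```c
/* Toy direct-mapped cache: an access hits if its line is already
 * resident, otherwise it misses and the line is loaded. The line
 * size and line count are invented for illustration. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_LINES  8
#define LINE_SHIFT 2                         /* 4-byte cache lines */

static uint32_t tags[NUM_LINES];
static bool     valid[NUM_LINES];

static bool cache_access(uint32_t addr) {
    uint32_t line  = addr >> LINE_SHIFT;     /* strip the byte offset */
    uint32_t index = line % NUM_LINES;       /* which slot the line maps to */
    if (valid[index] && tags[index] == line)
        return true;                         /* hit */
    valid[index] = true;                     /* miss: load the line */
    tags[index]  = line;
    return false;
}

int main(void) {
    printf("%s\n", cache_access(0x100) ? "hit" : "miss");  /* miss (cold) */
    printf("%s\n", cache_access(0x102) ? "hit" : "miss");  /* hit (same line) */
    return 0;
}
```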

9 Cache Memory
The more hits we get from the cache, the better the performance; a "hit ratio" is like a batting average. The cache is much smaller than what is being cached, and there are multiple levels of cache. We'll also consider data in main memory vs. secondary memory later.

10 Cache Memory
The key to a good hit ratio is to take advantage of the principle of locality: memory references tend to cluster while a program is running, for both data and instructions. In a loop the same instructions are executed many times, and the data being worked on may be accessed many times.

11 Cache Memory
Over a short period of time the same memory is accessed many times --> which is why using a cache works. The same principle applies across many levels of the memory hierarchy:
L1 cache (KB) – L2 cache (MB) – RAM (GB) – swap (GB)

12 Locality
Two types of locality (both shown in the sketch below):
Spatial – accesses to memory locations are clustered.
- instructions execute sequentially
- data locations are accessed in order when processing a table
Temporal – memory that was recently used is accessed again.
- a loop repeats the same instructions
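A minimal sketch, assuming a plain C array, of both kinds of locality arising in one ordinary loop:

```c
/* Both kinds of locality in one ordinary loop. */
#include <stdio.h>

#define N 1024

int main(void) {
    static int table[N];   /* zero-initialized table of data */
    long sum = 0;

    for (int i = 0; i < N; i++) {
        /* Spatial locality: table[0], table[1], ... are adjacent in
         * memory, so each cache line fetched serves several iterations. */
        sum += table[i];
        /* Temporal locality: the loop body's instructions and the
         * variables i and sum are reused on every iteration. */
    }
    printf("%ld\n", sum);
    return 0;
}
```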

13 Cache Memory
Let H be the "hit ratio", let the L1 cache access time be 0.1 µs, and let the L2 cache access time be 1 µs. If H is 0.95, then averaging over hits and misses:
(0.95)(0.1 µs) + (0.05)(0.1 µs + 1 µs) = 0.095 µs + 0.055 µs = 0.15 µs
The average access time stays much closer to the L1 access time because of the high hit ratio.
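The same arithmetic as a tiny program; the function name is ours, and times are in µs as on the slide (misses pay the failed L1 lookup plus the L2 access):

```c
/* Effective access time for a two-level cache. */
#include <stdio.h>

static double effective_access_time(double h, double t_l1, double t_l2) {
    /* hits pay t_l1; misses pay t_l1 (failed lookup) plus t_l2 */
    return h * t_l1 + (1.0 - h) * (t_l1 + t_l2);
}

int main(void) {
    double t = effective_access_time(0.95, 0.1, 1.0);  /* times in microseconds */
    printf("%.2f us\n", t);                            /* prints 0.15 us */
    return 0;
}
```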

14 More memory issues
Virtual address -> physical address mapping is done by the memory management unit (MMU); a sketch of the translation follows the list below.
Context switch – switching from one program to another:
- slow (expensive)
- registers must be copied (saved and restored)
- consider the effect on cache lines
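A minimal conceptual sketch of what the MMU does, assuming 4 KB pages and a single-level page table whose contents are invented for illustration; a real MMU does this in hardware with a TLB:

```c
/* Conceptual virtual -> physical address translation with 4 KB pages. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                       /* 4 KB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

static uint32_t page_table[] = { 7, 3, 42 };  /* virtual page -> physical frame */

static uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;       /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* offset within the page */
    return (page_table[vpn] << PAGE_SHIFT) | offset;
}

int main(void) {
    uint32_t v = 0x1234;                          /* page 1, offset 0x234 */
    printf("0x%x -> 0x%x\n", v, translate(v));    /* frame 3: prints 0x1234 -> 0x3234 */
    return 0;
}
```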

15 Current CPU technology
Intel/AMD – x86, x64; dual-core and quad-core multiprocessors.
Sparc Niagara T1 and T2 – 8 cores, with 4 and 8 threads per core respectively.
Power use is a big issue now.

