
1 EEC4133 Computer Organization & Architecture Chapter 7: Memory by Muhazam Mustapha, April 2014

2 Learning Outcome By the end of this chapter, students are expected to be able to understand and explain the concepts of memory, cache memory and virtual memory, and to construct a given memory structure Most of the material in this slide set is adapted from Murdocca & Heuring 2007

3 Chapter Content Memory and Memory Organization Cache Memory Virtual Memory

4 Memory & Memory Organization

5 The Memory Hierarchy

6 Simple RAM Cell SRAM (a) and DRAM (b):

7 Flash Memory Cell

8 RAM Pinout

9 Memory Organization 4-word × 4-bit memory:

10 Memory Organization Simplified:

11 Memory Organization Use two 4-word × 4-bit RAMs to construct a 4-word × 8-bit RAM (4-word × 1-byte):

12 Memory Organization Use two 4-word × 4-bit RAMs to construct an 8-word × 4-bit RAM:
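
A rough sketch in C of the two expansion techniques above (all names and the 2-bit chip address are invented for illustration only): widening the word keeps the same address lines and splits the data bits across the chips, while deepening the address space uses the extra address bit as a chip select.

```c
#include <stdint.h>
#include <stdio.h>

/* One 4-word x 4-bit RAM chip, modeled as four nibbles (illustrative only). */
typedef struct { uint8_t cell[4]; } ram4x4_t;

static uint8_t ram_read(const ram4x4_t *r, uint8_t addr)       { return r->cell[addr & 0x3] & 0xF; }
static void    ram_write(ram4x4_t *r, uint8_t addr, uint8_t d) { r->cell[addr & 0x3] = d & 0xF; }

/* 4-word x 8-bit: the same 2-bit address goes to both chips, one chip per nibble. */
uint8_t read_4x8(const ram4x4_t *hi, const ram4x4_t *lo, uint8_t addr) {
    return (uint8_t)(ram_read(hi, addr) << 4) | ram_read(lo, addr);
}

/* 8-word x 4-bit: the MSB of the 3-bit address acts as a chip select. */
uint8_t read_8x4(const ram4x4_t *bank0, const ram4x4_t *bank1, uint8_t addr) {
    const ram4x4_t *chip = (addr & 0x4) ? bank1 : bank0;   /* address bit 2 = chip select */
    return ram_read(chip, addr & 0x3);                     /* low 2 bits go to the chip   */
}

int main(void) {
    ram4x4_t a = {{1, 2, 3, 4}}, b = {{5, 6, 7, 8}};
    printf("4x8 word 1 = 0x%02X\n", read_4x8(&a, &b, 1));  /* 0x26: nibble 2 from a, 6 from b */
    printf("8x4 word 5 = 0x%X\n",   read_8x4(&a, &b, 5));  /* bank1, word 1 -> 6              */
    return 0;
}
```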

13 Dual In-Line Memory Module 256 MB dual in-line memory module organized for a 64-bit word with 16 16M × 8-bit RAM chips (eight chips on each side of the DIMM)
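
A quick check on the numbers above: 16 chips × 16M words × 8 bits per chip = 16 × 16 MB = 256 MB of total capacity, and the eight chips accessed together each supply 8 bits, giving the 64-bit word (8 × 8 = 64).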

14 ROM from Decoder and OR Gate

15 RAM as Look-Up Table ALU

16 Cache Memory

17 Improving Memory Performance Memory, as the main storage component in a computing system, often cannot meet the microprocessor's requirements: –It is too slow to keep up with current microprocessor speeds –It is too small to hold the entire code and multimedia data of modern software Smart workarounds are needed to solve these problems

18 Improving Memory Performance To solve the speed problem, the memory caching scheme (cache memory) was invented To solve the size problem, the memory virtualizing scheme (virtual memory) was invented [Diagram: μP ↔ cache memory (SRAM, fast) ↔ main memory (DRAM) ↔ virtual memory (hard disk, practically limitless space)]

19 Cache Memory [Definition] Cache memory is a piece of memory that is much smaller, but much faster, than the main memory; it temporarily holds the content of the main memory locations that the microprocessor is accessing, so that accesses do not have to go directly to / from the main memory Content from the main memory is loaded into the cache on demand

20 Cache Memory Since cache memory is smaller, we can only keep a small portion of main memory in it Cache memory is subdivided into slots – each slot keeps one portion (block) of main memory Hence there is a need to keep track of which part of main memory is being kept in which cache slot; this process is called mapping

21 Mapping Scheme [Definition] Mapping in cache memory is the scheme that links each slot in the cache to the part of main memory it is holding There are 3 mapping schemes: –Associative Mapping –Direct Mapping –Set Associative Mapping

22 Associative Mapping [Definition] Associative Mapping is a cache mapping scheme that allows any block in main memory to be kept in any slot in the cache The link to the right main memory location is resolved by tagging The MSB part of the address is used as the tag, and the LSB part is used to address the specific word within the slot
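
A minimal sketch in C of the associative address split (the field width and all names are assumed for illustration, e.g. 8 words per slot): the low bits select the word inside the slot, and all remaining high bits form the tag that must be compared against every slot.

```c
#include <stdint.h>

#define WORD_BITS 3                       /* assumed: 8 words per block/slot */

/* Associative mapping: the tag is everything above the word field. */
static inline uint32_t tag_of(uint32_t addr)  { return addr >> WORD_BITS; }
static inline uint32_t word_of(uint32_t addr) { return addr & ((1u << WORD_BITS) - 1); }

/* A hit means *some* slot's stored tag matches; any slot may hold any block. */
int find_slot(const uint32_t slot_tag[], const int valid[], int nslots, uint32_t addr) {
    for (int i = 0; i < nslots; i++)
        if (valid[i] && slot_tag[i] == tag_of(addr))
            return i;                     /* hit: this slot holds the block */
    return -1;                            /* miss: any free slot may be used */
}
```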

23 Associative Mapping

24 Replacement Policy Since the cache is smaller, there are times when it is full and some of its current content must be flushed to make room for newly demanded memory With associative mapping, a replacement policy is needed to control which slot is to be flushed and replaced

25 Replacement Policy Types of replacement policies: –Least recently used (LRU) –First-in/first-out (FIFO) –Least frequently used (LFU) –Random –Optimal (used for analysis only – look backward in time and reverse-engineer the best possible strategy for a particular sequence of memory references)
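
A minimal LRU sketch in C (the slot count and names are assumed, not from the slides): every access refreshes the used slot's timestamp, and on a miss the slot with the oldest timestamp becomes the victim. FIFO would instead stamp a slot only when a block is loaded into it.

```c
#include <stdint.h>

#define NSLOTS 8                       /* assumed cache size in slots */

static uint64_t last_used[NSLOTS];     /* time of most recent access per slot   */
static uint64_t now;                   /* logical clock, incremented per access */

/* Refresh a slot's timestamp on every hit (LRU); FIFO would skip this step. */
void touch(int slot) { last_used[slot] = ++now; }

/* Pick the victim: the slot whose last access is the oldest. */
int lru_victim(void) {
    int victim = 0;
    for (int i = 1; i < NSLOTS; i++)
        if (last_used[i] < last_used[victim])
            victim = i;
    return victim;
}
```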

26 Direct Mapping [Definition] Direct Mapping is a cache mapping scheme in which each block in main memory can be kept by only one specific slot in the cache Which memory block a slot currently holds is still resolved by tagging, since many blocks map to the same slot The MSB part of the address is used as the tag and the slot number, and the LSB part is used to address the specific word within the slot
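
Under similarly assumed field widths (8 words per block and 32 slots, chosen only for illustration), a direct-mapped address splits into tag | slot | word, so the slot number is read straight off the middle bits and only one stored tag has to be compared:

```c
#include <stdint.h>

#define WORD_BITS 3    /* assumed: 8 words per block */
#define SLOT_BITS 5    /* assumed: 32 slots in cache */

static inline uint32_t word_of(uint32_t a) { return a & ((1u << WORD_BITS) - 1); }
static inline uint32_t slot_of(uint32_t a) { return (a >> WORD_BITS) & ((1u << SLOT_BITS) - 1); }
static inline uint32_t tag_of(uint32_t a)  { return a >> (WORD_BITS + SLOT_BITS); }

/* Direct mapping: only one place the block can live, so a single tag compare. */
int is_hit(const uint32_t slot_tag[], const int valid[], uint32_t addr) {
    uint32_t s = slot_of(addr);
    return valid[s] && slot_tag[s] == tag_of(addr);
}
```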

27 Direct Mapping

28 Set Associative Mapping [Definition] Set Associative Mapping is a cache mapping scheme that consolidates the associative and direct mapping schemes: –Slots are grouped into sets –Sets are directly mapped –Slots in sets are associatively mapped

29 Set Associative Mapping The link to the right main memory location is also resolved by tagging The MSB part of the address is used as the tag and the set number, and the LSB part is used to address the specific word within the slot
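
A minimal sketch combining the two schemes, with assumed sizes (8 words per block, 16 sets, 4 slots per set): the set number is taken directly from the middle address bits, and the tag is compared only against the slots inside that set:

```c
#include <stdint.h>

#define WORD_BITS 3    /* assumed: 8 words per block */
#define SET_BITS  4    /* assumed: 16 sets           */
#define WAYS      4    /* assumed: 4 slots per set   */

static inline uint32_t set_of(uint32_t a) { return (a >> WORD_BITS) & ((1u << SET_BITS) - 1); }
static inline uint32_t tag_of(uint32_t a) { return a >> (WORD_BITS + SET_BITS); }

/* Set associative: direct-map to a set, then search only that set's slots. */
int find_way(const uint32_t tag[][WAYS], const int valid[][WAYS], uint32_t addr) {
    uint32_t s = set_of(addr);
    for (int w = 0; w < WAYS; w++)
        if (valid[s][w] && tag[s][w] == tag_of(addr))
            return w;          /* hit in way w of set s            */
    return -1;                 /* miss: replace within set s only  */
}
```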

30 Set Associative Mapping

31 Read & Write Schemes

32 Hit Ratios & Effective Access Times
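
A short worked example of the standard relation (the numbers are illustrative, not from the slides): with hit ratio h, cache access time T_cache and main-memory access time T_main, one common form is effective access time = h × T_cache + (1 − h) × T_main. For instance, h = 0.95, T_cache = 10 ns and T_main = 100 ns give 0.95 × 10 + 0.05 × 100 = 14.5 ns.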

33 Virtual Memory

34 Main Memory Size Limit The size of memory ICs has grown enormously over the last few decades This growth in memory size, however, can never catch up with the even bigger explosion of software size in terms of code area and data area – especially with multimedia Hence there is a need to load only the required portion of code and data from the hard disk into memory

35 Overlaying [Definition] Overlaying is a scheme whereby the programmer manually decides which parts of the software are loaded and flushed, and performs the loading and flushing explicitly in the program

36 Virtual Memory [Definition] Virtual Memory is a scheme whereby the parts of software to be loaded from and flushed back to the hard disk are decided and handled automatically and transparently by the operating system Again, this raises the need to map the content of the real main memory onto the virtual content kept on the hard disk

37 Virtual Memory

38 Page Table This is the mapping scheme between real and virtual memory

39 Address Translation
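
A minimal sketch in C of the translation step (the page size and table layout are assumed here, not taken from the slides): the virtual page number indexes the page table; if the entry is resident, its frame number replaces the page number and the offset passes through unchanged, while a non-resident entry is a page fault that the operating system services from the hard disk.

```c
#include <stdint.h>

#define PAGE_BITS 12                         /* assumed: 4 KB pages */
#define PAGE_MASK ((1u << PAGE_BITS) - 1)

typedef struct {
    uint32_t frame;                          /* physical frame number            */
    int      present;                        /* 1 if the page is in main memory  */
} pte_t;

/* Translate a virtual address; returns 0 and sets *phys on success,
   returns -1 on a page fault (the OS would then load the page from disk). */
int translate(const pte_t page_table[], uint32_t vaddr, uint32_t *phys) {
    uint32_t vpn    = vaddr >> PAGE_BITS;    /* virtual page number */
    uint32_t offset = vaddr & PAGE_MASK;     /* offset is unchanged */
    if (!page_table[vpn].present)
        return -1;                           /* page fault          */
    *phys = (page_table[vpn].frame << PAGE_BITS) | offset;
    return 0;
}
```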

40 Segmentation Segmentation allows 2 users to share the same code with different data:

41 Fragmentation After many loads and flushes, main memory may become fragmented – it can then be defragmented:

