Chapter 8: Memory Management
- Background
- Swapping
- Contiguous Memory Allocation
- Segmentation
- Paging
- Structure of the Page Table
Objectives
- To provide a detailed description of various ways of organizing memory hardware
- To discuss various memory-management techniques, including paging and segmentation
Background
- To be run, a program must be brought from disk into memory and placed within a process
- Input queue – the set of programs on disk that are ready to be brought into memory for execution
- Main memory and registers are the only storage the CPU can access directly
- The memory unit sees only:
  - read requests + the corresponding addresses
  - write requests + the corresponding addresses and data
Recap: Storage Hierarchy
- Storage systems are organized in a hierarchy by:
  - Speed
  - Cost (per byte of storage)
  - Volatility
Recap: Performance of Various Levels of Storage
- Register access takes 1 CPU clock cycle (or less): 0.25 – 0.50 ns (1 nanosecond = 10^-9 sec)
- Main memory access takes many cycles (many times slower than register access), causing the CPU to stall
- How to solve this? Caching
  - A cache sits between main memory and the CPU registers
Base and Limit Registers
- Protection of memory is required to ensure correct operation
  - Protect the OS from processes
  - Protect processes from other processes
- How to implement memory protection? One simple solution uses basic hardware:
  - A pair of base and limit registers defines the logical address space
  - The CPU must check every memory access generated in user mode to be sure it lies between base and base + limit for that user (see the sketch below)
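A minimal C sketch of the check the hardware performs on every user-mode access. The register values mirror the textbook's illustration, and the trap function is hypothetical: real hardware raises an exception to the kernel rather than calling into C.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative register values -- in real hardware these are
   privileged registers loaded by the OS, not C variables. */
static uint32_t base_reg  = 300040;  /* first legal physical address */
static uint32_t limit_reg = 120900;  /* size of the process's space  */

/* Hypothetical trap: stands in for a hardware exception. */
static void trap_addressing_error(uint32_t addr) {
    fprintf(stderr, "trap: illegal access to %u\n", addr);
    exit(EXIT_FAILURE);
}

/* The check performed on every user-mode memory access. */
uint32_t check_access(uint32_t addr) {
    if (addr < base_reg || addr >= base_reg + limit_reg)
        trap_addressing_error(addr);   /* outside [base, base+limit) */
    return addr;                       /* legal: forward to memory   */
}

int main(void) {
    printf("%u ok\n", check_access(300040 + 42)); /* legal access */
    check_access(100);                            /* traps        */
}
```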
Address Binding
- In most cases, a user program goes through several steps before being executed:
  - Compilation, linking, creation of an executable file; the loader then creates a process
  - Some of these steps may be optional
- Addresses are represented in different ways at different stages of a program's life
  - Each binding maps one address space to another
- Source code – addresses are usually symbolic (e.g., the variable count)
- A compiler typically binds these symbolic addresses to relocatable addresses (e.g., "14 bytes from the beginning of this module")
- The linker or loader will bind relocatable addresses to absolute addresses (e.g., 74014)
Binding of Instructions and Data to Memory
- Address binding of instructions and data to memory addresses can happen at 3 different stages:
- Compile time:
  - If you know at compile time where the process will reside in memory, absolute code can be generated
  - Must recompile the code if the starting location changes
  - Example: MS-DOS .com programs
- Load time:
  - The compiler must generate relocatable code if the memory location is not known at compile time
  - If the starting address changes, we need only reload the user code to incorporate the changed value
- Execution time:
  - If the process can be moved during its execution from one memory segment to another, binding must be delayed until run time
  - Special hardware must be available for this scheme to work
  - Most general-purpose operating systems use this method
Logical vs. Physical Address Space
- The concept of a logical address space that is bound to a separate physical address space is central to proper memory management
- Logical address (= virtual address) – generated by the CPU
  - This is what a process sees and uses
- Physical address – the address seen by the memory unit
- Virtual addresses are mapped to physical addresses by the system
- Logical address space – the set of all logical addresses generated by a program
- Physical address space – the set of all physical addresses corresponding to those logical addresses

Logical vs. Physical Address Space (Cont.)
- Logical and physical addresses are:
  - The same for compile-time and load-time address-binding schemes
  - Different for the execution-time address-binding scheme
Memory-Management Unit (MMU)
- Hardware device that (at run time) maps virtual addresses to physical addresses
- Many methods for such a mapping are possible; some are considered next
- To start, consider a simple scheme:
  - The value in the relocation register is added to every address generated by a user process at the time it is sent to memory
  - The base register is now called the relocation register
  - MS-DOS on the Intel 80x86 used 4 relocation registers
- The user program deals with logical addresses; it never sees the real physical addresses
  - Execution-time binding occurs when a reference is made to a location in memory
  - The logical address is bound to a physical address (a sketch follows)
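A hedged sketch of the relocation-register scheme. The value 14000 and the logical address 346 follow the classic textbook example; the function name is hypothetical:

```c
#include <stdint.h>
#include <stdio.h>

static uint32_t relocation_reg = 14000;  /* loaded by the OS at dispatch */

/* Every logical address the process emits is translated on its way
   to memory by adding the relocation register. */
uint32_t mmu_translate(uint32_t logical) {
    return logical + relocation_reg;
}

int main(void) {
    /* logical address 346 -> physical address 14346 */
    printf("physical = %u\n", mmu_translate(346));
}
```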
Dynamic Loading
- In our discussion so far, the entire program and all data of a process have had to be in physical memory for the process to execute
- Dynamic loading – a routine is not loaded (from disk) until it is called
- Better memory-space utilization; an unused routine is never loaded
- All routines are kept on disk in relocatable load format
- Useful when large amounts of code are needed to handle infrequently occurring cases
- No special support from the operating system is required; implemented through program design
  - The OS can help by providing libraries that implement dynamic loading (see the example below)
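As one concrete illustration, POSIX systems expose dynamic loading to programs through the dlopen/dlsym library calls. A minimal sketch; the library name "libm.so.6" assumes a Linux system:

```c
/* Compile with: cc demo.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load the math library only when this code path runs. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine by name inside the loaded library. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}
```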
Dynamic Linking
- Some OSes support only static linking
  - Static linking – system libraries and program code are combined by the loader into the binary program image
- Dynamic linking – linking is postponed until execution time
  - Similar to dynamic loading, but linking, rather than loading, is postponed
- Usually used with system libraries, such as language subroutine libraries
  - Without this, each program must include a copy of its language library (or at least the routines referenced by the program) in the executable image
  - This wastes both disk space and main memory
Dynamic Linking (cont. 1)
- Dynamically linked libraries are system libraries that are linked to user programs when the programs are run
- With dynamic linking, a stub is included in the image for each library-routine reference
- The stub is a small piece of code that indicates:
  - how to locate the appropriate memory-resident library routine, or
  - how to load the library if the routine is not already present
- The stub replaces itself with the address of the routine, and executes the routine
  - Thus, the next time that particular code segment is reached, the library routine is executed directly, incurring no cost for dynamic linking
- Under this scheme, all processes that use a language library execute only 1 copy of the library code
Dynamic Linking (cont. 2)
- Dynamic linking is particularly useful for libraries
  - Such libraries are also known as shared libraries
- Extensions handle library updates (such as bug fixes):
  - A library may be replaced by a new version, and all programs that reference the library will automatically use the new version
  - No relinking is necessary
- Versioning may be needed in case the new library is incompatible with old ones
  - More than one version of a library may be loaded into memory
  - Each program uses its version information to decide which copy of the library to use
  - Versions with minor changes retain the same version number, whereas versions with major changes increment the number
Dynamic Linking (cont. 3)
- Unlike dynamic loading, dynamic linking and shared libraries generally require help from the OS
- If the processes in memory are protected from one another, then the OS is the only entity that can:
  - check whether the needed routine is in another process's memory space, or
  - allow multiple processes to access the same memory addresses
- We elaborate on this concept when we discuss paging
Swapping
- A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution
  - The total memory required by all processes can then exceed physical memory
  - This increases the degree of multiprogramming in a system
- Backing store – a fast disk, large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images
- The system maintains a ready queue of ready-to-run processes whose memory images are on disk or in memory
- Roll out, roll in – a swapping variant used with priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed
- The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped
Swapping (Cont.)
- Modified versions of swapping are found on many systems, e.g., UNIX, Linux, and Windows
  - Swapping is normally disabled
  - It is started if the amount of allocated memory exceeds a threshold
  - It is disabled again once memory demand drops below the threshold
Context Switch Time Including Swapping
- If the next process to be put on the CPU is not in memory, we need to swap out a process and swap in the target process
- Context-switch time can then be very high
  - Example: a 100 MB process swapping to a hard disk with a transfer rate of 50 MB/sec
  - Swap-out time of 2000 ms
  - Plus swap-in of a same-sized process
  - Total context-switch swapping component: 4000 ms (4 seconds)
- Can be reduced by reducing the amount of memory swapped – by knowing how much memory is really being used
  - System calls inform the OS of memory use, e.g., via request_memory() and release_memory()
Context Switch Time and Swapping (Cont.)
- There are other constraints on swapping as well
- Pending I/O – can't swap out, as the I/O would occur to the wrong process
  - Alternatively, always transfer I/O to kernel space, then to the I/O device
  - Known as double buffering; adds overhead
- Standard swapping is not used in modern operating systems
  - But a modified version is common: swap only when free memory is extremely low
Swapping on Mobile Systems
- Swapping is not typically supported
  - Mobile systems are flash-memory based:
    - Small amount of space
    - Limited number of write cycles
    - Poor throughput between flash memory and CPU on mobile platforms
- Instead, other methods free memory when it is low:
  - iOS asks apps to voluntarily relinquish allocated memory
    - Read-only data is thrown out and reloaded from flash if needed
    - Failure to free can result in termination
  - Android terminates apps if free memory is low, but first writes application state to flash for fast restart
- Both OSes support paging, as discussed below
Contiguous Allocation
- Main memory must support both the OS and user processes
  - It is a limited resource and must be allocated efficiently
- How? Many methods exist; contiguous memory allocation is one early method
  - Each process is contained in a single section of memory that is contiguous to the section containing the next process
- Main memory is usually divided into 2 partitions:
  - The resident operating system, usually held in low memory
  - User processes, usually held in high memory
  - Each process is contained in a single contiguous section of memory
Contiguous Allocation: Memory Protection
- Relocation and limit registers are used to protect user processes from each other, and to keep them from changing operating-system code and data
- The relocation register contains the value of the smallest physical address for the process
- The limit register contains the range of logical addresses for the process
  - Each logical address must be less than the limit register
- The MMU maps logical addresses dynamically
Hardware Support for Relocation and Limit Registers
- (figure: each logical address is checked against the limit register, then added to the relocation register to form the physical address; an out-of-range address traps to the OS)
Contiguous Allocation: Memory Allocation
- Multiple-partition allocation
  - One of the simplest methods
  - Originally used by the IBM OS/360 operating system (where it was called MFT); no longer in use
- Divide memory into several fixed-sized partitions
  - Each partition may contain exactly 1 process
  - Thus, the degree of multiprogramming is limited by the number of partitions
- When a partition is free, a process is selected from the input queue and loaded into it
- When the process terminates, its partition becomes free
Contiguous Allocation: Memory Allocation (Cont.)
- Variable-partition scheme
  - A generalization of the previous method
  - Idea: use variable partition sizes for efficiency, sized to a given process's needs
- Initially, all memory is available for user processes and is considered one large block of available memory (a hole)
- Hole – a block of available memory; holes of various sizes are scattered throughout memory
- The operating system maintains information about:
  - allocated partitions
  - free partitions (holes)
- When a process arrives, it is allocated memory from a hole large enough to accommodate it
- A process exiting frees its partition; adjacent free partitions are combined
Dynamic Storage-Allocation Problem
- How to satisfy a request of size n from a list of free holes?
- First-fit: allocate the first hole that is big enough
- Best-fit: allocate the smallest hole that is big enough
  - Must search the entire list, unless it is ordered by size
  - Produces the smallest leftover hole
- Worst-fit: allocate the largest hole
  - Must also search the entire list
  - Produces the largest leftover hole
- First-fit and best-fit are better than worst-fit in terms of speed and storage utilization (a first-fit sketch follows)
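A minimal sketch of first-fit over a singly linked free list. The struct layout and return convention are illustrative assumptions, not a real allocator's interface:

```c
#include <stddef.h>

/* A node in the free list; hypothetical layout for illustration. */
struct hole {
    size_t start;        /* first byte of the hole     */
    size_t size;         /* size of the hole in bytes  */
    struct hole *next;   /* next hole in the free list */
};

/* First-fit: scan the list and carve the request out of the first
   hole that is big enough.  Returns the start address, or (size_t)-1
   if no single hole can satisfy the request. */
size_t first_fit_alloc(struct hole *free_list, size_t n) {
    for (struct hole *h = free_list; h != NULL; h = h->next) {
        if (h->size >= n) {
            size_t addr = h->start;
            h->start += n;   /* shrink the hole from the front         */
            h->size  -= n;   /* (an empty hole should then be unlinked) */
            return addr;
        }
    }
    return (size_t)-1;       /* may fail even if total free space >= n:
                                that is external fragmentation          */
}
```

Best-fit and worst-fit differ only in that they scan the whole list and remember the smallest (or largest) hole that still fits before carving.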
Fragmentation
- Memory allocation can cause fragmentation problems:
  - External fragmentation
  - Internal fragmentation
External Fragmentation
- First-fit and best-fit strategies suffer from external fragmentation
- As processes are loaded and removed from memory, the free memory space is broken into little pieces
- External fragmentation – enough total memory space exists to satisfy a request, but it is not contiguous
  - If all these small pieces of memory were in one big free block instead, we might be able to run several more processes
- Analysis of the first-fit strategy reveals that, given N allocated blocks, another 0.5 N blocks will be lost to fragmentation
  - That is, 1/3 of memory may be unusable (0.5 N lost out of 1.5 N total)
  - This is known as the 50-percent rule
Fragmentation (Cont.)
- Some of the solutions to external fragmentation:
  - Compaction
  - Segmentation
  - Paging
Fragmentation: Compaction
- Shuffle memory contents to place all free memory together in 1 large block
- Compaction is possible only if relocation (i.e., address binding) is dynamic and done at execution time
- If addresses are relocated dynamically, relocation requires only:
  - moving the program and data, and then
  - changing the base register to reflect the new base address
- I/O can cause problems:
  - Latch the job in memory while it is involved in I/O, or
  - Do I/O only into OS buffers
- Relocation is the process of assigning load addresses to the various parts of a program and adjusting the code and data in the program to reflect the assigned addresses
Internal Fragmentation Example
- A hole of 10,002 bytes; a process requests 10,000 bytes
- After allocation: a hole of 2 bytes
- Problem: the overhead to keep track of this 2-byte hole will be much larger than the hole itself
- Solution:
  - Break physical memory into fixed-sized blocks
  - Allocate memory in units based on the fixed block size
- An issue with this approach is internal fragmentation, explained next
Internal Fragmentation
- Solution: allocate memory in units based on a fixed block size
- Issue: the memory allocated to a process may be slightly larger than the requested memory
  - Unused memory that is internal to a partition is internal fragmentation
- Example (worked in the sketch below):
  - Block size is 4K
  - A process requests 16K + 2 bytes of space
  - 5 blocks of size 4K will be allocated
  - (4K − 2) bytes are wasted in the last block due to internal fragmentation
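The arithmetic of the example, as a small C program; the block size and request come straight from the slide:

```c
#include <stddef.h>
#include <stdio.h>

#define BLOCK_SIZE (4 * 1024)   /* 4K blocks, as in the example */

int main(void) {
    size_t request = 16 * 1024 + 2;   /* 16K + 2 bytes */

    /* Round the request up to a whole number of blocks. */
    size_t blocks  = (request + BLOCK_SIZE - 1) / BLOCK_SIZE;  /* = 5      */
    size_t granted = blocks * BLOCK_SIZE;                      /* = 20K    */
    size_t wasted  = granted - request;                        /* = 4K - 2 */

    printf("blocks=%zu granted=%zu wasted=%zu\n", blocks, granted, wasted);
}
```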
User’s / Programmer’s View of a Program
- Segmentation: a memory-management scheme
- Motivation:
  - Most programmers do not think of memory as a linear array of bytes containing instructions and data
  - Rather, they view it as a collection of variable-sized logical segments, with no necessary ordering among the segments
  - A programmer talks about "the stack" or "the math library" without caring what addresses in memory they occupy
- What if the hardware could provide a memory mechanism that mapped the programmer's view to the actual physical memory?
Segmentation
- Solution: segmentation – a memory-management scheme that supports the programmer/user view of memory
- A logical address space is a collection of segments
- A segment is a logical unit, such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays
- Each segment has a name and a length
- Addresses specify:
  - the segment name
  - the offset within the segment
- For simplicity of implementation, segments are numbered and referred to by a segment number rather than by a segment name
Logical View of Segmentation
- (figure: four numbered segments in the user's logical space mapped to scattered, non-contiguous regions of physical memory)
Segmentation Architecture
- A logical address consists of a two-tuple: <segment-number, offset>
- How do we map this 2D user-defined address into a 1D physical address?
- Segment table – each table entry has:
  - base – the starting physical address where the segment resides in memory
  - limit – the length of the segment
- Segment-table base register (STBR) – points to the segment table's location in memory
- Segment-table length register (STLR) – indicates the number of segments used by a program
  - Segment number s is legal if s < STLR (a translation sketch follows)
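A hedged sketch of segment translation, including the STLR and limit checks. The table values follow the textbook's example (segment 1 at base 6300 with limit 400); the trap behavior is simplified to an exit:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical segment-table entry. */
struct seg_entry {
    uint32_t base;   /* starting physical address of the segment */
    uint32_t limit;  /* length of the segment                    */
};

/* Translate <s, d> to a physical address, as the hardware would. */
uint32_t seg_translate(const struct seg_entry *table, uint32_t stlr,
                       uint32_t s, uint32_t d) {
    if (s >= stlr) {                 /* segment number out of range */
        fprintf(stderr, "trap: illegal segment %u\n", s);
        exit(EXIT_FAILURE);
    }
    if (d >= table[s].limit) {       /* offset past the segment end */
        fprintf(stderr, "trap: offset %u beyond limit\n", d);
        exit(EXIT_FAILURE);
    }
    return table[s].base + d;
}

int main(void) {
    struct seg_entry table[] = { {1400, 1000}, {6300, 400} };
    /* <segment 1, offset 53> -> physical 6300 + 53 = 6353 */
    printf("%u\n", seg_translate(table, 2, 1, 53));
}
```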
Segmentation Architecture (Cont.)
- Protection: with each entry in the segment table, associate:
  - a validation bit (= 0 ⇒ illegal segment)
  - read/write/execute privileges
- Protection bits are associated with segments; code sharing occurs at the segment level
- Since segments vary in length, memory allocation is a dynamic storage-allocation problem
- A segmentation example is shown in the following diagram
Paging
- Segmentation
  - Pros: permits the physical address space of a process to be noncontiguous
  - Cons: can suffer from external fragmentation and needs compaction
- Any alternatives? Paging – another memory-management scheme
- Benefits of paging:
  - The physical address space of a process can be noncontiguous
  - A process is allocated physical memory whenever the latter is available
  - Avoids external fragmentation
  - Avoids the problem of varying-sized memory chunks
Main Idea of Paging
- Divide physical memory into fixed-sized blocks called frames
  - Size is a power of 2, typically between 512 bytes and 16 MB
- Divide logical memory into blocks of the same size called pages
- Keep track of all free frames
- To run a program of size N pages, find N free frames and load the program
- Set up a page table to translate logical to physical addresses
  - One table per process
- Internal fragmentation can still occur
Address Translation Scheme
- The address generated by the CPU is divided into:
  - Page number (p) – used as an index into a page table, which contains the base address of each page in physical memory
  - Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit
- For a given logical address space of size 2^m and page size 2^n:
  - page number p = the high-order m − n bits
  - page offset d = the low-order n bits
  (a bit-manipulation sketch follows)
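A minimal sketch of the p/d split in C, assuming a 4 KB page size (n = 12); the example address and frame number are arbitrary illustrations:

```c
#include <stdint.h>
#include <stdio.h>

#define N 12                     /* page size = 2^12 = 4 KB (assumed) */

int main(void) {
    uint32_t logical = 0x00005A42;            /* example logical address */

    uint32_t p = logical >> N;                /* page number: high bits  */
    uint32_t d = logical & ((1u << N) - 1);   /* offset: low n bits      */

    /* The frame f would come from page_table[p]; assume f = 7 here. */
    uint32_t f = 7;
    uint32_t physical = (f << N) | d;         /* frame base + offset     */

    printf("p=%u d=%u physical=0x%08x\n", p, d, physical);
}
```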
Implementation of Page Table
- The page table is kept in main memory
- Page-table base register (PTBR)
  - Stores the physical address of the page table
  - Changing page tables requires changing only this one register, substantially reducing context-switch time
- Page-table length register (PTLR)
  - Indicates the size of the page table
The 2-Memory-Access Problem
- In this scheme, every data/instruction access requires 2 memory accesses:
  - One for the page table (to get the frame number)
  - One for the data / instruction
- Efficiency is lost

The 2-Memory-Access Problem: Solution
- Solution: use a special fast-lookup hardware cache, called a TLB:
  - associative memory, or
  - translation look-aside buffer (TLB)
- The TLB caches (p, f) tuples for frequently used pages
  - That is, the mapping from p to the corresponding frame f
- Small and very fast
Associative Memory
- Associative memory – parallel search
- Address translation for (p, d):
  - If p is in the TLB, get the corresponding frame f out
    - The hardware searches all entries in parallel, at the same time – very fast
  - Else, get the frame f from the page table in memory (see the sketch below)
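A software sketch of the hit/miss logic; the loop stands in for what the hardware does in parallel, and the sizes and replacement choice are illustrative assumptions:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define TLB_SIZE 64                  /* a typical small TLB (assumed) */

struct tlb_entry { uint32_t p, f; bool valid; };
static struct tlb_entry tlb[TLB_SIZE];
static uint32_t page_table[1024];    /* in-memory table: the extra access */

/* Translate page p to frame f: try the TLB first, fall back to the
   in-memory page table on a miss and cache the result. */
uint32_t lookup_frame(uint32_t p) {
    for (int i = 0; i < TLB_SIZE; i++)        /* hardware: parallel   */
        if (tlb[i].valid && tlb[i].p == p)
            return tlb[i].f;                  /* TLB hit              */

    uint32_t f = page_table[p];               /* miss: extra access   */
    tlb[p % TLB_SIZE] = (struct tlb_entry){p, f, true}; /* naive fill */
    return f;
}

int main(void) {
    page_table[5] = 42;
    printf("frame=%u (miss)\n", lookup_frame(5));
    printf("frame=%u (hit)\n",  lookup_frame(5));
}
```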
Implementation of Page Table (Cont.)
- Some TLBs store address-space identifiers (ASIDs) in each TLB entry
  - An ASID uniquely identifies each process, providing address-space protection for that process
  - Otherwise, the TLB must be flushed at every context switch
- TLBs are typically small (64 to 1,024 entries)
- On a TLB miss, the translation is loaded into the TLB for faster access next time
  - Replacement policies must be considered
  - Some entries can be wired down for permanent fast access
- (TLB entry fields: PID | Page# | Frame#)
Effective Access Time
- Associative lookup takes ε time units
  - Can be < 10% of memory access time
- Hit ratio α = percentage of times that a page number is found in the TLB
  - The hit ratio is related to the number of associative registers in the TLB
- Effective Access Time (EAT), with ε measured as a fraction of the memory access time (not in time units):
  - EAT = (1 + ε)α + (2 + ε)(1 − α) = 2 + ε − α
  - Hit: 1 memory access + TLB access ⇒ (1 + ε)
  - Miss: 2 memory accesses + TLB access ⇒ (2 + ε)
- Example: α = 80%, ε = 20 ns for TLB search, 100 ns per memory access
  - EAT = 0.80 × 120 + 0.20 × 220 = 140 ns   (120 = 100 + 20, 220 = 2 × 100 + 20)
- Consider a more realistic hit ratio, α = 99%:
  - EAT = 0.99 × 120 + 0.01 × 220 = 121 ns
  (a small calculation sketch follows)
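The same calculation as a tiny C helper, reproducing both numbers from the slide:

```c
#include <stdio.h>

/* Effective access time in ns, given TLB hit ratio alpha,
   TLB search time eps_ns, and memory access time mem_ns. */
static double eat_ns(double alpha, double eps_ns, double mem_ns) {
    double hit  = mem_ns + eps_ns;        /* 1 memory access + TLB   */
    double miss = 2 * mem_ns + eps_ns;    /* 2 memory accesses + TLB */
    return alpha * hit + (1.0 - alpha) * miss;
}

int main(void) {
    printf("alpha=0.80: %.0f ns\n", eat_ns(0.80, 20, 100));  /* 140 */
    printf("alpha=0.99: %.0f ns\n", eat_ns(0.99, 20, 100));  /* 121 */
}
```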
Memory Protection: Typical Page Table Entry
- (figure: a page-table entry holding the frame number plus a valid bit and miscellaneous protection bits)
Memory Protection
- Memory protection in a paged environment is accomplished by protection bits associated with each frame
  - For example, a protection bit indicates whether read-only or read-write access is allowed
  - More bits can be added to indicate execute-only access, and so on
- Normally, these bits are kept in the page table
- A valid-invalid bit is attached to each entry in the page table:
  - "valid" indicates that the associated page is in the process's logical address space, and is thus a legal page
  - "invalid" indicates that the page is not in the process's logical address space
- Some systems have a page-table length register (PTLR), which can also be used to check whether an address is valid
- Any violation results in a trap to the kernel
Shared Pages
- An advantage of paging is the possibility of sharing data and common code
- Shared code
  - Important in a time-sharing environment
  - Example: a system that supports 40 users, each executing a text editor
  - A single copy of read-only (reentrant) code is shared among processes, e.g., text editors, compilers, window systems
  - Reentrant code is non-self-modifying code: it never changes during execution
  - This is similar to multiple threads sharing the same process space
  - Each process has its own copy of registers and data storage to hold the data for its execution
  - The data for two different processes will, of course, be different
- Shared data
  - Some OSes implement shared memory using shared pages
Structure of the Page Table
- Memory structures for paging can get huge using straightforward methods
- Consider a 32-bit logical address space, as on modern computers
  - Page size of 1 KB (2^10)
  - The page table would have 4 million entries (2^32 / 2^10 = 2^22)
  - Problem: if each entry is 4 bytes, that is 16 MB of physical memory for the page table alone
    - This is per process
    - That amount of memory used to cost a lot
    - We do not want to allocate it contiguously in main memory
- Solution: one simple solution is to divide the page table into smaller pieces
- This division can be accomplished in several ways, e.g.:
  - Hierarchical paging
  - Hashed page tables
  - Inverted page tables
Hierarchical Page Tables
- Break up the logical address space into multiple page tables
- A simple technique is a two-level page table
- The following several slides are derived from the OS book by A. Tanenbaum
Two-Level Paging Example
- A 32-bit virtual address is partitioned, for example, as:
  - a 10-bit PT1 field
  - a 10-bit PT2 field
  - a 12-bit offset
- A 12-bit offset ⇒ pages are 4K (= 2^12), and there are 2^20 of them
- The secret of the multilevel page-table method is to avoid keeping all the page tables in memory all the time
  - In particular, those that are not needed should not be kept around
- Suppose a process needs 12 MB:
  - the bottom 4 MB of memory for program text,
  - the next 4 MB for data, and
  - the top 4 MB for the stack
- Hence there is a gigantic unused hole between the top of the data and the bottom of the stack
Two-Level Paging
- The top-level page table has 1024 entries, corresponding to the 10-bit PT1 field
- The MMU uses the value of the PT1 field from the virtual address as an index into the top-level page table
- Each of the 1024 entries in the top-level page table represents 4 MB:
  - the 4 GB (i.e., 32-bit) virtual address space has been chopped into 1024 chunks
  - 4 GB / 1024 = 4 MB
Two-Level Paging (Cont.)
- The entry located by indexing into the top-level page table yields the address, or page frame number, of a second-level page table
- In the top-level page table:
  - Entry 0 points to the page table for the program text
  - Entry 1 points to the page table for the data
  - Entry 1023 points to the page table for the stack
  - The other (shaded) entries are not used
    - No page tables need be generated for them – saving lots of space!
- The PT2 field is then used as an index into the selected second-level page table to find the page frame number of the page itself
Two-Level Paging: Example
- Example: consider the 32-bit virtual address 0x00403004 (4,206,596 decimal), which is 12,292 bytes into the data (4,206,596 − 4,194,304 = 12,292)
- It corresponds to PT1 = 1, PT2 = 3, and Offset = 4
- The MMU first uses PT1 to index into the top-level page table
  - It obtains entry 1, which corresponds to addresses 4M to 8M − 1
  - It finds the corresponding second-level page table for entry 1
- It then uses PT2 to index into the second-level page table just found
  - It extracts entry 3, which corresponds to addresses 12,288 (= 3 × 4K) to 16,383 within its 4M chunk (i.e., absolute addresses 4,206,592 to 4,210,687)
- This entry contains the page frame number of the page containing virtual address 0x00403004
  - If that page is not in memory, the valid bit is 0, causing a page fault
  - If the page is present in memory, the page frame number taken from the second-level page table is combined with the offset (= 4) to construct the physical address, which is put on the bus and sent to memory
  (the sketch below reproduces this field split)
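A sketch of the 10/10/12 field split for the example address; the hardware table walk at the end is shown only as comments, since the tables themselves are not populated here:

```c
#include <stdint.h>
#include <stdio.h>

/* Field extractors for the assumed 10/10/12 split. */
#define PT1(va)    ((va) >> 22)              /* top 10 bits    */
#define PT2(va)    (((va) >> 12) & 0x3FF)    /* middle 10 bits */
#define OFFSET(va) ((va) & 0xFFF)            /* low 12 bits    */

int main(void) {
    uint32_t va = 0x00403004;   /* the example address (4,206,596) */

    printf("PT1 = %u\n", PT1(va));      /* 1: index top-level table    */
    printf("PT2 = %u\n", PT2(va));      /* 3: index second-level table */
    printf("off = %u\n", OFFSET(va));   /* 4: byte within the page     */

    /* In hardware:
       second = top_table[PT1(va)];          (find second-level table)
       frame  = second[PT2(va)];             (find the page frame)
       pa     = (frame << 12) | OFFSET(va);  (physical address)        */
}
```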
Two-Level Paging (Cont.)
- Notice that although the address space contains over a million pages, only 4 page tables are actually needed:
  - the top-level table,
  - the second-level table for 0 to 4M (the program text),
  - the second-level table for 4M to 8M (the data), and
  - the second-level table for the top 4M (the stack)
- The valid bits in the remaining 1021 entries of the top-level page table are 0, forcing a page fault if they are ever accessed
- The two-level page-table system can be expanded to 3, 4, or more levels
Two-Level Paging Example
- A logical address (on a 32-bit machine with a 1K page size) is divided into:
  - a page number consisting of 22 bits
  - a page offset consisting of 10 bits
- Since the page table is paged, the page number is further divided into:
  - a 12-bit outer page number (p1)
  - a 10-bit inner page number (p2)
- Thus, a logical address is as follows:
  | p1 (12 bits) | p2 (10 bits) | d (10 bits) |
  where p1 is an index into the outer page table, and p2 is the displacement within the page of the inner page table
- Because address translation works from the outer page table inward, this scheme is also known as a forward-mapped page table
Address-Translation Scheme
- By partitioning the page table in this manner, the operating system can leave partitions unused until a process needs them
- Entire sections of virtual address space are frequently unused, and multilevel page tables have no entries for these spaces, greatly decreasing the amount of memory needed to store virtual-memory data structures
64-bit Logical Address Space
- Even the two-level paging scheme is not sufficient
- If the page size is 4 KB (2^12), the page table has 2^52 entries
- With a two-level scheme, the inner page tables could hold 2^10 four-byte entries
- The address would then look like:
  | outer page (42 bits) | inner page (10 bits) | offset (12 bits) |
- The outer page table has 2^42 entries, or 2^44 bytes
- One solution is to add a 2nd outer page table
- But in the following example, the 2nd outer page table is still 2^34 bytes in size
  - And possibly 4 memory accesses are needed to reach one physical memory location
Hashed Page Tables
- Common in address spaces > 32 bits
- The virtual page number is hashed into a page table
  - This page table contains a chain of elements hashing to the same location
- Each element contains:
  - the virtual page number
  - the value of the mapped page frame
  - a pointer to the next element
- Virtual page numbers are compared in this chain, searching for a match
  - If a match is found, the corresponding physical frame is extracted (see the sketch below)
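A minimal sketch of the chained lookup; the bucket count and the modulo hash are illustrative assumptions, not what any particular hardware uses:

```c
#include <stddef.h>
#include <stdint.h>

#define TABLE_SIZE 1024          /* number of hash buckets (assumed) */

/* One element in a bucket's chain, mirroring the three fields above. */
struct hpt_entry {
    uint64_t vpn;                /* virtual page number       */
    uint64_t frame;              /* mapped physical frame     */
    struct hpt_entry *next;      /* next element in the chain */
};

static struct hpt_entry *buckets[TABLE_SIZE];

/* Hash the VPN into a bucket, then walk the chain for a match.
   Returns the frame, or (uint64_t)-1 if the page is unmapped. */
uint64_t hpt_lookup(uint64_t vpn) {
    struct hpt_entry *e = buckets[vpn % TABLE_SIZE];  /* simple hash */
    for (; e != NULL; e = e->next)
        if (e->vpn == vpn)
            return e->frame;     /* match: extract the frame */
    return (uint64_t)-1;         /* not present: page fault  */
}

int main(void) {
    struct hpt_entry e = { .vpn = 42, .frame = 7, .next = NULL };
    buckets[42 % TABLE_SIZE] = &e;
    return (hpt_lookup(42) == 7) ? 0 : 1;
}
```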
Hashed Page Tables (cont.)
- A variation for 64-bit addresses is clustered page tables
  - Similar to hashed page tables, but each entry refers to several pages (such as 16) rather than 1
  - Especially useful for sparse address spaces, where memory references are non-contiguous and scattered
Inverted Page Table
- Motivation: rather than each process having a page table and keeping track of all possible logical pages, track all physical pages
- One entry for each real page (frame) of memory
  - The entry records which (process, virtual page) pair is located in that page frame
- Pros: tends to save lots of space
- Cons: virtual-to-physical translation becomes much harder
  - When process n references virtual page p, the hardware can no longer find the physical page by using p as an index into the page table
  - Instead, it must search the entire inverted page table for an entry (n, p)
  - Furthermore, this search must be done on every memory reference, not just on page faults
  - Searching a 256K-entry table on every memory reference is slow (see the sketch below)
- Other considerations:
  - A TLB and a hash table (keyed by virtual address) are used to speed up accesses
  - There are issues implementing shared memory when using an inverted page table
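A sketch of the naive linear search, which is exactly why the hash table mentioned above is needed in practice; the frame count and PID values are illustrative:

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_FRAMES 4096     /* one entry per physical frame (assumed) */

/* Which (process, virtual page) currently occupies each frame. */
struct ipt_entry { uint32_t pid; uint32_t vpn; };
static struct ipt_entry ipt[NUM_FRAMES];

/* Linear search: the frame number is the *index* of the matching
   entry -- there is no per-process table to index by vpn. */
int64_t ipt_lookup(uint32_t pid, uint32_t vpn) {
    for (uint32_t f = 0; f < NUM_FRAMES; f++)
        if (ipt[f].pid == pid && ipt[f].vpn == vpn)
            return f;       /* found: frame number is the index */
    return -1;              /* not resident: page fault         */
}

int main(void) {
    ipt[17] = (struct ipt_entry){ .pid = 9, .vpn = 5 };
    printf("frame = %lld\n", (long long)ipt_lookup(9, 5));  /* 17 */
}
```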
Oracle SPARC Solaris
- Consider a modern, 64-bit operating system example with tightly integrated hardware
  - The goals are efficiency and low overhead
- Based on hashing, but more complex
- Two hash tables
  - One for the kernel and one for all user processes
  - Each maps memory addresses from virtual to physical memory
  - Each entry represents a contiguous area of mapped virtual memory
    - More efficient than having a separate hash-table entry for each page
  - Each entry has a base address and a span (indicating the number of pages the entry represents)
Oracle SPARC Solaris (Cont.)
- The TLB holds translation table entries (TTEs) for fast hardware lookups
  - A cache of TTEs resides in a translation storage buffer (TSB)
    - Includes an entry per recently accessed page
- A virtual address reference causes a TLB search
  - On a miss, hardware walks the in-memory TSB looking for the TTE corresponding to the address
    - If a match is found, the CPU copies the TSB entry into the TLB and translation completes
    - If no match is found, the kernel is interrupted to search the hash table
      - The kernel then creates a TTE from the appropriate hash table and stores it in the TSB
      - The interrupt handler returns control to the MMU, which completes the address translation
Example: The Intel 32 and 64-bit Architectures
- Dominant industry chips
- Pentium CPUs are 32-bit; this is called the IA-32 architecture
- Current Intel CPUs are 64-bit, the x86-64 architecture (the name IA-64 properly refers to Intel's separate Itanium line)
- There are many variations in the chips; we cover the main ideas here
Example: The Intel IA-32 Architecture
- Supports both segmentation and segmentation with paging
  - Each segment can be up to 4 GB
  - Up to 16 K segments per process
  - Divided into two partitions:
    - The first partition, of up to 8 K segments, is private to the process (kept in the local descriptor table, LDT)
    - The second partition, of up to 8 K segments, is shared among all processes (kept in the global descriptor table, GDT)
Example: The Intel IA-32 Architecture (Cont.)
- The CPU generates a logical address
  - The selector is given to the segmentation unit, which produces a linear address
  - The linear address is given to the paging unit, which generates the physical address in main memory
  - The segmentation and paging units together form the equivalent of an MMU
  - Page sizes can be 4 KB or 4 MB
Logical to Physical Address Translation in IA-32
- (figure: logical address → segmentation unit → linear address → paging unit → physical address)
Intel IA-32 Page Address Extensions
- 32-bit address limits led Intel to create the page address extension (PAE), allowing 32-bit apps access to more than 4 GB of memory space
- Paging went to a 3-level scheme
  - The top two bits refer to a page directory pointer table
  - Page-directory and page-table entries moved to 64 bits in size
- The net effect is increasing the physical address space to 36 bits – 64 GB of physical memory
Intel x86-64
- The current generation of the Intel x86 architecture
- 64 bits is ginormous (> 16 exabytes)
- In practice, only 48-bit addressing is implemented
  - Page sizes of 4 KB, 2 MB, 1 GB
  - Four levels of paging hierarchy
- Can also use PAE, so virtual addresses are 48 bits and physical addresses are 52 bits
Example: ARM Architecture
- Dominant mobile platform chip (Apple iOS and Google Android devices, for example)
- Modern, energy-efficient, 32-bit CPU
- 4 KB and 16 KB pages
- 1 MB and 16 MB pages (termed sections)
- One-level paging for sections, two-level for smaller pages
- Two levels of TLBs
  - The outer level has two micro-TLBs (one data, one instruction)
  - The inner level is a single main TLB
  - The micro-TLBs are checked first; on a miss, the main TLB is checked; on another miss, a page-table walk is performed by the CPU