
1 Practical, Transparent Operating System Support for Superpages J. Navarro Rice University and Universidad Católica de Chile S. Iyer, P. Druschel, A. Cox Rice University

2 Paper Highlights Presents a general, efficient mechanism to manage pages of different sizes in a VM system – superpages Objective is to address the limitations of existing translation lookaside buffers (TLBs)

3 The translation lookaside buffer (I) Small high-speed memory –Contains a fixed number of page table entries –Content-addressable memory Each entry includes a page number and the corresponding page frame number
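A minimal sketch of the lookup the slide describes, modeling a fully associative TLB as a mapping from virtual page numbers to page frame numbers. The names (`PAGE_SIZE`, `tlb_lookup`) are illustrative, not from the paper:

```python
# Sketch of address translation through a TLB, assuming 4 KB base pages.
# A fully associative TLB is modeled as a dict: any entry can match.
PAGE_SIZE = 4096  # 4 KB base pages

tlb = {}  # virtual page number -> page frame number

def tlb_lookup(vaddr):
    """Translate a virtual address; return (physical address, hit?)."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:
        return tlb[vpn] * PAGE_SIZE + offset, True
    return None, False  # TLB miss: walk the page table, then refill

tlb[5] = 42  # map virtual page 5 to page frame 42
paddr, hit = tlb_lookup(5 * PAGE_SIZE + 100)
```

On a miss the real hardware (or OS, on software-managed TLBs) walks the page table and refills an entry; that path is elided here.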

4 The translation lookaside buffer (II) Usually fully associative –Not always true (see Intel Nehalem) Considerably fewer entries than an L1 cache –Speed considerations

5 Realizations (I) TLB of UltraSPARC III –64-bit addresses Maximum program size is 2^44 bytes, that is, 16 TB –Supported page sizes are 4 KB, 16 KB, 64 KB, 4 MB ("superpages") –External L2 cache had a maximum capacity of 8 MB Do not even attempt to memorize this!

6 Realizations (II) TLB of UltraSPARC III –Dual direct-mapped TLBs 64 entries for code pages 64 entries for data pages –Each entry occupies 64 bits Page number and page frame number Context Valid bit, dirty bit, … Do not even attempt to memorize this!

7 Realizations (III) Intel Nehalem architecture: –Two-level TLB First level: two parts Data TLB has 64 entries for 4K pages or 32 for big pages (2M/4M) Instruction TLB has 128 entries for 4K pages and 7 for big pages Do not even attempt to memorize this!

8 Realizations (IV) Second level: –Unified cache –Can store up to 512 entries –Operates only with 4K pages Do not even attempt to memorize this!

9 The main problem TLB sizes have not grown with the sizes of main memories Define TLB coverage as the amount of main memory that can be accessed without incurring TLB misses –Typically a megabyte or less Relative TLB coverage is the fraction of main memory that can be accessed without incurring TLB misses

10 Back to our examples UltraSPARC III –with 4 KB pages: (64 + 64)×4 KB = 512 KB –with 16 KB pages: (64 + 64)×16 KB = 2 MB Do not even attempt to memorize this!

11 Back to our examples Intel Nehalem –with 4 KB pages: Level 1: –(64 + 128)×4 KB = 768 KB Level 2: –512×4 KB = 2 MB
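The coverage arithmetic of the two slides above is simply entries × page size; a few lines make the numbers reproducible (entry counts and page sizes as quoted on the slides):

```python
# TLB coverage = number of entries x page size, using the slides' figures.
KB = 1024

def coverage(entries, page_size):
    return entries * page_size

# UltraSPARC III, 4 KB pages: 64 code + 64 data entries -> 512 KB
ultrasparc = coverage(64 + 64, 4 * KB)
# Intel Nehalem level 1, 4 KB pages: 64 data + 128 instruction -> 768 KB
nehalem_l1 = coverage(64 + 128, 4 * KB)
# Nehalem level 2: 512 unified entries, 4 KB pages only -> 2 MB
nehalem_l2 = coverage(512, 4 * KB)
```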

12 Evolution of relative TLB coverage

13 Consequences Processes with very large working sets incur too many TLB misses –"Significant performance penalty" Some machines have L2 caches bigger than their TLB coverage –Can have TLB misses for data already in L2 cache

14 Solutions (I) Increase TLB size: –Would increase TLB access time –Would slow down memory accesses Increase page sizes: –Would cause increased memory fragmentation and poor utilization of main memory

15 Solutions (II) Use multiple page sizes: –Keep a relatively small "base" page size Say 4 KB –Let them coexist with much larger page sizes Superpages – Intel Nehalem solution

16 Hardware limitations (I) Superpage sizes must be supported by the hardware: –4 KB, 16 KB, 64 KB, 4 MB for UltraSPARC III –4 KB, 2 MB and 4 MB for Intel Nehalem –Ten possible page sizes from 4 KB to 256 MB for Intel Itanium

17 Hardware limitations (II) Superpages must be contiguous and properly aligned in both the virtual and physical address spaces A single TLB entry covers each superpage –All its base pages must have The same protection attributes The same clean/dirty status –This will cause problems
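The alignment constraint stated above is mechanical: a superpage of a given size is only valid if both its virtual and physical start addresses are multiples of that size. A sketch (the function name is illustrative):

```python
# A superpage must start on a size-aligned boundary in BOTH address spaces.
def superpage_ok(vaddr, paddr, size):
    return vaddr % size == 0 and paddr % size == 0

MB = 1024 * 1024
superpage_ok(4 * MB, 8 * MB, 4 * MB)   # both aligned to 4 MB: valid
superpage_ok(4 * MB, 6 * MB, 4 * MB)   # physical start misaligned: invalid
```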

18 Issues and trade-offs

19 Allocation When we bring a page into main memory, we can –Put it anywhere in RAM Will need to relocate it to a suitable place when we merge it into a superpage –Put it in a location that would let us "grow" a superpage around it: reservation-based allocation Must pick a maximum size for the superpage

20 Fragmentation control The OS must keep contiguous chunks of memory available at all times –OS will break previous reservation commitments if the superpage is unlikely to materialize –Must "treat contiguity as a potentially contended resource"

21 Promotion Once a sufficient number of base pages within a potential superpage have been allocated, the OS may elect to promote them into a superpage. This requires –Updating the PTEs for all base pages in the new superpage –Bringing the missing base pages into main memory

22 Promotion Promotion can be incremental –Progressively larger and larger superpages
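Incremental promotion can be sketched as: on each fault, find the largest supported size whose aligned region around the faulting page is fully populated. This is a minimal sketch assuming UltraSPARC-style sizes; the name `best_promotion` and the set-of-pages representation are illustrative:

```python
# Supported superpage sizes expressed in base pages:
# 4 KB, 16 KB, 64 KB, 4 MB with a 4 KB base page.
PAGE_SIZES = [1, 4, 16, 1024]

def best_promotion(populated, faulting_page):
    """Largest size whose aligned region around the fault is fully present.

    populated: set of resident base-page indices within the reservation.
    """
    best = 1
    for size in PAGE_SIZES[1:]:
        start = (faulting_page // size) * size  # size-aligned region
        if all(p in populated for p in range(start, start + size)):
            best = size  # promote one step further
    return best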

23 Demotion The OS should disband or reduce the size of a superpage whenever portions of it fall into disuse The main problem is that the OS can only track accesses at the level of the whole superpage

24 Eviction Not that different from evicting individual base pages –Must flush out all base pages of any superpage containing dirty pages The OS cannot ascertain which base pages remain clean

25 Related approaches Many OS kernels use superpages Focus here is on application memory

26 Reservations Talluri and Hill: –propose a reservation-based scheme –reservations can be preempted –emphasis is on partial subblocks HP-UX and IRIX –Create superpages at page-fault time –User must specify a preferred per-segment page size

27 Page relocation Relocation-based schemes –Let base pages reside anywhere in main memory –Migrate these pages to a contiguous region of main memory when they find out that superpages are "likely to be beneficial" Disadvantage: cost of copying base pages Advantage: "more robust to fragmentation"

28 Hardware support Two proposals –Having multiple valid bits in each TLB entry Would allow small superpages to contain missing base pages Partial subblocking (Talluri and Hill) –Adding additional level of address translation in memory controller Would "eliminate the contiguity requirement for superpages" (Fang et al.)

29 Design

30 Allocation Use –a reservation-based scheme for superpages Assumes a preferred superpage size for a given range of addresses –a buddy system to manage main memory Think of the scheme used to manage block fragments in Unix FFS
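The buddy system mentioned above can be sketched in a few lines: free blocks are kept in per-order lists, and an allocation splits a larger block in halves ("buddies") until the requested size is reached, which preserves large contiguous runs. This is a minimal sketch of the idea, not the paper's FreeBSD implementation; freeing and buddy coalescing are omitted:

```python
# Minimal binary buddy allocator over physical page frames.
# Block sizes are powers of two, measured in base pages.
free_lists = {}  # order -> list of free block start frames

def init(total_order):
    """Start with one free block of 2**total_order pages at frame 0."""
    free_lists.clear()
    for o in range(total_order + 1):
        free_lists[o] = []
    free_lists[total_order].append(0)

def alloc(order):
    """Allocate a contiguous block of 2**order pages, or None."""
    o = order
    while o in free_lists and not free_lists[o]:
        o += 1  # no block this size: look for a larger one to split
    if o not in free_lists:
        return None
    start = free_lists[o].pop()
    while o > order:  # split down, keeping each upper buddy free
        o -= 1
        free_lists[o].append(start + 2 ** o)
    return start
```

The same splitting discipline is what lets the allocator hand out a size-aligned region for any supported superpage size on demand.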

31 Preferred superpage size (I) For fixed-size memory objects, pick the largest aligned superpage that –Contains the faulting base page –Does not overlap with other superpages or tentative superpages –Does not extend over the boundaries of the object

32 Preferred superpage size (II) For dynamically-sized memory objects, pick the largest aligned superpage that –Contains the faulting base page –Does not overlap with other superpages or tentative superpages –Does not exceed the current size of the object
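The size policy on these two slides reduces to: walk the supported sizes from largest to smallest and take the first whose aligned region around the faulting page stays inside the object. A sketch under that reading (sizes in base pages; the overlap check against other reservations is omitted, and `preferred_size` is an illustrative name):

```python
# Supported sizes in base pages, largest first (UltraSPARC-style example).
SIZES = [1024, 16, 4, 1]

def preferred_size(faulting_page, object_pages):
    """Largest aligned size containing the fault that fits in the object."""
    for size in SIZES:
        start = (faulting_page // size) * size  # aligned region start
        if start + size <= object_pages:
            return size
    return 1
```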

33 Fragmentation control Mostly managed by buddy allocator –Helped by page replacement daemon Modified BSD daemon is made "contiguity-aware"

34 Promotion Use incremental promotion Wait until superpage is fully populated Conservative approach

35 Demotion (I) Incremental demotion –Required when A base page of a superpage is expelled from main memory Protection attributes of some base pages are changed

36 Demotion (II) Speculative demotion – Could be done each time a superpage referenced bit is reset When memory becomes scarce –Let system know which parts of a superpage are still in use

37 Handling dirty superpages (I) Demote a superpage as soon as one of its base pages is modified –Otherwise we would have to flush out the whole superpage when it is expelled from main memory Because there is a single dirty bit per superpage

38 Handling dirty superpages (II) Once a superpage has been modified –The whole superpage is considered dirty We break up the superpage –All other base pages remain clean

39 Multi-list reservation scheme Maintains a separate list for each superpage size supported by the hardware, except the largest Each list contains reserved frames that could still accommodate a superpage of that size –Sorted by the time of their most recent page frame allocation –Oldest entries are preempted first
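The list discipline above can be sketched with one queue per size: each allocation moves a reservation to the back of its list, so the reservation whose last allocation is oldest sits at the front and is preempted first. Names and the use of string tokens for reservations are illustrative:

```python
from collections import deque

# One list per supported size except the largest (sizes in base pages).
reservation_lists = {size: deque() for size in (4, 16, 1024)}

def touch(size, reservation):
    """On a page frame allocation, move the reservation to the back."""
    lst = reservation_lists[size]
    if reservation in lst:
        lst.remove(reservation)
    lst.append(reservation)

def preempt(size):
    """Reclaim the reservation whose most recent allocation is oldest."""
    lst = reservation_lists[size]
    return lst.popleft() if lst else None
```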

40 Example The area above contains 8 page frames reserved for a possible superpage –Three frames are allocated, five are free –Breaking the reservation will free space for A superpage with four base pages, or Two superpages with two base pages each

41 Population maps One per memory object Keep track of allocated pages within each object
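A population map can be sketched as a per-object record of which base pages are resident, queried by the fault handler to decide overlap and promotion eligibility. A set stands in for the paper's more compact tree structure; the class name is illustrative:

```python
# Per-memory-object record of resident base pages.
class PopulationMap:
    def __init__(self):
        self.populated = set()  # indices of resident base pages

    def add(self, page):
        self.populated.add(page)

    def fully_populated(self, start, size):
        """Is every base page in [start, start+size) resident?"""
        return all(p in self.populated for p in range(start, start + size))
```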

42 EVALUATION

43 Benchmarks Thirty-five representative programs running on an Alpha processor –Four page sizes: 8 KB, 64 KB, 512 KB and 4 MB –Fully associative TLB with 128 entries for code and 128 for data –512 MB of RAM –Separate 64 KB code and 64 KB data L1 caches –4 MB unified L2 cache

44 Results (I) Eighteen out of 35 benchmarks showed improvements over 5 percent Ten out of 35 showed improvements over 25 percent A single application showed a degradation of 1.5 percent –Allocator does not distinguish zeroed-out pages from other free pages

45 Results (II) Different applications benefit most from different superpage sizes –Should let system choose among multiple page sizes Contiguity-aware page replacement daemon can maintain enough contiguous regions Huge penalty for not demoting dirty superpages Overheads are small

46 CONCLUSION It works and does not require any changes to existing hardware

