Presentation on theme: "Advancement of Buffer Management Research and Development in Computer and Data Systems" by Xiaodong Zhang, The Ohio State University. Presentation transcript:
1 Advancement of Buffer Management Research and Development in Computer and Data Systems
Xiaodong Zhang, The Ohio State University
2 Numbers Everyone Should Know (Jeff Dean, Google)
- L1 cache reference: 0.5 ns
- Branch mispredict: 5 ns
- L2 cache reference: 7 ns
- Mutex lock/unlock: 100 ns
- Main memory reference: 100 ns
- Compress 1K bytes with Zippy: 10,000 ns
- Send 2K bytes over a 1 Gbps network: 20,000 ns
- Read 1 MB sequentially from memory: 250,000 ns
- Round trip within a data center: 500,000 ns
- Disk seek: 10,000,000 ns
- Read 1 MB sequentially from disk: 30,000,000 ns
- Send one packet from CA to Europe and back: 150,000,000 ns
3 Replacement Algorithms in Data Storage Management
A replacement algorithm decides which data entry to evict when the data storage is full.
- Objective: keep data that will be reused; replace data that will not be. This is a critical decision, since a miss means an increasingly long delay.
- Widely used in all memory-capable digital systems: small buffers (cell phones, Web browsers, set-top boxes, ...) and large buffers (virtual memory, I/O buffers, databases, ...).
- A simple concept, but hard to optimize: more than 40 years of tireless algorithmic and system efforts, and LRU-like algorithms/implementations still have serious limitations.
4 Least Recently Used (LRU) Replacement
LRU is the most commonly used replacement policy for data management.
- Blocks are ordered in an LRU stack, from bottom to top; blocks enter at the top (MRU end) and leave from the bottom (LRU end). The stack is long, and the bottom is the only exit.
- Recency is the distance from a block to the top of the LRU stack.
- Upon a hit to a block (e.g., block 2 at recency 2), the block is moved to the top of the stack.
5 Least Recently Used (LRU) Replacement
- Upon a hit: move the block to the stack top.
- Upon a miss (e.g., to block 6): the block at the stack bottom (block 1) is evicted, and block 6 is loaded from disk and placed on the stack top.
A minimal sketch of this behavior follows.
6 LRU is a Classical Problem in Theory and Systems
First LRU paper:
- L. Belady, IBM Systems Journal, 1966
Analysis of LRU algorithms:
- Aho, Denning & Ullman, JACM, 1971
- Rivest, CACM, 1976
- Sleator & Tarjan, CACM, 1985
- Knuth, J. Algorithms, 1985
- Karp et al., J. Algorithms, 1991
Many papers in systems and databases: ASPLOS, ISCA, SIGMETRICS, SIGMOD, VLDB, USENIX, ...
7 The Problem of LRU: Inability to Deal with Certain Access Patterns
- File scanning: one-time accessed data evicts to-be-reused data (cache pollution). This is a common access pattern (50% of the data in NCAR workloads is accessed only once), yet the LRU stack holds such blocks until they reach the bottom.
- Loop-like accesses: a loop over k+1 blocks misses on every access with an LRU stack of size k.
- Accesses with different frequencies (mixed workloads): frequently accessed data can be replaced by infrequently accessed data.
8 Why Flawed LRU is so Powerful in Practice
What is the major flaw?
- The assumption that "recently used will be reused" is not always right.
- The prediction is based on a single simple metric, recency.
- Some blocks are cached too long; some are evicted too early.
Why is it so widely used?
- It works well for data accesses that follow the LRU assumption.
- It needs only a simple data structure to implement.
9 Challenges of Addressing the LRU Problem
Two types of efforts have been made:
- Detect specific access patterns and handle each case by case.
- Learn insights into accesses with complex algorithms.
Most published papers could not be turned into reality.
Two critical goals:
- Fundamentally address the LRU problem.
- Retain LRU's merits: low overhead and its assumption.
The goals are achieved by a set of three papers:
- The LIRS algorithm (SIGMETRICS'02)
- CLOCK-Pro: a system implementation (USENIX'05)
- BP-Wrapper: lock-contention-free assurance (ICDE'09)
10 Outline
- The LIRS algorithm: how the LRU problem is fundamentally addressed, and how a data structure with low complexity is built.
- CLOCK-Pro: turning the LIRS algorithm into system reality.
- BP-Wrapper: freeing lock contention so that LIRS and other algorithms can be implemented without approximation.
- What would we do for multicore processors?
- Research impact in daily computing operations.
11 Recency vs. Reuse Distance
- Recency: the number of distinct blocks accessed since the last reference to a block, i.e., its distance from the top of the LRU stack.
- Reuse distance (inter-reference recency): the number of distinct blocks accessed between two consecutive references to the block; this is deeper and more useful information.
LRU uses recency to quantify locality, but a block's recency keeps changing over time. If a block is regularly re-accessed after every 4 other distinct blocks, its locality strength is stable, yet its recency constantly varies from 0 to 4; a recency value sampled at a random time therefore says little about the block's locality strength. We argue that only the recency at which the block is actually re-accessed should be used to quantify locality, and we define that value as the IRR.
12 Recency vs. Reuse Distance
- Inter-reference recency (IRR): the number of other unique blocks accessed between two consecutive references to a block.
- IRR equals the recency the block had at the moment it was last re-accessed; measuring it needs an extra stack, which increases complexity.
A small helper below computes the IRRs of an access stream.
13 Diverse Locality Patterns on an Access Map
The access map of the multi2 trace, collected on a machine running multiple programs together, plots each block reference as a point: the x-axis is virtual time (one unit per block reference) and the y-axis is the logical block number. The references exhibit diverse access patterns: some blocks are intensively and repeatedly accessed (strong locality), some are regularly re-accessed in loop-like patterns, and some are accessed only once.
14 What Blocks Does LRU Cache (Measured by IRR)?
The IRR map of the multi2 trace plots the IRR (reuse distance in blocks) of each reference over virtual time; the lower a point, the stronger the locality of that reference. LRU's behavior is easy to read off this map: with a cache of 1,000 blocks, a reference is a hit if and only if it falls below the cache-size line. LRU therefore:
- holds frequently accessed blocks with "absolutely" strong locality;
- also holds one-time accessed blocks (zero locality);
- is likely to replace other blocks with relatively strong locality.
For a workload with weak locality, which has few references in the low area, LRU has a low hit ratio; in the extreme case where all references lie above the line, the LRU hit ratio is zero.
15 LIRS: Only Cache Blocks with Low Reuse Distances
Because LRU does not quantify locality strength correctly, it can cache many weak-locality blocks and leave the cache under-utilized. Using IRR as the measure, the challenge becomes: can we dynamically maintain a set of strong-locality blocks whose size equals the cache size, and cache exactly those blocks? Imagine a curve on the IRR map covering the strongest-locality references, where the number of distinct blocks below the curve equals the cache size. The curve adapts to the current access pattern: when locality weakens and more points move to the upper area, the curve climbs up; when locality strengthens, it slips down. At any time the 1,000 strongest-locality blocks are selected for caching, keeping the cache fully utilized. Since every reference covered by the LRU line is also covered by this curve, an algorithm caching the blocks under the curve has a hit ratio at least as high as LRU's, and it can be much higher for weak-locality references. The LIRS algorithm we designed realizes this idea.
16 Basic Ideas of LIRS (SIGMETRICS'02)
LIRS: Low Inter-reference Recency Set.
- Low-IRR (LIR) blocks are kept in the buffer cache; high-IRR (HIR) blocks are the candidates for replacement.
- Two stacks are maintained: a large LRU stack contains the low-IRR resident blocks, and a small LRU stack contains the high-IRR blocks. The large stack also records resident and non-resident high-IRR blocks.
- IRRs are measured by the two stacks. Upon a hit to a resident high-IRR block in the small stack: if the block can also be found in the large stack, its new IRR is low, so it becomes a low-IRR block and moves to the large stack; otherwise, it is moved to the top locally. When the large stack is full, the low-IRR block at its bottom becomes a high-IRR block and goes to the small stack.
17 Low Complexity of LIRS
- Both recencies and IRRs are recorded implicitly by the two stacks.
- The block at the bottom of the LIRS stack has the maximum recency.
- A block has low IRR if it can be found in both stacks; no explicit comparisons or measurements are needed.
- Complexity of LIRS = LRU = O(1), despite the additional object movements between the two stacks and the pruning operations on the stacks.
A compact simulation sketch follows.
18 Data Structure: Keep LIR Blocks in Cache
Blocks are divided into low-IRR (LIR) blocks and high-IRR (HIR) blocks. The physical cache of size L is split between the LIR block set (size Llirs) and the HIR block set (size Lhirs), with L = Llirs + Lhirs.
19 LIRS Operations
Example setting: cache size L = 5, Llir = 3, Lhir = 2. The LIRS stack holds LIR blocks plus resident and non-resident HIR blocks; a small LRU stack holds the resident HIR blocks.
- Initialization: all referenced blocks are given LIR status until the LIR block set is full; resident HIR blocks are placed in the small LRU stack.
- Three cases to handle: a hit on an LIR block, a hit on a resident HIR block, and a miss on a non-resident HIR block.
20 Access an LIR Block (a Hit)
First, the operation of accessing an LIR block: block 4 is accessed, so move it to the top of stack S.
21 Access an LIR Block (a Hit)
Then block 8 is accessed: move it to the top. HIR blocks are now at the bottom, so perform stack pruning to make sure an LIR block is at the bottom.
22 Access an LIR Block (a Hit)
(Animation frame: the resulting stack state after the hit and pruning.)
23 Access a Resident HIR Block (a Hit)
Now, accessing a resident HIR block: block 3 is accessed, so move it to the top of stack S. It also becomes an LIR block, because it was already in stack S and its new IRR is less than Rmax. Accordingly, LIR block 1 is demoted to a HIR block and enters stack Q; then do stack pruning.
24-25 Access a Resident HIR Block (a Hit)
(Animation frames: intermediate stack states during the promotion of block 3 and the demotion of block 1.)
26 Access a Resident HIR Block (a Hit)
Then block 5 is accessed: it remains a HIR block, because it was not found in stack S; it is placed on the top of stack S as a resident HIR block and moved to the top of stack Q.
27 Access a Non-Resident HIR Block (a Miss)
Now, accessing a non-resident HIR block: block 7 is accessed and misses, so a free buffer is needed. The block at the bottom of stack Q is replaced, and block 7 is moved to the top of stack S as a HIR block and added to stack Q as well.
28 Access a Non-Resident HIR Block (a Miss)
Block 9 is accessed and misses. This time block 5, at the bottom of stack Q, is replaced and leaves stack Q, but it remains in the LIRS stack as a non-resident HIR block.
29 Access a Non-Resident HIR Block (a Miss)
Then block 5 is accessed again (a miss, since it is now non-resident); it becomes an LIR block, because its record was still in stack S. Throughout, the overhead of the stack operations in the LIRS algorithm is almost as low as that of LRU.
30 Access a Non-Resident HIR Block (a Miss)
(Animation frame: the resulting stack state after block 5 is promoted to LIR.)
31 A Simplified Finite Automaton for LRU
Upon a block access, the operations on the LRU stack are:
- Hit: place the block on the top.
- Miss: fetch the data and place the block on the top; the block at the bottom is evicted.
32 A Simplified Finite Automaton for LIRS
Upon a block access:
- Hit on an LIR block: operations on the LIR stack (move to top, pruning).
- Hit on a resident HIR block: if it is also recorded in the LIR stack, promote it to the LIR stack and demote a block to the HIR stack; otherwise, operations on the HIR stack only.
- Miss with a non-resident record: promote the block to the LIR stack; a block is evicted from the HIR stack.
- Miss with no record: add a record for the block as a resident HIR block; a block is evicted from the HIR stack.
34 How LIRS Addresses the LRU Problem
- File scanning: one-time accessed blocks are replaced in a timely way, due to their high IRRs.
- Loop-like accesses: a section of the loop's data is protected in the low-IRR stack; misses happen only in the high-IRR stack.
- Accesses with distinct frequencies: frequently accessed blocks with short reuse distances will NOT be replaced, thanks to dynamic status changes.
A small demonstration using the earlier sketches follows.
35 Performance Evaluation
Trace-driven simulation on different access patterns shows:
- LIRS outperforms existing replacement algorithms in almost all cases.
- The performance of LIRS is not sensitive to its only parameter, Lhirs.
- Performance is not affected even when the LIRS stack size is bounded.
- The time/space overhead is as low as LRU's.
- LRU is a special case of LIRS (one without records of resident and non-resident HIR blocks in the large stack).
37 Looping Pattern: postgres (IRR Map)
The IRR map of the postgres trace, with the stack sizes of LRU and LIRS for a 500-block cache marked. While LRU covers only the references below its fixed stack-size line, LIRS adaptively changes its stack size to cover the currently strongest-locality blocks: when locality weakens, i.e., more references fall in the high-IRR area, the size goes up; otherwise, it goes down. This adaptation attempts to identify and cache the 500 blocks with the currently strongest locality.
39 Two Technical Issues to Turn It into Reality
- High implementation overhead: for each data access, a set of operations defined by the replacement algorithm (e.g., LRU or LIRS) must be performed. This is not affordable in many systems (OS kernels, buffer caches, ...), so an approximation with reduced operations is required in practice.
- High lock-contention cost: for concurrent accesses, the stack(s) must be locked for each operation, and lock contention limits the scalability of the system.
CLOCK-Pro and BP-Wrapper address these two issues, respectively.
40 Only Approximations Can Be Implemented in an OS
- The dynamic data-structure changes in LRU and LIRS cause computing overhead, so OS kernels cannot adopt them directly.
- An approximation reduces the overhead at the cost of lower accuracy.
- The CLOCK algorithm, an LRU approximation, was first implemented in the Multics system at MIT in 1968 by Corbato (1990 Turing Award laureate).
- Objective: a LIRS approximation for OS kernels.
41 Basic Operations of CLOCK Replacement
- All resident pages are placed on a circular list, like the face of a clock; a clock hand turns clockwise to search for victim pages.
- Each page is associated with a reference bit indicating whether the page has been accessed.
- Upon a hit: the reference bit is set to 1, automatically by hardware; no algorithm operations are needed.
CLOCK is how LRU is approximated in virtual memory management.
42 Basic CLOCK Replacement: On a Sequence of Two Misses
Upon a miss:
- Starting from the currently pointed page, CLOCK turns the clock hand and evicts the first page whose reference bit is "0".
- A "1" page gets a second chance without being replaced: its bit is reset from "1" to "0", and the hand keeps looking for a "0" page.
- The missed page is inserted at the hand position with its reference bit initialized to 0.
CLOCK simulates LRU replacement very well, and its hit ratios are very close to LRU's. A minimal sketch follows.
43 Unbalanced R&D on LRU versus CLOCK
LRU-related work:
- FBR (SIGMETRICS 1990), LRU-2 (SIGMOD 1993), 2Q (VLDB 1994), SEQ (SIGMETRICS 1997), LRFU (OSDI 1999), EELRU (SIGMETRICS 1999), MQ (USENIX 2001), LIRS (SIGMETRICS 2002), ARC (FAST 2003, IBM patent)
CLOCK-related work:
- CLOCK (Corbato, 1968), GCLOCK (ACM TODS 1978), CAR (FAST 2004, IBM patent), CLOCK-Pro (USENIX 2005)
Because of the importance of replacement algorithms and the well-known performance problems of LRU, a large number of new algorithms have been proposed for better performance, but almost all of them target LRU; see the long list spanning decades. The list for CLOCK is much shorter: from 1968 onward, only GCLOCK and a few CLOCK variants were proposed over a very long time. The stringent low-cost requirement poses a big challenge to inventing new VM replacement algorithms. Recently, researchers at IBM proposed the CAR algorithm for VM replacement, and we proposed CLOCK-Pro.
44 Basic Ideas of CLOCK-Pro
- It is an approximation of LIRS built on the CLOCK infrastructure.
- Pages are categorized into two groups, cold pages and hot pages, based on their reuse distances (IRRs).
- There are three hands: hand-hot for hot pages, hand-cold for cold pages, and hand-test for running a reuse-distance test on a block.
- The allocation of memory pages between hot pages (Mhot) and cold pages (Mcold) is adaptively adjusted, with M = Mhot + Mcold.
- All hot pages are resident (they correspond to LIR blocks); some cold pages are also resident (they correspond to HIR blocks); recently replaced pages are tracked (corresponding to non-resident HIR blocks).
CLOCK-Pro uses the same principle as LIRS.
45 CLOCK-Pro (USENIX'05)
Several kinds of pages sit on one clock: hot pages, resident cold pages, and non-resident cold pages. All hands move in the clockwise direction.
- Hand-cold is used to find a page for replacement.
- Hand-hot finds a hot page to be demoted into a cold page.
- Hand-test (1) determines whether a cold page is promoted to hot and (2) removes non-resident cold pages from the clock.
- A test period is associated with each cold page to test its reuse distance. A resident cold page exists for one of two reasons: it is a fresh replacement (a first access), or it was demoted from a hot page.
A sketch of the per-page metadata follows.
48 Concurrency Management in Buffer Management
- The buffer pool (in DRAM) keeps hot pages, and maximizing the hit ratio is the key. The hit ratio is largely determined by the effectiveness of the replacement algorithm (LRU-k, 2Q, LIRS, ARC, ...), which determines which pages to keep and which to evict.
- Concurrent accesses to the buffer cache require a critical section: a lock (latch) is needed to serialize the update of the replacement data structures after each page request, placing the replacement management inside the lock.
49 Accurate Algorithms and Their Approximations
- Accurate algorithms: LRU, LIRS, ARC, ...
- Approximations: CLOCK (for LRU), CLOCK-Pro (for LIRS), CAR (for ARC).
CLOCK sets the reference bit to 1 without a lock on a page hit; lock synchronization is used only for misses. The clock approximations thus reduce lock contention, at the price of reduced hit ratios.
50 History of Buffer Pool's Caching Management in PostgreSQL
- LRU: suffered lock contention moderately, due to low concurrency.
- LRU-k: hit ratio outperforms LRU, but lock contention became more serious.
- 2004: ARC/CAR were implemented, but quickly removed due to IBM patent protection.
- 2005: 2Q was implemented; hit ratios were further improved, but lock contention was high.
- 2006 to now: CLOCK, an approximation of LRU; lock contention is reduced, but the hit ratio is the lowest compared with all the previous ones.
51 Trade-offs between Hit Ratios and Low Lock Contention
- For high hit ratios, with lock synchronization to modify data structures and update page metadata: LRU-k, 2Q, LIRS, ARC, SEQ, ...
- For high scalability, with low lock synchronization: CLOCK, CLOCK-Pro, and CAR.
But the clock-based approximations lower hit ratios compared to the original algorithms, the transformation can be difficult and demand great effort, and some algorithms have no clock-based approximation at all.
Our goal: to have both!
52 Reducing Lock Contention by Batching Requests
Instead of entering the replacement algorithm's critical section on every page request, each thread keeps one batch queue: on a page hit, the page is fetched directly to fulfill the request, and the access is only recorded in the queue. Once the queue fills, the thread commits the access history with a single lock acquisition, performing a whole set of replacement operations (modifying the algorithm's data structures) at once. A sketch follows.
53 Reducing Lock Holding Time by Prefetching
Pre-read the data that will be accessed in the critical section: immediately before a lock is requested, the thread reads the data that the replacement algorithm would access inside the critical section, so the data-cache miss stall is moved out of the lock-holding period.
54 Lock Contention Reduction by BP-Wrapper (ICDE'09)
Lock contention occurs when a lock cannot be obtained without blocking. Measured as the number of lock acquisitions (contentions) per million page accesses, BP-Wrapper reduces contention by over 7,000 times.
55 Impact of LIRS in the Academic Community
- LIRS is a benchmark against which replacement algorithms are compared.
- Reuse distance was first used in replacement-algorithm design by LIRS.
- A SIGMETRICS'05 paper confirmed that LIRS outperforms all the other replacement algorithms.
- LIRS has become a topic taught in both graduate and undergraduate classes on OS, performance evaluation, and databases at many US universities.
- The LIRS paper (SIGMETRICS'02) is highly and continuously cited.
- The Linux memory-management group has established an Internet forum on advanced replacement for LIRS.
56 LIRS Has Been Adopted in MySQL
- MySQL is the most widely used relational database, with 11 million installations in the world.
- The busiest Internet services use MySQL to maintain their databases for high-volume Web sites: Google, YouTube, Wikipedia, Facebook, Taobao, ...
- LIRS is managing the buffer pool of MySQL; the adoption is in the most recent version (5.1), November 2008.
59 Infinispan (Java-Based Open Software)
- The data grid forms a huge in-memory cache that is managed using LIRS.
- BP-Wrapper is used to keep the cache free of lock contention.
60 ConcurrentLinkedHashMap as a Software Cache
- A linked-list structure (a Java class) in which elements are linked and managed using the LIRS replacement policy.
- BP-Wrapper ensures freedom from lock contention.
61 LIRS in the Management of Big Data
- LIRS has been adopted in GridGain, a Java-based open-source middleware for real-time big-data processing and analytics (www.gridgain.com); LIRS makes the replacement decisions for the in-memory data grid. Over 500 products and organizations use GridGain software daily: Sony, Cisco, Canon, Johnson & Johnson, Deutsche Bank, ...
- LIRS has been adopted in SYSTAP's storage management for big-data scale-out storage systems (www.bigdata.com).
62 LIRS in a Functional Programming Language: Clojure
- Clojure is a dynamic programming language that targets the Java Virtual Machine (http://clojure.org): a dialect of Lisp, functional, and designed for concurrency.
- It has been used by many organizations.
- LIRS is a member of the Clojure library as LIRSCache.
63 The LIRS Principle in Hardware Caches
- A hardware cache replacement implementation based on Re-Reference Interval Prediction (RRIP), presented at ISCA'10 by Intel.
- Two bits are added to each cache line to measure reuse distance in static and dynamic ways.
- Performance gains are up to 4-10%, though the hardware cost may not be affordable in practice.
64 Impact of CLOCK-Pro in OS and Other Systems
- CLOCK-Pro has been adopted in FreeBSD/NetBSD (open-source Unix systems).
- Two patches in the Linux kernel are available to users: the CLOCK-Pro patches by Rik van Riel, and PeterZClockPro2 by Peter Zijlstra.
- CLOCK-Pro has been patched into Apache Derby (a relational database).
- CLOCK-Pro has been patched into OpenLDAP (directory accesses).
65 Impact of Multicore Processors in Computer Systems
- Dell Precision GX620, purchased in 2004: 8 KB L1 data cache + 12 KB L1 instruction cache + 512 KB L2 cache, 256 MB memory, disk.
- Dell Precision 1500, purchased in 2009 at a similar price: per-core L1 and L2 caches, an 8 MB shared L3 cache, 8 GB memory, disk.
66 Performance Issues with the Multicore Architecture
Slow data accesses to memory and disks continue to be major bottlenecks, and almost all the CPUs in Top-500 supercomputers are multicores.
- Cache contention and pollution: conflict cache misses among multiple threads can significantly degrade performance.
- Memory bus congestion: bandwidth is limited as the number of cores increases.
- "Disk wall": data-intensive applications also demand high throughput from disks.
67 Multicore Cannot Deliver the Expected Performance as It Scales
Throughput = Concurrency / Latency: exploiting parallelism raises concurrency, and exploiting locality lowers latency; in reality, performance falls short of the ideal scaling (Sandia National Laboratories).
- "The Troubles with Multicores", David Patterson, IEEE Spectrum, July 2010
- "Finding the Door in the Memory Wall", Erik Hagersten, HPCwire, March 2009
- "Multicore Is Bad News for Supercomputers", Samuel K. Moore, IEEE Spectrum, November 2008
68 Challenges of Managing the LLC in Multicores
Recent theoretical results about the LLC in multicores:
- Single core: an optimal offline replacement algorithm exists, and online LRU is k-competitive (k is the cache size).
- Multicore: finding an offline optimal replacement is NP-complete; cache partitioning among threads is an optimal solution in theory.
System challenges in practice:
- The LLC lacks the necessary hardware mechanisms to control inter-thread cache contention; it shares the same design as single-core caches.
- System software has limited information and methods to effectively control cache contention.
69 OS Cache Partitioning in Multicores (HPCA'08)
Page coloring: a physically indexed cache is divided into multiple regions (colors), and all cache lines of a physical page are cached in one of those regions (its color).
- A virtual address divides into a virtual page number and a page offset (with 4 KB pages, the offset is 12 bits).
- OS-controlled address translation maps the virtual page number to a physical page number; the page offset stays the same.
- The physical address indexes the cache as cache tag, set index, and block offset; the bits shared by the physical page number and the set index are the page color bits, which select the cache region.
The OS can therefore control the page color of a virtual page through address mapping, by selecting a physical page with a specific value in its page color bits.
70 A Shared LLC Can Be Partitioned into Multiple Regions
- Physical pages are grouped into bins based on their page colors; pages in the same bin are cached in the same cache region (color) of a physically indexed cache.
- Suppose two processes run on a dual-core processor, each with a set of pages to be mapped to physical pages. If the pages of each process may only be mapped to a subset of the page bins, each process uses only part of the cache: the shared cache is partitioned between the two processes through OS address mapping.
- The main memory space needs to be partitioned along with the cache (co-partitioning).
A small arithmetic sketch of the color computation follows.
71 Implementations in Linux and Their Impact
- Static partitioning: predetermines the number of cache blocks allocated to each running process at the beginning of its execution, by dividing the shared cache into regions and partitioning the regions among processes statically through OS page address mapping (enhanced page coloring).
- Dynamic partitioning: adjusts cache allocations among processes as they run, coordinating the programs' dynamic behavior by changing each process's cache usage through OS page address re-mapping (page re-coloring).
Current status of the facility:
- Open source in Linux kernels.
- Adopted as a software solution by Intel SSG in May 2010.
- Used in applications on Intel platforms, e.g., automation.
72 Final Remarks: Why Do the LIRS-Related Efforts Make the Difference?
- Caching the most deserving data blocks: using reuse distance as the ruler approaches the optimal, while 2Q, LRU-k, ARC, and others can still cache undeserving blocks.
- LIRS with its two stacks yields constant-time operations, O(1): consistent with LRU, but recording much more useful information.
- CLOCK-Pro turns LIRS into reality in production systems: no other algorithm except ARC has an approximation version.
- BP-Wrapper ensures freedom from lock contention in DBMSs.
- OS partitioning carries the LIRS principle into the LLC of multicores: protect strong-locality data and control weak-locality data.
73 Acknowledgement to Co-Authors and Sponsors
- Song Jiang: Ph.D. '04 at William and Mary; faculty at Wayne State
- Feng Chen: Ph.D. '10; Intel Labs (Oregon)
- Xiaoning Ding: Ph.D. '10; Intel Labs (Pittsburgh)
- Qingda Lu: Ph.D. '09; Intel (Oregon)
- Jiang Lin: Ph.D. '08 at Iowa State; AMD
- Zhao Zhang: Ph.D. '02 at William and Mary; faculty at Iowa State
- P. Sadayappan, Ohio State
Continuous support from the National Science Foundation.
74 CSE 788: Winter Quarter 2011
Principle of Locality in Design and Implementation of Computer and Distributed Systems
- Exploiting locality at different levels of computer systems.
- Challenges of algorithm design and implementation.
- Readings of both classical and new papers.
- A proposal- and project-based class.
- Much high-quality research started from this class and was published in FAST, HPCA, Micro, PODC, PACT, SIGMETRICS, USENIX, and VLDB.
You are welcome to take the class next quarter.
75 Xiaodong Zhang: Thank You!