1 Locality-Conscious Lock-Free Linked Lists Anastasia Braginsky & Erez Petrank 1

2 Lock-Free Locality-Conscious Linked Lists A list of constant-size "containers" (chunks), with minimum and maximum bounds on the number of elements in a container Traverse the list quickly to the relevant container Lock-free, locality-conscious, fast access, scalable

3 Non-blocking Algorithms Guarantee that the suspension or failure of some threads cannot prevent the others from making progress. A non-blocking algorithm is: ◦wait-free if every thread is guaranteed to make progress in a bounded number of its own steps ◦lock-free if some thread is guaranteed to make progress whenever the system takes a bounded number of steps ◦obstruction-free if a single thread executing in isolation for a bounded number of steps will make progress. 3
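As a minimal illustration of these levels (an example of mine, not from the paper), a CAS-based counter increment is lock-free but not wait-free: an individual thread may retry indefinitely under contention, yet every failed CAS means some other thread's CAS succeeded, so the system as a whole always makes progress.

```c
#include <stdint.h>
#include <stdatomic.h>

/* Lock-free increment: retries on CAS failure. Not wait-free, because a
   single thread has no bound on its own retries; lock-free, because a
   failed CAS implies another thread's CAS succeeded. */
static uint64_t lf_increment(_Atomic uint64_t *x) {
    uint64_t old = atomic_load(x);
    /* On failure, atomic_compare_exchange_weak refreshes `old`; retry. */
    while (!atomic_compare_exchange_weak(x, &old, old + 1)) {
    }
    return old + 1;
}
```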

4 Existing Lock-Free List Designs J. D. Valois, Lock-free linked lists using compare-and-swap, in Proc. PODC 1995. T. L. Harris, A pragmatic implementation of non-blocking linked-lists, in DISC 2001. M. M. Michael, Hazard pointers: Safe memory reclamation for lock-free objects, IEEE TPDS 2004. M. Fomitchev and E. Ruppert, Lock-free linked lists and skip lists, in Proc. PODC 2004.

5 Outline Introduction A list of memory chunks Design of in-chunk list Merges & Splits via freezing Empirical results Summary 5

6 The List Structure A list consists of ◦A list of memory chunks ◦A list in each chunk (chunk implementation) When a chunk gets too sparse or dense, the update operations on the list are stopped and the chunk is split or merged with its preceding chunk. 6

7 An Example of a List of Fixed-Sized Memory Chunks Chunk A HEAD NextChunk Chunk B NextChunk NULL Key: 3 Data: G Key: 14 Data: K Key: 25 Data: A Key: 67 Data: D Key: 89 Data: M EntriesHead 7

8 When No More Space for Insertion Chunk A HEAD NextChunk Chunk B NextChunk Key: 3 Data: G Key: 6 Data: B Key: 9 Data: C Key: 14 Data: K Key: 25 Data: A Key: 67 Data: D Key: 89 Data: M EntriesHead Key: 12 Data: H Freeze 8 NULL

9 Split Chunk A HEAD NextChunk Chunk B NextChunk Key: 3 Data: G Key: 6 Data: B Key: 9 Data: C Key: 14 Data: K Key: 25 Data: A Key: 67 Data: D Key: 89 Data: M EntriesHead Key: 12 Data: H Freeze Chunk C NextChunk Key: 3 Data: G Key: 9 Data: C EntriesHead Key: 6 Data: B Chunk D NextChunk Key: 12 Data: H EntriesHead Key: 14 Data: K 9 NULL


11 When a Chunk Gets Sparse HEAD Chunk B NextChunk Key: 25 Data: A Key: 67 Data: D Key: 89 Data: M EntriesHead Chunk C NextChunk Key: 3 Data: G Key: 9 Data: C EntriesHead Key: 6 Data: B Chunk D NextChunk EntriesHead Key: 14 Data: K Freeze master Freeze slave 11 NULL

12 Merge HEAD Chunk B NextChunk Key: 25 Data: A Key: 67 Data: D Key: 89 Data: M EntriesHead Chunk C NextChunk Key: 3 Data: G Key: 9 Data: C EntriesHead Key: 6 Data: B Chunk D NextChunk EntriesHead Key: 14 Data: K Freeze master Freeze slave Chunk E NextChunk Key: 3 Data: G Key: 6 Data: B Key: 9 Data: C Key: 14 Data: K EntriesHead 12 NULL


14 Outline Introduction A list of memory chunks Design of in-chunk list Merges & Splits via freezing Empirical results Summary 14

15 A List of Fixed-Sized Memory Chunks Chunk A HEAD NextChunk Chunk B NextChunk NULL Key: 3 Data: G Key: 14 Data: K Key: 25 Data: A Key: 67 Data: D Key: 89 Data: M EntriesHead 15

16 The Structure of an Entry An entry occupies 2 machine words: ◦KeyData word: key (31 bits), data (32 bits), freeze bit ◦NextEntry word: next-entry pointer (62 bits), delete bit, freeze bit The freeze bit is used to mark chunk entries as frozen. A ⊥ (bottom) value is not allowed as a key; it means the entry is not allocated. 16
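The two-word layout above can be sketched in C with bit masks; the exact bit positions and helper names here are my assumptions for illustration, not the paper's code.

```c
#include <stdint.h>

/* Hypothetical encoding of the entry's two machine words:
   KeyData word:  freeze bit (63) | key (bits 32..62) | data (bits 0..31)
   NextEntry word: freeze bit (63) | delete bit (62) | next pointer (0..61) */
#define FREEZE_BIT ((uint64_t)1 << 63)
#define DELETE_BIT ((uint64_t)1 << 62)
#define KEY_SHIFT  32
#define KEY_MASK   (((uint64_t)0x7FFFFFFF) << KEY_SHIFT)
#define DATA_MASK  ((uint64_t)0xFFFFFFFF)
#define NEXT_MASK  (~(FREEZE_BIT | DELETE_BIT))

static inline uint64_t make_key_data(uint32_t key, uint32_t data) {
    return (((uint64_t)key << KEY_SHIFT) & KEY_MASK) | (data & DATA_MASK);
}
static inline uint32_t entry_key(uint64_t keyData)  {
    return (uint32_t)((keyData & KEY_MASK) >> KEY_SHIFT);
}
static inline uint32_t entry_data(uint64_t keyData) {
    return (uint32_t)(keyData & DATA_MASK);
}
static inline int is_frozen(uint64_t word)       { return (word & FREEZE_BIT) != 0; }
static inline int is_deleted(uint64_t nextWord)  { return (nextWord & DELETE_BIT) != 0; }
static inline uint64_t entry_next(uint64_t nextWord) { return nextWord & NEXT_MASK; }
```

Packing key, data, and the control bits into single words is what makes each field CAS-able in one atomic step.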

17 The Structure of a Chunk A chunk consists of: ◦A dummy head entry ◦An array of entries of size MAX (entries with key ⊥ are unallocated; some entries may have their deleted bit set) ◦A counter of live entries (4 in the pictured chunk) ◦A NextChunk pointer ◦A "new" pointer ◦A MergeBuddy pointer ◦A freeze state (2 bits) 17
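A chunk with these fields might be declared as below; the field names, MAX value, and freeze-state encoding are my assumptions for illustration.

```c
#include <stdint.h>

#define MAX_ENTRIES 64   /* illustrative; the paper sizes chunks to a memory page */

/* Freeze state: 2 bits in the real layout; an enum here for readability. */
typedef enum { NO_FREEZE = 0, INTERNAL_FREEZE, EXTERNAL_FREEZE } FreezeState;

typedef struct Entry {
    uint64_t keyData;    /* freeze bit | key | data; all-zero = unallocated (⊥) */
    uint64_t nextEntry;  /* freeze bit | delete bit | next-entry pointer */
} Entry;

typedef struct Chunk {
    Entry head;                  /* dummy entry: start of the in-chunk list */
    Entry entries[MAX_ENTRIES];  /* fixed-size entry array */
    uint32_t counter;            /* lower bound on the number of live entries */
    struct Chunk *nextChunk;     /* next chunk in the chunk list */
    struct Chunk *newChunk;      /* "new" pointer: replacement chunk during recovery */
    struct Chunk *mergeBuddy;    /* partner chunk during a merge freeze */
    FreezeState freezeState;
} Chunk;
```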

18 Initiating a Freeze When a process p realizes that ◦A chunk is full, or ◦A chunk is sparse, or ◦A chunk is in progress of being frozen, Then p starts a freeze or p helps another process that has already started a freeze. 18

19 The Freeze Process Start by going over all the entries in the array and setting their freeze bits Then finish: ◦insertions of all currently allocated entries that are not yet in the list ◦deletions of entries already marked as deleted but still in the list 19
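The first step, setting every entry's freeze bits, can be sketched as a CAS loop over both words of each entry (names and layout are assumptions carried over from the entry sketch, not the paper's code); once a word's freeze bit is set, no further CAS by a concurrent inserter or deleter can succeed on it.

```c
#include <stdint.h>
#include <stdatomic.h>

#define FREEZE_BIT  ((uint64_t)1 << 63)
#define MAX_ENTRIES 8   /* small, for illustration */

typedef struct { _Atomic uint64_t keyData; _Atomic uint64_t nextEntry; } FEntry;

/* Set the freeze bit on both words of every entry. CAS (rather than a plain
   OR-store) preserves whatever value a concurrent operation last wrote. */
static void mark_chunk_frozen(FEntry *entries, int n) {
    for (int i = 0; i < n; i++) {
        uint64_t old;
        do {
            old = atomic_load(&entries[i].keyData);
        } while (!(old & FREEZE_BIT) &&
                 !atomic_compare_exchange_weak(&entries[i].keyData, &old,
                                               old | FREEZE_BIT));
        do {
            old = atomic_load(&entries[i].nextEntry);
        } while (!(old & FREEZE_BIT) &&
                 !atomic_compare_exchange_weak(&entries[i].nextEntry, &old,
                                               old | FREEZE_BIT));
    }
}
```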

20 Chunk List is Different from Known Lock-Free Linked Lists Non-private insertion: an entry is visible once allocated, even before it is linked into the list. This allows other threads to help with an insertion. Boundary conditions cause merges and splits. 20

21 Entry Allocation 1. An entry is allocated at the beginning of the insertion process 2. Find a zeroed entry, with ⊥ key value 3. Allocate by CASing the KeyData word to the desired value. ◦Upon a failure of the CAS, goto 2. ◦A frozen entry cannot be allocated 4. If no entry is found -- a freeze starts Next, use the allocated entry for list insertion… 21 k:3 d:9 f:1 k:4 d:2 f:1 k:8 d:5 f:0 k:⊥ d:0 f:1 k:⊥ d:0 f:0

22 Entry Allocation 1. An entry is allocated at the beginning of the insertion process 2. Find a zeroed entry, with ⊥ key value 3. Allocate by CASing the KeyData word to the desired value. ◦Upon a failure of the CAS, goto 2. ◦A frozen entry cannot be allocated 4. If no entry is found -- a freeze starts Next, use the allocated entry for list insertion… 22 k:3 d:9 f:1 k:4 d:2 f:1 k:8 d:5 f:0 k:⊥ d:0 f:1 k:6 d:2 f:0
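The allocation steps above can be sketched as a scan plus a single claiming CAS on the KeyData word. This is a simplified illustration under my own encoding (all-zero word = ⊥, top bit = freeze), not the paper's code.

```c
#include <stdint.h>
#include <stdatomic.h>

#define FREEZE_BIT ((uint64_t)1 << 63)
#define KEY_BOTTOM ((uint64_t)0)   /* ⊥: the word is all-zero when unallocated */

/* Scan for a ⊥-keyed entry and claim it with one CAS on its KeyData word.
   Returns the claimed index, or -1 if no free entry exists (in the full
   algorithm that triggers a freeze). */
static int allocate_entry(_Atomic uint64_t *keyData, int n, uint64_t newKeyData) {
    for (int i = 0; i < n; i++) {
        uint64_t old = atomic_load(&keyData[i]);
        if (old & FREEZE_BIT) continue;    /* a frozen entry cannot be allocated */
        if (old != KEY_BOTTOM) continue;   /* already in use */
        if (atomic_compare_exchange_strong(&keyData[i], &old, newKeyData))
            return i;                      /* claimed it */
        i--;                               /* CAS failed: re-examine this slot */
    }
    return -1;  /* chunk is full: caller starts a freeze */
}
```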

23 Insertion Algorithm 1. Record the new entry's next pointer value in savedNext. 2. Find a location for adding the new entry. ◦If the key already exists (in a different entry) – free the allocated entry by clearing it and return. 3. CAS the entry's next pointer from savedNext to the next entry in the list 4. CAS the previous entry's next pointer to the newly allocated entry ◦If any CAS fails, goto 1 (restarting from the beginning of the chunk) 5. Increase the counter and return 23 k:3 d:9 f:1 k:4 d:2 f:1 k:8 d:5 f:0 k:⊥ d:0 f:1 k:6 d:2 f:0 previous next

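The five steps above can be sketched over an in-chunk list of array indices. This simplified illustration (my own representation: `next` is an entry index, NIL = -1) omits the freeze and delete-bit handling of the full algorithm, and it reports a duplicate key instead of clearing the allocated entry.

```c
#include <stdint.h>
#include <stdatomic.h>
#include <stdbool.h>

#define NIL (-1)

typedef struct {
    uint32_t key;
    _Atomic int next;   /* index of the next entry in this chunk, or NIL */
} SEntry;

/* Insert allocated entry `e` into the sorted in-chunk list rooted at headNext. */
static bool insert_entry(_Atomic int *headNext, SEntry *a, int e) {
    for (;;) {
        int savedNext = atomic_load(&a[e].next);        /* step 1 */
        _Atomic int *prevNext = headNext;               /* step 2: find location */
        int cur = atomic_load(prevNext);
        while (cur != NIL && a[cur].key < a[e].key) {
            prevNext = &a[cur].next;
            cur = atomic_load(prevNext);
        }
        if (cur != NIL && a[cur].key == a[e].key)
            return false;                               /* key already exists */
        if (!atomic_compare_exchange_strong(&a[e].next, &savedNext, cur))
            continue;                                   /* step 3 failed: restart */
        int expected = cur;
        if (atomic_compare_exchange_strong(prevNext, &expected, e))
            return true;                                /* step 4: entry linked */
        /* step 4 failed: restart from the beginning of the chunk */
    }
}
```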

26 Deletion Standard implementation, except for taking care not to go under the minimum number of entries The counter always holds a lower bound on the actual number of entries: ◦ increased after the actual insert ◦ decreased before the actual delete Decrementing the counter below the minimum allowed number initiates a freeze A frozen entry cannot be marked as deleted 26

27 Deletion Algorithm Decrement counter ◦ If it requires going below MIN, start freeze Find the entry ◦ If not found, increase counter and return Mark the entry’s next pointer as deleted ◦ If entry is frozen, start freeze Disconnect the entry from the list 27
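The counter discipline in the first step can be sketched as a guarded CAS decrement (MIN value and function name are my assumptions): a decrement that would drop the counter below the minimum is refused, and in the full algorithm that refusal triggers a merge freeze instead.

```c
#include <stdint.h>
#include <stdatomic.h>
#include <stdbool.h>

#define MIN_ENTRIES 2   /* illustrative lower bound on entries per chunk */

/* Decrement the counter unless that would go below MIN_ENTRIES.
   Because the decrement happens before the physical delete, the counter
   remains a lower bound on the number of live entries throughout. */
static bool try_dec_counter(_Atomic uint32_t *counter) {
    uint32_t c = atomic_load(counter);
    while (c > MIN_ENTRIES) {
        /* On failure, `c` is refreshed; re-check the bound and retry. */
        if (atomic_compare_exchange_weak(counter, &c, c - 1))
            return true;    /* safe to proceed with the delete */
    }
    return false;           /* would go under MIN: start a merge freeze */
}
```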

28 Outline Introduction A list of memory chunks Design of in-chunk list Merges & Splits via freezing Empirical results Summary 28

29 Freezing Phase I: Marking entries with frozen bits ◦Non-frozen entries can still change concurrently Phase II: List stabilization ◦Everything frozen, now finish all incomplete operations. Phase III: Decision ◦Split, merge, or copy. Phase IV: Recovery ◦Implementation of the above decision 29

30 Phase IV - Recovery Allocate new chunk or chunks locally Copy the frozen data to the new chunk Execute the operation that initially caused the freeze Attach the new chunk to the frozen one Replace frozen chunk(s) with new chunk(s) in the entire List’s data structure 30
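The copy step can be sketched as a filter over a frozen chunk's word arrays, keeping only allocated, non-deleted entries; the encoding (all-zero key word = ⊥, top bits = freeze/delete) and function name are my assumptions for illustration.

```c
#include <stdint.h>

#define FREEZE_BIT ((uint64_t)1 << 63)
#define DELETE_BIT ((uint64_t)1 << 62)

/* Copy the live entries of a frozen chunk into `out`, stripping freeze bits
   so the new chunk starts unfrozen. Returns the number of entries copied. */
static int copy_live_entries(const uint64_t *keyData, const uint64_t *nextEntry,
                             int n, uint64_t *out) {
    int m = 0;
    for (int i = 0; i < n; i++) {
        if ((keyData[i] & ~FREEZE_BIT) == 0) continue; /* ⊥: never allocated */
        if (nextEntry[i] & DELETE_BIT) continue;       /* logically deleted */
        out[m++] = keyData[i] & ~FREEZE_BIT;           /* copy without freeze bit */
    }
    return m;
}
```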

31 Remarks Search can run on a frozen chunk (and is not delayed). ◦Wait-free except for the use of the hazard pointer mechanism A chunk can never be unfrozen 31

32 Outline Introduction A list of memory chunks Design of in-chunk list Merges & Splits via freezing Empirical results Summary 32

33 The Test Environment Platform: Sun Fire with an UltraSPARC T1 8-core processor, each core running 4 hardware threads. OS: Solaris 10 Chunk size set to the virtual page size -- 8KB. ◦All accesses inside a chunk are on the same page 33

34 Workload Each test had two stages: ◦Stage I:  Insertions (only) of N random keys (in order to obtain a substantial list)  N: 10^3, 10^4, 10^5, 10^6 ◦Stage II:  Insertions, deletions and searches in parallel  N operations overall, of which 15% insertions, 15% deletions, and 70% searches. Reporting results for runs of 32 concurrent threads. 34

35 Reference for Comparison Michael's lock-free linked list, implemented in C according to the pseudo-code from ◦M. M. Michael, Hazard pointers: Safe memory reclamation for lock-free objects, IEEE TPDS 2004. ◦Uses hazard pointers. A Java implementation of the lock-free linked list provided in the book "The Art of Multiprocessor Programming" ◦Garbage collection is assumed. 35

36 Comparison with Michael's List Total Time Consistently better performance; for substantial lists, more than 10 times faster. 36

37 Comparison with Michael's List Single Operation Average Better performance as the lists grow more substantial; again, consistently better performance. 37

38 Comparison with Lock-Free List in Java Total Times 38

39 Comparison with Lock-Free List in Java Single Operation Average 39

40 Outline Introduction A list of memory chunks Design of in-chunk list Merges & Splits via freezing Empirical results Summary 40

41 Conclusion A new lock-free algorithm for a chunked linked list Fast due to: ◦ Skipping over chunks ◦ Restarting from the beginning of a chunk, not the whole list ◦ Locality-consciousness May be useful for other structures that can use the chunks Good empirical results for substantial lists 41

42 Questions? 42

43 Thank you !! 43

