
1 A non-blocking approach to GPU dynamic memory management (Joy, NVIDIA)

2 Outline
- Buddy memory system
- Our parallel implementation
- Performance comparison
- Discussion

3 Fixed-size memory (memory pool)
The fastest and simplest memory system.
- Free list (item = address): each item records an available address to allocate. The free list can be implemented with a queue, stack, list, or any similar data structure.
- Allocate: take one item from the free list.
- Free: return the address to the free list.
- Performance: constant time for both allocation and free.
Free list: 0x0000, 0x0100, 0x0200, 0x0300, ...
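The pool above can be sketched in a few lines of host-side C++ (the struct and member names are illustrative, not from the talk). The free list is kept as a stack of addresses, so both operations are O(1):

```cpp
#include <cstdint>
#include <vector>

// Minimal fixed-size memory pool sketch: the free list holds the
// addresses of all unallocated blocks.  alloc pops one address,
// free pushes it back -- both constant time.
struct FixedPool {
    std::vector<std::uintptr_t> free_list;

    FixedPool(std::uintptr_t base, std::size_t block, std::size_t count) {
        for (std::size_t i = 0; i < count; ++i)
            free_list.push_back(base + i * block);
    }
    bool alloc(std::uintptr_t* out) {
        if (free_list.empty()) return false;   // pool exhausted
        *out = free_list.back();
        free_list.pop_back();
        return true;
    }
    void free(std::uintptr_t addr) { free_list.push_back(addr); }
};
```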

4 Multi-list memory
For non-fixed-size allocation, a natural extension of the fixed-size pool is a multi-list memory system.
- Free lists: multiple fixed-size free lists of different sizes (e.g., each twice the previous size).
- Allocate: find the first free list whose size is at least the requested size by arithmetic (e.g., ceil(log2(size))), then take one element from that list.
- Free: find the correct free list by the same arithmetic and return the address to it.
- Performance: constant time for both allocation and free, since the suitable free list is found by arithmetic rather than linear search.
- Drawback: wastes memory.
Free lists: size = 256, 512, 1024, 2048, ...
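The arithmetic list-selection step can be sketched as follows (function name and the 256-byte minimum block are taken from the slide's smallest list; the rest is illustrative). The loop is an integer equivalent of ceil(log2(size / min_block)):

```cpp
#include <cstddef>

// Pick the index of the first free list whose block size is at least
// the requested size, given list sizes min_block, 2*min_block, 4*min_block, ...
// Pure arithmetic: no linear search over list contents.
std::size_t list_index(std::size_t size, std::size_t min_block = 256) {
    std::size_t idx = 0, cap = min_block;
    while (cap < size) { cap <<= 1; ++idx; }   // integer ceil(log2(size/min_block))
    return idx;
}
```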

5 Buddy memory
To avoid the memory waste of the multi-list system, it is natural to allocate from the next layer up (twice the size) when a free list is empty, instead of pre-allocating memory in every free list.
- Free lists: multiple fixed-size free lists whose sizes grow in powers of 2.
- Allocate: find the first free list whose size is at least the requested size and take one element from it. If that list is empty, create a pair of blocks from the upper list.
- Free: find the correct free list (using the allocation records) and return the address to it. If the block's buddy is also in the free list, merge them and free the result to the upper layer.
- Performance: constant time for both allocation and free.
Free lists: size = 256, 512, 1024, 2048, 4096, ...
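The allocation path above can be sketched sequentially on the host (names, the 5 levels, and the 256-byte base size are illustrative). When a level's free list is empty, one block is taken from the level above and split into two buddies, one of which is returned and one of which stays free:

```cpp
#include <array>
#include <cstdint>
#include <vector>

constexpr std::size_t kLevels   = 5;     // block sizes 256 .. 4096 bytes
constexpr std::size_t kMinBlock = 256;

// free_lists[i] holds addresses of free blocks of size kMinBlock << i.
std::array<std::vector<std::uintptr_t>, kLevels> free_lists;

// Buddy allocation sketch: empty list -> split one block from above.
bool buddy_alloc(std::size_t level, std::uintptr_t* out) {
    if (level >= kLevels) return false;              // out of memory
    if (free_lists[level].empty()) {
        std::uintptr_t upper;
        if (!buddy_alloc(level + 1, &upper)) return false;
        std::size_t half = kMinBlock << level;
        free_lists[level].push_back(upper + half);   // one buddy stays free
        *out = upper;                                // the other is returned
        return true;
    }
    *out = free_lists[level].back();
    free_lists[level].pop_back();
    return true;
}
```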

6 Buddy memory
- Good internal defragmentation: the buddy's address can be calculated as address XOR size.
- Constant-time operation: O(h), where h = log2(max size / min size) is a constant.
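The XOR rule is worth seeing concretely: for power-of-two sizes and size-aligned offsets from the pool base, flipping the size bit of a block's offset yields its buddy, and applying it twice gets back the original block.

```cpp
#include <cstdint>

// Buddy address via XOR: addr is the block's offset from the pool
// base, size is the (power-of-two) block size.  The operation is its
// own inverse: buddy_of(buddy_of(a, s), s) == a.
std::uintptr_t buddy_of(std::uintptr_t addr, std::size_t size) {
    return addr ^ size;
}
```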

7 Memory layers
Only one single-layer class is implemented; all other layers are instances of it with different sizes.
- Lower layer: the layer with 1/2 the size of the current layer.
- Current layer: the layer receiving the allocation request.
- Upper layer: the layer with 2x the size of the current layer.

8 Pair creation
If the current free list is empty, the layer allocates memory from the upper allocator. Since an upper block is twice the size, it creates a pair of available blocks for the current free list. Therefore, if N threads simultaneously allocate from a layer whose free list is empty, only N/2 of them need to allocate from the upper layer.
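The counting argument can be made explicit in one line (helper name is illustrative): each pair creation satisfies two requesting threads, so N simultaneous requests against an empty list need only ceil(N/2) upper-layer allocations.

```cpp
#include <cstddef>

// Number of upper-layer allocations needed to satisfy n_threads
// simultaneous requests on an empty free list: each upper block
// splits into a pair, serving two requests.
std::size_t upper_requests(std::size_t n_threads) {
    return (n_threads + 1) / 2;   // ceil(n_threads / 2)
}
```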

9 Free queue
The free list is implemented as a queue whose head can run past the tail. Slots between the tail and the head are "under-available": they require pair creation from the upper layer. These head/tail states determine which threads must call pair_creation() on the upper layer.

10 Parallel strategy (alloc)
- Each allocation requestor creates a "socket" to listen for its address; the socket is implemented on the free queue.
- atomicAdd(&head, 1) creates the socket.
- The output address can come either from the current free list (slots before the old tail, which already hold available memory) or from pair creation out of the upper free list (slots between the old tail and the new head).
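The socket step can be modeled on the host with C++ atomics standing in for CUDA's atomicAdd (struct and method names are illustrative, not the talk's code). Each thread's fetch-and-add ticket is the queue slot it will read its address from; tickets at or past the tail indicate that pair creation is still needed:

```cpp
#include <atomic>
#include <cstdint>

// Sketch of the free-queue "socket": head advances by one per
// requesting thread; tail marks how many slots already hold
// addresses from the current free list.
struct FreeQueue {
    std::atomic<std::int64_t> head{0};
    std::int64_t tail{0};             // slots < tail already hold addresses

    std::int64_t open_socket() {      // returns this thread's slot index
        return head.fetch_add(1);     // stands in for CUDA atomicAdd(&head, 1)
    }
    bool needs_pair_creation(std::int64_t slot) const {
        return slot >= tail;          // past the tail: refill from upper layer
    }
};
```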

11 Odd/even pair creation
The under-available threads perform pair creations in an odd/even loop until the new tail >= the new head, avoiding the overhead of simultaneous pair creation.

12 Parallel strategy (free)
- Store the freed address in the free list.
- Calculate the buddy address: XOR(addr, size).
- Check whether the buddy is already in the free list, using the handshake algorithm for fast lookup.
- If it is, mark both elements in the free list as N/A, then free the merged memory block to the upper layer.

13 Handshake
- The freed memory block records its index in the free list; the free list records the freed block's address.
- Fast check for whether a buddy's address is in the free list:
  1. Calculate the buddy's address (XOR).
  2. Read the index stored at that address.
  3. Check whether the free-list entry at that index equals the buddy's address.
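The mutual-pointer check can be sketched with two arrays standing in for the real layout (the data layout and names here are illustrative): the free list maps slot to address, each block's header maps block to slot, and the buddy is genuinely in the free list only when the two records point at each other.

```cpp
#include <cstdint>
#include <vector>

// Handshake lookup sketch: list_addr[slot] is the address the free
// list recorded; block_slot[block_id] is the slot index the freed
// block recorded in its header.  A stale or foreign index fails the
// round-trip comparison, so false positives are rejected.
bool buddy_in_free_list(const std::vector<std::uintptr_t>& list_addr,
                        const std::vector<std::size_t>& block_slot,
                        std::size_t buddy_id, std::uintptr_t buddy_addr) {
    std::size_t slot = block_slot[buddy_id];       // index stored in the block
    return slot < list_addr.size() && list_addr[slot] == buddy_addr;
}
```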

14 Performance (gridDim = 512, blockDim = 512, K20; speedups vs. the CUDA 5.0 device allocator)
- 256-byte alloc/free, single pass: 10.8 ms, 25.8x speedup
- 256-byte alloc: 10.48 ms, 682x speedup
- 256-byte free: 7.27 ms, 780x speedup
- Random-size alloc/free, 35 iterations, sizes within the lowest 2 layers: 65.8 ms, 81.7x speedup
- Random-size alloc/free, 35 iterations, full size range: 370.5 ms, 11.2x speedup

15 Discussion
- Warp-level group allocation
- Dynamically expanding free queue

16 Backup Slides

17 Slow atomicCAS() loop
The classic lock-free pop retries atomicCAS until it wins; under heavy contention every thread spins here, which is the overhead the design above avoids:

    node* ret = head;
    node* now;
    do {
        now = ret;
        ret = (node*)atomicCAS((unsigned long long*)&head,
                               (unsigned long long)now,
                               (unsigned long long)now->next);
    } while (ret != now);
