Carnegie Mellon Dynamic Memory Allocation 15-213: Introduction to Computer Systems Monday, March 30th, 2015 1

Carnegie Mellon Today 2  Overview/Lecture Review  Macros and Inline Functions  Malloc Lab  Your mm_checkheap function

Carnegie Mellon  Overview/Lecture Review  Macros and Inline Functions  Malloc Lab  Your mm_checkheap function Today 3

Carnegie Mellon Dynamic Memory Allocation  Programmers use dynamic memory allocators (such as malloc) to acquire VM at run time.  For data structures whose size is only known at runtime  Dynamic memory allocators manage an area of process virtual memory known as the heap. 4
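For example (a minimal sketch): a program whose array size is only known at run time asks the allocator for the space and later returns it.

#include <stdlib.h>

/* n is only known at run time, so the array must come from the heap */
int *make_array(size_t n) {
    int *p = malloc(n * sizeof(*p));   /* ask the allocator for n ints */
    if (p == NULL)
        return NULL;                   /* allocation failed */
    return p;
}
/* ... later, free(p) hands the block back to the allocator for reuse */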

Carnegie Mellon Dynamic Memory Allocation: Example  How do we know where to put the next block? 5

Carnegie Mellon Keeping Track of Free Blocks  Method 1: Implicit list using length—links all blocks  Method 2: Explicit list among the free blocks using pointers  Method 3: Segregated free list  Different free lists for free blocks of different size classes 6

Carnegie Mellon Method 1: Implicit List  For each block, we need both size and allocation status Could store this information in two words: wasteful!  Standard trick If blocks are aligned, some low-order address bits are always 0 Instead of storing an always-0 bit, use it as an allocated/free flag 7
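A minimal sketch of this trick, assuming 8-byte alignment so the low 3 bits of every block size are zero. The macro names follow the CS:APP textbook convention, later sketches in this recitation reuse them, and a block pointer p is taken to point at the block's header.

#define WSIZE 4                               /* header/footer word size (bytes) */
#define PACK(size, alloc)  ((size) | (alloc)) /* pack a size and an allocated bit */
#define GET(p)             (*(unsigned int *)(p))         /* read a header/footer word */
#define PUT(p, val)        (*(unsigned int *)(p) = (val)) /* write a header/footer word */
#define GET_SIZE(p)        (GET(p) & ~0x7)    /* block size: mask off the low 3 bits */
#define GET_ALLOC(p)       (GET(p) & 0x1)     /* allocated flag: the low bit */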

Carnegie Mellon Method 2: Explicit List  Maintain list(s) of free blocks instead of all blocks The “next” free block could be anywhere So we need to store forward/back pointers, not just sizes Still need boundary tags for coalescing  Luckily we track only free blocks, so we can use payload area 8
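One common way to lay this out in C is to overlay a struct on the block while it is free (a sketch; the field names are illustrative):

/* A free block: the payload area is reused for the list pointers, so the
   pointers cost nothing extra. The boundary-tag footer sits at the end. */
struct free_block {
    size_t header;             /* block size with the low bit as the allocated flag */
    struct free_block *next;   /* next free block: may be anywhere in the heap */
    struct free_block *prev;   /* previous free block */
    /* ... rest of the (unused) payload, then the footer ... */
};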

Carnegie Mellon Method 2: Explicit Free Lists  Logically…  But physically… 9

Carnegie Mellon Method 3: Segregated List  Each size class of blocks has its own free list  Small sizes: often a separate list for each block size  Larger sizes: one class for each power-of-two size range 10
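A sketch of mapping a block size to a size-class index under such a scheme (NUM_CLASSES and the class boundaries are illustrative design choices, not fixed by the lab):

#define NUM_CLASSES 16   /* hypothetical number of segregated free lists */

/* Map a block size (>= 16 and a multiple of 8) to a free-list index:
   one list per small size, then one list per power-of-two size range. */
static int size_class(size_t size) {
    if (size <= 72)
        return (int)((size - 16) / 8);     /* classes 0..7: 16, 24, ..., 72 bytes */
    int c = 8;                             /* class 8: (72, 128] */
    for (size_t bound = 128; size > bound && c < NUM_CLASSES - 1; bound <<= 1)
        c++;                               /* classes 9..: (128, 256], (256, 512], ... */
    return c;
}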

Carnegie Mellon Finding a Free Block  First fit: Search list from beginning, choose first free block that fits Can take linear time in total number of blocks (allocated and free)  In practice it can cause “splinters” at beginning of list Many small free blocks left at beginning 11
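A first-fit search over an implicit list might look like this (a sketch using the header macros from earlier; heap_start and heap_end are assumed globals that bracket the heap):

extern char *heap_start, *heap_end;   /* assumed globals marking the heap bounds */

/* Return the header of the first free block that can hold asize bytes,
   or NULL if nothing fits and the heap must be extended. */
static void *find_fit_first(size_t asize) {
    for (char *p = heap_start; p < heap_end; p += GET_SIZE(p)) {
        if (!GET_ALLOC(p) && GET_SIZE(p) >= asize)
            return p;                 /* first block that fits wins */
    }
    return NULL;
}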

Carnegie Mellon Finding a Free Block  Next fit: Like first fit, but search list starting where previous search finished Should often be faster than first fit: avoids re-scanning unhelpful blocks Some research suggests that fragmentation is worse 12

Carnegie Mellon Finding a Free Block  Best fit: Search the list, choose the best free block: fits, with fewest bytes left over Keeps fragments small: usually improves memory utilization Will typically run slower than first fit  If the block we find is larger than we need, split it 13

Carnegie Mellon Finding a Free Block  What happens if we can’t find a block?  Need to extend the heap  Use the brk() or sbrk() system calls In Malloc Lab, use mem_sbrk() sbrk(requested space) allocates space and returns pointer to start of new space sbrk(0) returns pointer to top of current heap  Use what you need, add the rest as a whole free block 14
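A sketch of extending the heap in Malloc Lab with mem_sbrk and the earlier macros (real code would also maintain prologue/epilogue blocks and then coalesce the new block with a free predecessor):

extern void *mem_sbrk(int incr);   /* Malloc Lab's wrapper around sbrk */

/* Grow the heap by at least `bytes` and format the new region as one
   free block with a header and a matching boundary-tag footer. */
static void *extend_heap(size_t bytes) {
    size_t size = (bytes + 7) & ~(size_t)0x7;   /* round up to an 8-byte multiple */
    char *p = mem_sbrk((int)size);
    if (p == (void *)-1)
        return NULL;                            /* mem_sbrk failed: out of memory */
    PUT(p, PACK(size, 0));                      /* header of the new free block */
    PUT(p + size - WSIZE, PACK(size, 0));       /* footer of the new free block */
    return p;
}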

Carnegie Mellon Splitting a Block  What happens if the block we find is too big?  Split it into the portion we need and the leftover free space  For implicit lists: correct the block size  For explicit lists: correct the previous and next pointers  For segregated lists: determine the correct size list Insert with insertion policy (more on this later) 15
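For an implicit list, splitting might look like this (a sketch reusing the earlier macros; MIN_BLOCK is an illustrative minimum block size, and a real allocator would also write the matching footers):

#define MIN_BLOCK 16   /* hypothetical smallest block worth keeping */

/* Allocate asize bytes inside the free block whose header is at p,
   splitting off the remainder when it is large enough to stand alone. */
static void place(char *p, size_t asize) {
    size_t csize = GET_SIZE(p);
    if (csize - asize >= MIN_BLOCK) {
        PUT(p, PACK(asize, 1));                  /* front part: allocated */
        PUT(p + asize, PACK(csize - asize, 0));  /* leftover: a new free block */
    } else {
        PUT(p, PACK(csize, 1));                  /* too small to split: use it all */
    }
}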

Carnegie Mellon Freeing Blocks  Simplest implementation:  Need only clear the “allocated” flag: void free_block(size_t *p) { *p = *p & ~1; } /* clear the low-order allocated bit */  But this can lead to external fragmentation: There is enough free space, but the allocator can’t find it 16

Carnegie Mellon Freeing Blocks  Need to combine blocks nearby in memory (coalescing)  For implicit lists: Simply look backwards and forwards using block sizes  For explicit lists: Look backwards/forwards using block sizes, not next/prev pointers  For segregated lists: use the size of new block to determine proper list Insert back into list based on insertion policy (LIFO, FIFO) 17
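For an implicit list with boundary tags, the merge might look like this (a sketch reusing the earlier macros; it assumes prologue/epilogue blocks exist so the neighbor lookups never run off the heap):

/* Merge the freed block whose header is at p with any free physical
   neighbors. Each block's footer mirrors its header, so the previous
   block's size and allocated bit can be read at p - WSIZE. */
static char *coalesce(char *p) {
    size_t size = GET_SIZE(p);

    char *next = p + size;
    if (!GET_ALLOC(next))                        /* next block free: absorb it */
        size += GET_SIZE(next);

    if (!GET_ALLOC(p - WSIZE)) {                 /* previous block free */
        size_t prev_size = GET_SIZE(p - WSIZE);  /* its size, read from its footer */
        p -= prev_size;                          /* merged block starts at its header */
        size += prev_size;
    }

    PUT(p, PACK(size, 0));                       /* one big free block: header... */
    PUT(p + size - WSIZE, PACK(size, 0));        /* ...and matching footer */
    return p;
}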

Carnegie Mellon Freeing Blocks  Graphical depiction (both implicit & explicit): (these are physical mappings) 18

Carnegie Mellon Insertion Policy  Where in the free list do you put a newly freed block?  LIFO (last-in-first-out) policy Insert freed block at the beginning of the free list Pro: simple and constant time Con: studies suggest fragmentation is worse than address ordered  Address-ordered policy Insert freed blocks so that free list blocks are always in address order: addr(prev) < addr(curr) < addr(next) Con: requires search Pro: fragmentation is lower than LIFO 19
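A LIFO insert into the doubly linked explicit list from the earlier sketch is constant time (free_list_head is an assumed global):

static struct free_block *free_list_head;   /* assumed global: head of the free list */

/* LIFO policy: push the newly freed block onto the front of the list. */
static void insert_free_block(struct free_block *b) {
    b->prev = NULL;
    b->next = free_list_head;
    if (free_list_head != NULL)
        free_list_head->prev = b;
    free_list_head = b;
}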

Carnegie Mellon  Overview/Lecture Review  Macros and Inline Functions  Malloc Lab  Your mm_checkheap function Today 3

Carnegie Mellon Macros  The C preprocessor expands macros during the preprocessing step of compilation  Use #define to avoid magic numbers: #define TRIALS 100  Function-like macros: short and heavily used code snippets #define GET_BYTE_ONE(x) ((x) & 0xff) #define GET_BYTE_TWO(x) (((x) >> 8) & 0xff)  Inline functions Ask the compiler to insert the complete body of the function in every place that the function is called (simply replacing the call with the code) inline int fun(int a, int b) Requests the compiler to insert the assembly of fun wherever a call to fun is made  Both are useful for Malloc Lab (think pointer arithmetic!) 21
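For Malloc Lab pointer arithmetic, the same computation can be written either way; a sketch using the header macros from earlier (NEXT_HDR and next_hdr are illustrative names):

/* Macro version: pure textual substitution, no type checking on p */
#define NEXT_HDR(p)  ((char *)(p) + GET_SIZE(p))

/* Inline-function version of the same arithmetic: the argument is
   type-checked, yet the call is still (usually) expanded in place */
static inline char *next_hdr(char *p) {
    return p + GET_SIZE(p);
}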

Carnegie Mellon contracts.h  REQUIRES(condition) //a precondition  ENSURES(condition) //a postcondition  ASSERT(condition) //an assertion  Using these will improve readability  Using these will reduce debugging time  Used properly they will NOT decrease performance  It is HIGHLY RECOMMENDED to use contracts in your code! 22
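For example, the place() sketch from earlier could state its contract like this (the exact conditions are whatever invariants your design promises):

#include "contracts.h"   /* provides REQUIRES, ENSURES, ASSERT */

/* place() as before, now with its contract made explicit. In a debug
   build each condition is checked; otherwise the macros compile away. */
static void place(char *p, size_t asize) {
    REQUIRES(!GET_ALLOC(p));            /* must be handed a free block */
    REQUIRES(asize <= GET_SIZE(p));     /* the request must actually fit */

    /* ... splitting logic exactly as in the earlier sketch ... */

    ENSURES(GET_ALLOC(p));              /* the block is marked allocated on exit */
}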

Carnegie Mellon More easy debugging tips  Using printf, assert, etc. only in debug mode:
#define DEBUG
#ifdef DEBUG
# define dbg_printf(...) printf(__VA_ARGS__)
# define dbg_assert(...) assert(__VA_ARGS__)
# define dbg(...) __VA_ARGS__
#else
# define dbg_printf(...)
# define dbg_assert(...)
# define dbg(...)
#endif
23
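Typical use (a sketch; in_heap is a hypothetical helper that checks a pointer lies inside the heap): sprinkle these through the hot paths, then remove #define DEBUG before measuring throughput so they cost nothing.

void free(void *ptr) {
    dbg_printf("free(%p)\n", ptr);              /* compiled in only when DEBUG is defined */
    dbg_assert(ptr == NULL || in_heap(ptr));    /* likewise checked only in debug builds */
    /* ... actual free logic ... */
}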

Carnegie Mellon  Overview/Lecture Review  Macros and Inline Functions  Malloc Lab (it's here!)  Your mm_checkheap function Today 3

Carnegie Mellon Malloclab  You need to implement the following functions: int mm_init(void); void *malloc(size_t size); void free(void *ptr); void *realloc(void *ptr, size_t size); void *calloc(size_t n, size_t size); void mm_checkheap(int verbose);  Scored on space efficiency and throughput  Cannot call system memory functions  Use helper functions (as static/inline functions)!!  May want to consider practicing version control 25

Carnegie Mellon Malloclab  Inline Essentially copies function code into the location of each function call Avoids overhead of stack discipline/function call (once assembled) Can often be used in place of macros Strong type checking and input variable handling, unlike macros  Static Resides in a single place in memory Limits the scope of the function to the current translation unit (file) Should be used for helper functions only called locally Avoids polluting the namespace  Static inline Not surprisingly, can be used together 26
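For example (a sketch), a small helper that only matters inside mm.c is a natural static inline:

/* Visible only inside this file, and (usually) expanded at each call site. */
static inline size_t align8(size_t size) {
    return (size + 7) & ~(size_t)0x7;   /* round up to the 8-byte alignment */
}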

Carnegie Mellon  Overview/Lecture Review  Macros and Inline Functions  Malloc Lab  Your mm_checkheap function Today 3

Carnegie Mellon Heap Checker  int mm_checkheap(int verbose) is critical for debugging Write this early and update it when you change your free list implementation It should ensure that you haven’t lost control of any part of heap memory (everything should either be allocated or listed)  Look over lecture notes on garbage collection (particularly mark & sweep)  This function is meant to be correct, not efficient. 28

Carnegie Mellon Heap Checker  Once you’ve settled on a design, write the heap checker that checks all the invariants of the particular design  The checking should be detailed enough that the heap check passes if and only if the heap is truly well-formed  Call the heap checker before/after the major operations whenever the heap should be well-formed  Define macros to enable/disable it conveniently (e.g. a dbg-style wrapper like the one shown earlier) 29

Heap Checker  The mm_checkheap function takes a single integer argument that you can use any way you want  One very useful technique is to use this argument to pass in the line number of the call site: mm_checkheap(__LINE__);  If mm_checkheap detects a problem with the heap, it can print the line number where mm_checkheap was called, which allows you to call mm_checkheap at numerous places in your code while you are debugging

Carnegie Mellon Invariants (non-exhaustive)  Block level: Header and footer match Payload area is aligned  List level: Next/prev pointers in consecutive free blocks are consistent Free list contains no allocated blocks All free blocks are in the free list No contiguous free blocks in memory (unless you defer coalescing) No cycles in the list (unless you use circular lists) Segregated list contains only blocks that belong to the size class  Heap level: Prologue/Epilogue blocks are at specific locations (e.g. heap boundaries) and have special size/alloc fields All blocks stay in between the heap boundaries  And your own invariants (e.g. address order) 30
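A skeleton that checks a few of these invariants might look like this. It is only a sketch: it reuses the assumed globals and macros from the earlier sketches, uses its argument as the caller's line number as suggested above, and returns nonzero when the heap looks well-formed; your checker should test the invariants of your own design.

#include <stdio.h>

int mm_checkheap(int lineno) {
    size_t free_in_heap = 0, free_in_list = 0;

    /* Block/heap level: walk every block by size. */
    for (char *p = heap_start; p < heap_end; p += GET_SIZE(p)) {
        if (GET_SIZE(p) == 0) {
            printf("line %d: zero-size block at %p\n", lineno, (void *)p);
            return 0;
        }
        if (!GET_ALLOC(p))
            free_in_heap++;
    }

    /* List level: next/prev pointers of consecutive free blocks are consistent. */
    for (struct free_block *b = free_list_head; b != NULL; b = b->next) {
        if (b->next != NULL && b->next->prev != b) {
            printf("line %d: next/prev mismatch at %p\n", lineno, (void *)b);
            return 0;
        }
        free_in_list++;
    }

    /* Every free block is in the free list, and vice versa. */
    if (free_in_heap != free_in_list) {
        printf("line %d: %zu free blocks in heap, %zu in free list\n",
               lineno, free_in_heap, free_in_list);
        return 0;
    }
    return 1;
}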

Carnegie Mellon Hare and Tortoise Algorithm  Detects cycles in linked lists  Set two pointers “hare” and “tortoise” to the beginning of the list  During each iteration, move the hare pointer forward two nodes and move the tortoise forward one node. If they are pointing to the same node after this, the list has a cycle.  If the tortoise reaches the end of the list, there are no cycles. 31
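On the explicit free list from the earlier sketches, the check might look like this:

/* Floyd's cycle detection: returns 1 if the free list contains a cycle. */
static int free_list_has_cycle(struct free_block *head) {
    struct free_block *tortoise = head;
    struct free_block *hare = head;
    while (hare != NULL && hare->next != NULL) {
        tortoise = tortoise->next;      /* advance one node */
        hare = hare->next->next;        /* advance two nodes */
        if (tortoise == hare)
            return 1;                   /* the pointers met: cycle */
    }
    return 0;                           /* hare reached the end: no cycle */
}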

Carnegie Mellon Asking for help  It can be hard for the TAs to debug your allocator, because this is a more open-ended lab  Before asking for help, ask yourself some questions: What part of which trace file triggers the error? Around the point of the error, what sequence of events do you expect? What part of the sequence already happened?  If you can’t answer, it’s a good idea to gather more information… How can you measure which step worked OK? printf, breakpoints, watchpoints… 32  Most importantly, make sure your mm_checkheap function is written and works before asking a TA for help We WILL ask to see it!

Carnegie Mellon Debugging  Valgrind! Powerful debugging and analysis technique Rewrites the text section of the executable object file Can detect all of the errors a debugging malloc can detect Can also check each individual reference at runtime Bad pointers Overwriting Referencing outside of allocated block  GDB You know how to use this (hopefully) 33  Malloc wrappers containing error-checking code (from lecture) Be careful with these, they don’t test for all possible errors

Carnegie Mellon Beyond Debugging: Error prevention  It is hard to write code that is completely correct the first time, but certain practices can make your code less error-prone  Plan what each function does before writing code Draw pictures when linked lists are involved Consider edge cases when the block is at the start/end of the list  Document your code as you write it 34

Error prevention: Most common errors  Dereferencing bad pointers  Reading uninitialized memory  Overwriting memory  Referencing nonexistent variables  Freeing blocks multiple times  Referencing freed blocks  Failing to free blocks  Be careful with your pointer arithmetic

Carnegie Mellon Questions?  Good luck! :D 35