
1 Compiler Construction (CS-636) Muhammad Bilal Bashir UIIT, Rawalpindi.





2 Outline
1. Storage Organization
2. Heap Management
   1. The Memory Manager
   2. The Memory Hierarchy of a Computer
   3. Locality in Programs
   4. Reducing Fragmentation
   5. Manual Deallocation Requests
3. Introduction to Garbage Collection
4. Summary

3 Run-Time Environments (Lectures 23-24)

4 Assignment #3
Question: Explain the concept of 'Dependency Graphs', including:
 What is a dependency graph, and where is it used?
 Provide TWO examples to show how dependency graphs are constructed.
Due Date: 22nd January, 2015
Note: The assignment must be hand-written and must be submitted in class. Late submissions will not be accepted.

5 Practical Work
Consider the following function:

int factorial( int n ) {
    int result = 1;
    if( n > 1 ) {
        result = n * factorial( n-1 );
    }
    return result;
}

int main() {
    factorial( 5 );
    return 0;
}

Draw the activation tree and the control-stack view at the call factorial( 2 ).

6 Storage Organization
A compiler for a language like C++ on an operating system like Linux might subdivide memory as shown in the figure.
 Run-time storage comes in blocks of contiguous bytes
 A byte is the smallest unit of addressable memory
 A byte is eight bits; four bytes typically form a machine word
 Multibyte objects are stored in consecutive bytes

[Figure: memory layout from low to high addresses: Code | Static | Heap | Free Memory | Stack]

7 Storage Organization (Continued)
The size of the target code is fixed at compile time, and it is placed in a statically determined area called Code.
The Static area contains global constants and information to support garbage collection, etc.; the size of this data must also be known at compile time.
To maximize the utilization of space at run time, the other two areas, Stack and Heap, grow toward each other from opposite ends of the remaining address space.

8 Heap Management
The heap is the portion of the store that is used for data that lives indefinitely, or until the program explicitly deletes it.
Unlike local variables, many languages let us create objects whose existence is not tied to the procedure activation that creates them.
Java gives the programmer the operator new to create objects that may be passed from procedure to procedure, so they continue to exist after the procedure that created them is gone.
 Such objects are created on the heap

9 Heap Management (Continued)
The process of finding spaces within the heap that are no longer used by the program is called garbage collection.
Such spaces can be reallocated to house other data items.
For languages like Java, it is the garbage collector that deallocates memory.

10 The Memory Manager
The memory manager keeps track of all the free space in the heap at all times and performs two functions:
1. Allocation
 When a program requests memory for a variable or object, the memory manager provides a chunk of contiguous heap memory of the requested size
 If possible, it satisfies an allocation request using free space in the heap
 If no chunk of the needed size is available, it seeks to increase the heap storage space by getting consecutive bytes of virtual memory from the operating system
 If space is exhausted, the requesting program is informed

11 The Memory Manager (Continued)
2. Deallocation
 The memory manager returns deallocated space to the pool of free space, so it can reuse the space to satisfy other allocation requests
 Memory managers typically do not return memory to the operating system, even if the program's heap usage drops
Memory management would be simpler if (a) all allocation requests were for chunks of the same size and (b) storage were released predictably. In most languages, neither (a) nor (b) holds.

12 The Memory Manager (Continued)
The memory manager must be prepared to service, in any order, allocation and deallocation requests of any size, ranging from one byte to as large as the program's entire address space.
The desired properties of a memory manager are:
 Space Efficiency: The memory manager should minimize the total heap space needed by a program. Space efficiency is achieved by minimizing "fragmentation".
 Program Efficiency: The memory manager should make good use of memory so programs can run faster. The time taken to execute an instruction can vary widely depending on where objects are placed in memory.

13 The Memory Manager (Continued)
Properties of a memory manager (continued):
 Low Overhead: In many programs, allocations and deallocations are frequent operations, so it is important that these operations be as efficient as possible. That is, we wish to minimize the overhead: the fraction of execution time spent performing allocation and deallocation.

14 The Memory Hierarchy of a Computer
Memory management and compiler optimization must be done with an awareness of how memory behaves.
The efficiency of a program is determined not only by how many instructions are executed but also by how long each instruction takes to execute.
The time taken to execute an instruction can vary because the time taken to access different parts of memory ranges from nanoseconds to milliseconds.
We can build storage that is small and fast, or large and slow, but not storage that is both large and fast.

15 The Memory Hierarchy of a Computer (Continued)
All modern computers arrange their storage as a memory hierarchy.
A memory hierarchy consists of a series of storage elements, with the smaller, faster ones "closer" to the processor and the larger, slower ones further away.
With each memory access, the machine searches each level of the hierarchy in succession, starting with the lowest level (the one closest to the processor), until it locates the data.

16 The Memory Hierarchy of a Computer (Continued)
Between the cache and main memory, data is transferred in blocks known as cache lines, which are typically 32 to 256 bytes long.
Between main memory and virtual memory (on disk), data is transferred in blocks known as pages, typically between 4K and 64K bytes in size.

17 Locality in Programs
Most programs exhibit a high degree of locality; that is, they spend most of their time executing a small fraction of the code and touching only a small fraction of the data.
We say that a program has temporal locality if the memory locations it accesses are likely to be accessed again within a short period of time.
We say that a program has spatial locality if memory locations close to an accessed location are likely also to be accessed within a short period of time.

18 Locality in Programs (Continued)
The conventional wisdom is that programs spend 90% of their time executing 10% of the code. Why?
 Programs often contain many instructions that are never executed. Programs built with components and libraries use only a small fraction of the provided functionality. Also, as requirements change and programs evolve, legacy systems often contain many instructions that are no longer used.
 Only a small fraction of the code that could be invoked is actually executed in a typical run of the program. For example, instructions to handle incorrect inputs and exceptional cases are seldom invoked in any particular run.

19 Locality in Programs (Continued)
Locality allows us to take advantage of the memory hierarchy.
By placing the most common instructions and data in the fast-but-small storage, we can lower the average memory-access time of a program.
In general, it is not possible to tell in advance which parts of the code will be accessed the most, and even if we knew, we could not fit all of them in fast storage.
 We therefore need to manage the contents of fast storage dynamically and use it to hold instructions that are likely to be used heavily in the near future

20 Locality in Programs (Continued)
Optimization using the memory hierarchy:
 The policy of keeping the most recently used instructions in the cache tends to work well
 When an instruction is executed, there is a high probability that the next instruction will also be executed
 One effective technique to improve the spatial locality of instructions is to have the compiler place basic blocks that are likely to follow each other on the same page, or even on the same cache line
 The temporal or spatial locality of data accesses can also be improved by changing the order of computations: we can bring data from a slow level of memory to a faster level once and perform all the computations on it at the same time

21 Reducing Fragmentation
At the beginning of program execution, the heap is one contiguous unit of free space.
As the program allocates and deallocates memory, this space is broken up into free and used chunks, and the free chunks need not reside in a contiguous area of the heap.
We refer to the free chunks of memory as holes.
With each allocation request, the memory manager must place the requested chunk of memory into a large-enough hole.

22 Reducing Fragmentation (Continued)
With each deallocation request, the freed chunks of memory are added back to the pool of free space.
We coalesce adjacent holes into larger holes; otherwise, the holes can only get smaller over time.
If we are not careful, the memory may end up fragmented, consisting of a large number of small, noncontiguous holes.
It is then possible that no hole is large enough to satisfy a future request, even though there may be sufficient aggregate free space.

23 Reducing Fragmentation (Continued)
Best-Fit and Next-Fit Object Placement
 We reduce fragmentation by controlling how the memory manager places new objects in the heap
 It has been found empirically that a good strategy for minimizing fragmentation in real-life programs is to allocate the requested memory in the smallest available hole that is large enough
 This best-fit algorithm tends to spare the large holes for subsequent, larger requests
 An alternative, called first-fit, where an object is placed in the first hole in which it fits, takes less time to place objects but has been found inferior to best-fit in overall performance

24 Reducing Fragmentation (Continued)
Managing and Coalescing Free Space
 When an object is deallocated manually, the memory manager must make its chunk free, so it can be allocated again
 In some circumstances, it may also be possible to combine (coalesce) that chunk with adjacent free chunks of the heap, to form a larger chunk
 We combine chunks because a large chunk can always do the work of small chunks of equal total size, but many small chunks cannot hold one large object

25 Manual Deallocation Requests
In languages like C and C++, the programmer must explicitly arrange for the deallocation of data.
Ideally, any storage that will no longer be accessed should be deleted.
Conversely, any storage that may still be referenced must not be deleted.
Unfortunately, it is hard to enforce both of these properties.
Next we shall see what problems may arise if programmers fail to deallocate storage properly.

26 Manual Deallocation Requests (Continued)
Problems with Manual Deallocation
 Manual memory management is error-prone
 Failing ever to delete data that can no longer be referenced is a memory-leak error
 Referencing deleted data is a dangling-pointer-dereference error
 It is hard for the programmer to tell whether a program will never refer to some storage in the future; hence this first mistake is common
 Memory leaks may slow down the execution of a program due to increased memory usage, but they do not affect program correctness

27 Manual Deallocation Requests (Continued)
Problems with Manual Deallocation (Continued)
 Many programs can tolerate memory leaks, especially if the leakage is slow, but for long-running programs like operating systems, it is important that they do not have leaks
 Automatic garbage collection gets rid of memory leaks by deallocating all the garbage
 Being overly zealous about deleting objects can lead to even worse problems than memory leaks; this is the second common mistake
 Pointers to storage that has been deallocated are known as dangling pointers

28 Manual Deallocation Requests (Continued)
Problems with Manual Deallocation (Continued)
 Once the freed space has been reallocated to a new variable, any read, write, or deallocation via the dangling pointer can produce seemingly random effects
There exist some workarounds for the issues that can arise from manual deallocation, but modern languages avoid them altogether by including automatic garbage collection.

29 Introduction to Garbage Collection
Data that cannot be referenced is generally known as garbage.
Many high-level programming languages remove the burden of manual memory management from the programmer by offering automatic garbage collection.
The garbage collector deallocates unreachable data.
Garbage collection dates back to the initial implementation of Lisp in 1958. Other languages that offer garbage collection include Java, Perl, ML, Modula-3, Prolog, and Smalltalk.

30 Summary
Any Questions?




