Theory of Compilation 236360 Erez Petrank Lecture 9: Runtime (part 2); object oriented issues 1.


1 Theory of Compilation 236360 Erez Petrank Lecture 9: Runtime (part 2); object oriented issues 1

2 Runtime Environment Code generated by the compiler to handle services that the programmer does not implement directly. For example: file handling, memory management, synchronization (creating threads, implementing locks, etc.), the runtime stack (activation records), etc. We discussed activation records and will next present an introduction to memory management.

3 Dynamic Memory Management: Introduction There is a course about this topic: 236780 “Algorithms for dynamic memory management” 3

4 Local and Dynamic Variables Local variables are defined in a method and are allocated on the runtime stack, as explained in the first part of this lecture. Sometimes there is a need for allocation during the run. – E.g., when managing a linked list whose size is not predetermined. This is dynamic allocation. In C, “malloc” allocates a block of memory and “free” says that the program will not use this space anymore. ptr = malloc(256); /* use ptr */ free(ptr);
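As a concrete sketch of the malloc/free discipline just described — the `node` type and the `push`/`free_list` helpers are illustrative names, not from the lecture:

```c
#include <stdlib.h>

/* A list whose length is not known at compile time, so nodes are
   allocated on the heap rather than the runtime stack. */
struct node {
    int value;
    struct node *next;
};

/* Allocate a node on the heap and prepend it to the list.
   Returns the unchanged list if allocation fails. */
struct node *push(struct node *head, int value) {
    struct node *n = malloc(sizeof *n);
    if (n == NULL) return head;       /* out of memory */
    n->value = value;
    n->next = head;
    return n;
}

/* Manually release every node. After this call the pointers are
   dangling and must not be used again -- exactly the kind of bug
   automatic memory management avoids. */
void free_list(struct node *head) {
    while (head != NULL) {
        struct node *next = head->next;
        free(head);
        head = next;
    }
}
```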

5 Dynamic Memory Allocation In Java, “new” allocates an object of a given class. – President obama = new President(); But there is no instruction for manually deleting the object. It is automatically reclaimed by a garbage collector when the program “does not need it” anymore. Course c = new Course(236360); c.room = “TAUB 2”; Faculty.add(c);

6 Manual vs. Automatic Memory Management Manual memory management lets the programmer decide when objects are deleted. A memory manager that lets a garbage collector delete objects is called automatic. Manual memory management creates severe debugging problems: – memory leaks, – dangling pointers. In large projects where objects are shared between various components, it is sometimes difficult to tell when an object is not needed anymore. Considered the BIG debugging problem of the 80’s. What is the main debugging problem today?

7 Automatic Memory Reclamation When the system “knows” the object will not be used anymore, it reclaims its space. Telling whether an object will be used after a given line of code is undecidable. Therefore, a conservative approximation is used: an object is reclaimed when the program has “no way of accessing it”. Formally, when it is unreachable by a path of pointers from the “root” pointers, to which the program has direct access. – Local variables, pointers on the stack, global (class) pointers, JNI pointers, etc. It is also possible to use code analysis to be more accurate sometimes.

8 What’s good about automatic “garbage collection”? © Erez Petrank8 Software engineering: – Relieves users of the book-keeping burden. – Stronger reliability, fewer bugs, faster debugging. – Code understandable and reliable. (Less interaction between modules.) Security (Java): – Program never gets a pointer to “play with”.

9 Importance Memory is the bottleneck in modern computation. – Time & energy (and space). Optimal allocation (even if all accesses are known in advance to the allocator) is NP-complete, and even hard to approximate. Must be done right for a program to run efficiently. Must be done right to ensure reliability.

10 GC and languages © Erez Petrank10 Sometimes it’s built in: – LISP, Java, C#. – The user cannot free an object. Sometimes it’s an added feature: – C, C++. – User can choose to free objects or not. The collector frees all objects not freed by the user. Most modern languages are supported by garbage collection.

11 Most modern languages rely on GC © Erez Petrank 11 Source: “The Garbage Collection Handbook” by Richard Jones, Antony Hosking, and Eliot Moss.

12 What’s bad about automatic “garbage collection”? © Erez Petrank 12 It has a cost: – Old Lisp systems: 40%. – Today’s Java programs (if the collection is done “right”): 5-15%. Considered a major factor determining program efficiency. Techniques have evolved since the 60’s. We will only survey basic techniques.

13 Garbage Collection Efficiency Overall collection time (percentage of running time). Pauses in program run. Space overhead. Cache Locality (efficiency and energy). 13

14 Three classical algorithms Reference counting Mark and sweep (and mark-compact) Copying. The last two are also called tracing algorithms because they go over (trace) all reachable objects. 14

15 Reference counting [Collins 1960] © Erez Petrank15 Recall that we would like to know if an object is reachable from the roots. Associate a reference count field with each object: how many pointers reference this object. When nothing points to an object, it can be deleted. Very simple, used in many systems.

16 Basic Reference Counting © Erez Petrank 16 Each object has an RC field; new objects get o.RC := 1. When p, which points to o1, is modified to point to o2, we execute: o1.RC--, o2.RC++. If then o1.RC == 0: – delete o1; – decrement o.RC for every “child” o of o1; – recursively delete objects whose RC is decremented to 0.
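The pointer-update rule above can be sketched in C. Everything here is an illustrative assumption rather than the lecture's code: the `Obj` layout with two fixed child slots, the `assign` write barrier, and the `freed` counter standing in for actually returning space.

```c
#include <stddef.h>

/* A reference-counted object: an RC field plus two child pointers
   (a stand-in for arbitrary pointer fields). */
typedef struct Obj {
    int rc;
    struct Obj *child[2];
} Obj;

static int freed;                  /* counts reclaimed objects, for illustration */

static void dec_rc(Obj *o);

/* Delete o: drop the counts of its children, recursively reclaiming
   any child whose count reaches zero. */
static void delete_obj(Obj *o) {
    for (int i = 0; i < 2; i++)
        if (o->child[i]) dec_rc(o->child[i]);
    freed++;                       /* a real system would recycle o's space here */
}

static void dec_rc(Obj *o) {
    if (--o->rc == 0) delete_obj(o);
}

/* The write barrier, run on every pointer update *p = o2:
   o2 gains a reference, the old target o1 loses one. */
static void assign(Obj **p, Obj *o2) {
    Obj *o1 = *p;
    if (o2) o2->rc++;
    *p = o2;
    if (o1) dec_rc(o1);
}
```

Note how a cycle of objects assigned to each other would keep every count above zero even when unreachable — the problem the next slide addresses.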

17 A Problem: Cycles © Erez Petrank17 The Reference counting algorithm does not reclaim cycles! Solution 1: ignore cycles, they do not appear frequently in modern programs. Solution 2: run tracing algorithms (that can reclaim cycles) infrequently. Solution 3: designated algorithms for cycle collection. Another problem for the naïve algorithm: requires a lot of synchronization in parallel programs. Advanced versions solve that.

18 The Mark-and-Sweep Algorithm [McCarthy 1960] © Erez Petrank18 Mark phase: – Start from roots and traverse all objects reachable by a path of pointers. – Mark all traversed objects. Sweep phase: – Go over all objects in the heap. – Reclaim objects that are not marked.

19 The Mark-Sweep algorithm © Erez Petrank 19 Traverse live objects & mark black. White objects can be reclaimed. (Figure: roots and registers pointing into the heap. Note! This is not the heap data structure!)

20 Triggering © Erez Petrank 20

New(A) =
  if free_list is empty
    mark_sweep()
  if free_list is empty
    return (“out-of-memory”)
  pointer = allocate(A)
  return (pointer)

Garbage collection is triggered by allocation.

21 Basic Algorithm © Erez Petrank 21

mark_sweep() =
  for Ptr in Roots
    mark(Ptr)
  sweep()

mark(Obj) =
  if mark_bit(Obj) == unmarked
    mark_bit(Obj) = marked
    for C in Children(Obj)
      mark(C)

sweep() =
  p = Heap_bottom
  while (p < Heap_top)
    if (mark_bit(p) == unmarked) then
      free(p)
    else
      mark_bit(p) = unmarked
    p = p + size(p)
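The pseudocode above can be mirrored by a runnable toy model. The fixed-size heap of `Cell` records with two child indices is an assumption made purely for illustration; a real heap holds variable-size objects and a free list.

```c
#define HEAP_SIZE 8

/* A toy heap: fixed-size objects with a mark bit and two children,
   mirroring mark()/sweep() from the slide. Indices stand in for
   pointers; -1 plays the role of null. */
typedef struct {
    int marked;
    int live;                     /* still allocated after sweep? */
    int child[2];                 /* indices into the heap, -1 = null */
} Cell;

static Cell heap[HEAP_SIZE];

/* Mark phase: traverse everything reachable, marking each object once
   (the mark bit also guarantees termination on cyclic graphs). */
static void mark(int i) {
    if (i < 0 || heap[i].marked) return;
    heap[i].marked = 1;
    for (int c = 0; c < 2; c++)
        mark(heap[i].child[c]);
}

/* Sweep phase: a full pass over the heap; unmarked objects are
   reclaimed, and mark bits are reset for the next collection. */
static void sweep(void) {
    for (int i = 0; i < HEAP_SIZE; i++) {
        if (!heap[i].marked)
            heap[i].live = 0;     /* would go back on the free list */
        heap[i].marked = 0;
    }
}

static void mark_sweep(const int *roots, int nroots) {
    for (int r = 0; r < nroots; r++)
        mark(roots[r]);
    sweep();
}
```

The cost split from the next slide is visible here: `mark` touches only reachable objects, while `sweep` walks the whole heap.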

22 Properties of Mark & Sweep © Erez Petrank 22 Most popular method today (in a more advanced form). Simple. Does not move objects, so the heap may fragment. Complexity: the mark phase is proportional to the number of live objects (the dominant phase); the sweep phase is proportional to the heap size. Termination: each pointer is traversed once. Various engineering tricks are used to improve performance.

23 Mark-Compact During the run objects are allocated and reclaimed. Gradually, the heap gets fragmented. When space is too fragmented to allocate, a compaction algorithm is used: move all live objects to the beginning of the heap and update all pointers to reference the new locations. Compaction is considered very costly and we usually attempt to run it infrequently, or only partially.

24 An Example: The Compressor A simplistic presentation of the Compressor: Go over the heap and compute for each live object where it moves to – to the address that is the sum of the live space before it in the heap. – Save the new locations in a separate table. Go over the heap again and for each object: – move it to its new location, – update all its pointers. Why can’t we do it all in a single heap pass? (In the full algorithm: a succinct table, a fast first pass, and parallelization.)
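The first pass described above — each live object's new address is the sum of the live space before it — is a prefix sum. The parallel `size`/`live` arrays below are a hypothetical stand-in for walking a real heap:

```c
/* First pass of a compactor: compute forwarding addresses.
   size[i] is object i's size, live[i] its liveness; new_addr[i]
   receives the address object i will move to. */
static void compute_new_addresses(const int *size, const int *live,
                                  int n, int *new_addr) {
    int next = 0;                 /* running sum of live space so far */
    for (int i = 0; i < n; i++) {
        new_addr[i] = next;       /* dead objects get a value too, unused */
        if (live[i])
            next += size[i];
    }
}
```

This also answers the slide's question: a single pass cannot both move objects and update pointers, because a pointer may reference an object later in the heap whose new address is not yet known — hence the separate table.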

25 Mark Compact Important parameters of a compaction algorithm: – Keep order of objects? – Use extra space for compactor data structures? – How many heap passes? – Can it run in parallel on a multi-processor? We do not elaborate in this intro. 25

26 Copying garbage collection © Erez Petrank 26 The heap is partitioned into two parts. Part 1 takes all allocations; Part 2 is reserved. During GC, the collector traces all reachable objects and copies them to the reserved part. After copying, the parts’ roles are reversed: allocation activity goes to Part 2, which was previously reserved; Part 1, which was active, is reserved till the next collection.
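A minimal sketch of the copying step, assuming a toy heap of indexed objects (all names here are illustrative, not the lecture's). The forwarding field ensures an object referenced from two places is copied only once:

```c
#define SPACE 16

/* Two semispaces of toy objects; indices stand in for pointers,
   -1 plays the role of null. */
typedef struct {
    int forwarded;                /* -1, or index of the copy in to-space */
    int child[2];
} Box;

static Box from[SPACE], to[SPACE];
static int to_top;                /* bump-pointer allocator for to-space */

/* Copy the object graph rooted at from-space index i into to-space,
   returning its new index. Already-copied objects are recognized by
   their forwarding index and shared, preserving the graph's shape. */
static int copy(int i) {
    if (i < 0) return -1;
    if (from[i].forwarded >= 0)
        return from[i].forwarded; /* copied earlier: reuse the copy */
    int j = to_top++;             /* allocate by bumping a pointer */
    from[i].forwarded = j;
    to[j].forwarded = -1;
    for (int c = 0; c < 2; c++)
        to[j].child[c] = copy(from[i].child[c]);
    return j;
}
```

After calling `copy` on every root, `to_top` marks the end of the compacted live data — the "compaction for free" noted on a later slide.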

27 Copying garbage collection © Erez Petrank 27 (Figure: Part I holds objects A, B, C, D, E; the roots point into Part I; Part II is empty.)

28 The collection copies… © Erez Petrank 28 (Figure: the reachable objects A and C are copied into Part II.)

29 Roots are updated; Part I reclaimed. © Erez Petrank 29 (Figure: the roots now point to the copies of A and C in Part II.)

30 Properties of Copying Collection © Erez Petrank30 Compaction for free Major disadvantage: half of the heap is not used. “Touch” only the live objects – Good when most objects are dead. – Usually most new objects are dead, and so there are methods that use a small space for young objects and collect this space using copying garbage collection.

31 A very simplistic comparison

                  Copying             Mark & sweep              Reference counting
Complexity        live objects        heap size (mark phase:    pointer updates +
                                      live objects)             dead objects
Space overhead    half heap wasted    bit/object +              count/object +
                                      stack for DFS             stack for DFS
Compaction        for free            additional work           additional work
Pause time        long                long                      mostly short
More issues       -                   -                         cycle collection

32 Modern Memory Management Considers standard program properties. Handle parallelism: – Stop the program and collect in parallel on all available processors. – Run collection concurrently with the program run. Cache consciousness. Real-time. 32

33 Some terms to be remembered © Erez Petrank33 Heap, objects Allocate, free (deallocate, delete, reclaim) Reachable, live, dead, unreachable Roots Reference counting, mark and sweep, copying, compaction, tracing algorithms Fragmentation

34 Recap Lexical analysis – regular expressions identify tokens (“words”) Syntax analysis – context-free grammars identify the structure of the program (“sentences”) Contextual (semantic) analysis – type checking defined via typing judgements – can be encoded via attribute grammars – Syntax directed translation Intermediate representation – many possible IRs; generation of intermediate representation; 3AC; backpatching Runtime: – services that are always there: function calls, memory management, threads, etc. 34

35 OO Issues 35

36 36 Representing Data at Runtime Source language types – int, boolean, string, object types Target language types – Single bytes, integers, address representation Compiler should map source types to some combination of target types – Implement source types using target types

37 37 Basic Types int, boolean, string, void Arithmetic operations – Addition, subtraction, multiplication, division, remainder Can be mapped directly to target language types and operations

38 38 Pointer Types Represent addresses of source language data structures Usually implemented as an unsigned integer Pointer dereferencing – retrieves pointed value May produce an error – Null pointer dereference – when is this error triggered?

39 Object Types An object is a record with built-in methods and some additional features. Basic operations: – field selection + read/write: computing the address of the field, dereferencing the address; – copying: copy block (not Java) or field-by-field copying; – method invocation: identifying the method to be called, calling it. How does it look at runtime?

40 Object Types class Foo { int x; int y; void rise() {…} void shine() {…} } Compile-time information: fields x, y; methods rise, shine. Runtime memory layout for an object of class Foo: DispatchVectorPtr, x, y.

41 Field Selection Runtime memory layout for an object of class Foo: DispatchVectorPtr, x, y.

Foo f; int q;
q = f.x;

MOV f, %EBX        # base pointer
MOV 4(%EBX), %EAX  # field offset from base pointer
MOV %EAX, q
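In C terms, the layout assumed by the MOV sequence above might look like the struct below. The names are illustrative; on the slide's 32-bit target the offset of x is 4, while on a 64-bit host it is 8 — in both cases, the size of the dispatch-vector pointer that precedes it.

```c
#include <stddef.h>

/* The object layout the compiler assumes: the dispatch-vector
   pointer at offset 0, then the fields in declaration order. */
struct Foo {
    void **dv;     /* DispatchVectorPtr, offset 0 */
    int x;         /* offset sizeof(void *) */
    int y;         /* offset sizeof(void *) + sizeof(int) */
};

/* q = f.x lowers to: take the base pointer, add the field's fixed
   offset, dereference -- the three MOVs on the slide. */
int get_x(struct Foo *f) { return f->x; }
```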

42 Object Types - Inheritance class Foo { int x; int y; void rise() {…} void shine() {…} } class Bar extends Foo { int z; void twinkle() {…} } Runtime memory layout for an object of class Bar: DispatchVectorPtr, x, y, z — the inherited fields first, then Bar’s own field; the methods are rise, shine, twinkle.

43 Object Types - Polymorphism class Foo { … void rise() {…} void shine() {…} } class Bar extends Foo { … } class Main { void main() { Foo f = new Bar(); f.rise(); } } (Figure: runtime memory layout for an object of class Bar — DVPtr, x, y, z; f points to the Bar object, and a pointer to Foo inside Bar refers to the same layout prefix.)

44 Static & Dynamic Binding Which “rise” is main() using? Static binding: f is of type Foo and therefore it always refers to Foo’s rise. Dynamic binding: f points to a Bar object now, so it refers to Bar’s rise. class Foo { … void rise() {…} void shine() {…} } class Bar extends Foo { void rise() {…} } class Main { void main() { Foo f = new Bar(); f.rise(); } }

45 45 Typically, Dynamic Binding is used Finding the right method implementation at runtime according to object type Using the Dispatch Vector (a.k.a. Dispatch Table) class Foo { … void rise() {…} void shine() {…} } class Bar extends Foo{ void rise() {…} } class Main { void main() { Foo f = new Bar(); f.rise(); }

46 Dispatch Vectors in Depth The vector contains the addresses of methods, indexed by method-id number. A method signature has the same id number for all subclasses. class Foo { … void rise() {…} /* id 0 */ void shine() {…} /* id 1 */ } class Bar extends Foo { void rise() {…} /* id 0 */ } class Main { void main() { Foo f = new Bar(); f.rise(); } } (Figure: f points to a Bar object — DVPtr, x, y, z; Bar’s dispatch vector holds Bar’s rise at index 0 and Foo’s shine at index 1, so the call uses Bar’s dispatch table.)
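One common way a compiler lowers this scheme can be sketched in C with function pointers. The method bodies and return values below are made up for illustration; the point is the fixed per-signature index shared by Foo and Bar.

```c
/* Per-class dispatch vector indexed by method id
   (rise = 0, shine = 1), shared by all objects of the class. */
typedef struct Object Object;
typedef int (*Method)(Object *self);

struct Object {
    const Method *dv;             /* DispatchVectorPtr */
};

static int foo_rise(Object *self)  { (void)self; return 1; }
static int foo_shine(Object *self) { (void)self; return 2; }
static int bar_rise(Object *self)  { (void)self; return 3; }  /* override */

static const Method DV_Foo[] = { foo_rise, foo_shine };
static const Method DV_Bar[] = { bar_rise, foo_shine };  /* shine inherited */

/* f.rise() under dynamic binding: index the object's own table.
   The call site knows only the id (0), not the receiver's class. */
static int call_rise(Object *f) { return f->dv[0](f); }
```

The same `call_rise` code works for both classes — the generated call site never changes, only the table the object carries.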

47 Dispatch Vectors in Depth class Foo { … void rise() {…} /* id 0 */ void shine() {…} /* id 1 */ } class Bar extends Foo { void rise() {…} } class Main { void main() { Foo f = new Foo(); f.rise(); } } (Figure: f now points to a Foo object — DVPtr, x, y; the same call goes through Foo’s dispatch vector, whose index 0 holds Foo’s rise.)

48 Representing dispatch tables class A { void rise() {…} void shine() {…} static void foo() {…} } class B extends A { void rise() {…} void shine() {…} void twinkle() {…} }

# data section
.data
.align 4
_DV_A:
  .long _A_rise
  .long _A_shine
_DV_B:
  .long _B_rise
  .long _B_shine
  .long _B_twinkle

(Note that the static method foo does not appear in the dispatch table.)

49 Multiple Inheritance class C { field c1; field c2; void m1() {…} void m2() {…} } class D { field d1; void m3() {…} void m4() {…} } class E extends C,D { field e1; void m2() {…} void m4() {…} void m5() {…} }

supertyping:
convert_ptr_to_E_to_ptr_to_C(e) = e
convert_ptr_to_E_to_ptr_to_D(e) = e + sizeof (class C)
subtyping:
convert_ptr_to_C_to_ptr_to_E(e) = e
convert_ptr_to_D_to_ptr_to_E(e) = e - sizeof (class C)

(Figure: E-object layout — the C part (DVPtr, c1, c2), then the D part (DVPtr, d1), then e1. The first dispatch vector holds m1_C_C and m2_C_E; the second holds m3_D_D, m4_D_E, m5_E_E. A pointer to C inside E equals the E pointer; a pointer to D inside E is offset by sizeof(class C).)
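The ± sizeof(class C) adjustments can be sketched in C. The struct layout and helper names are assumptions for illustration; a real compiler would also set up both dispatch-vector pointers and the per-view method thunks.

```c
#include <stddef.h>

/* The E-object layout from the slide, flattened into one struct:
   the C part first, then the D part, then E's own field. */
struct C { void **dv_c; int c1, c2; };
struct D { void **dv_d; int d1; };
struct E { struct C c; struct D d; int e1; };

/* supertyping: viewing an E as a D means skipping past the C part,
   i.e. adding the C part's size to the pointer. Viewing an E as a C
   needs no adjustment, since the C part sits at offset 0. */
static struct D *e_to_d(struct E *e) { return &e->d; }

/* subtyping: converting a D-view pointer back to the whole E
   subtracts the same offset. */
static struct E *d_to_e(struct D *d) {
    return (struct E *)((char *)d - offsetof(struct E, d));
}
```

This is why a pointer comparison between a D-view and an E-view of the same object must adjust first — the two views are different addresses.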

50 Runtime checks Generate code for checking attempted illegal operations: – null pointer check, – array bounds check, – array allocation size check, – division by zero, – … If a check fails, jump to error handler code that prints a message and gracefully exits the program.

51 Null pointer check

# null pointer check
  cmp $0, %eax
  je labelNPE

labelNPE:
  push $strNPE   # error message
  call __println
  push $1        # error code
  call __exit

A single generated handler serves the entire program.

52 Array bounds check

# array bounds check
  mov -4(%eax), %ebx   # ebx = length
  mov $0, %ecx         # ecx = index
  cmp %ecx, %ebx
  jle labelABE         # ebx <= ecx ?
  cmp $0, %ecx
  jl labelABE          # ecx < 0 ?

labelABE:
  push $strABE   # error message
  call __println
  push $1        # error code
  call __exit

A single generated handler serves the entire program.

53 Array allocation size check

# array size check
  cmp $0, %eax    # eax == array size
  jle labelASE    # eax <= 0 ?

labelASE:
  push $strASE   # error message
  call __println
  push $1        # error code
  call __exit

A single generated handler serves the entire program.

