Intermediate Representation II Storage Allocation and Management EECS 483 – Lecture 18 University of Michigan Wednesday, November 8, 2006

Classes of Storage in the Processor
- Registers
  - Fast access, but only a few of them
  - Address space not visible to the programmer, so they do not support pointer access!
- Memory
  - Slow access, but large
  - Supports pointers
- The storage class for each variable is generally determined when mapping HIR to LIR

Storage Class Selection
- Standard (simple) approach
  - Globals/statics – memory
  - Locals
    - Composite types (structs, arrays, etc.) – memory
    - Scalars
      - Accessed via the '&' operator? – memory
      - Rest – virtual register; later we map virtual registers to real machine registers, so some local scalars may still be "spilled" to memory
  - (see the sketch below)
- All-memory approach
  - Put all variables into memory
  - Register allocation later relocates some memory variables to registers
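As a rough illustration of the standard selection rules, here is a hypothetical C function (the names are invented for this sketch) annotated with where each variable would typically be placed:

    int g;                          /* global -> static data area (memory)             */

    int f(int p)                    /* p: scalar parameter -> virtual register         */
    {
        int a;                      /* scalar, address never taken -> virtual register */
        int b = 0;                  /* scalar, but '&b' below forces it into memory    */
        int arr[4] = {1, 2, 3, 4};  /* composite (array) -> memory (stack)             */
        int *q = &b;                /* q itself is a scalar -> virtual register        */

        a = p + g + arr[0] + *q;
        return a;
    }

Under the all-memory approach, every one of these would start out in memory, and the register allocator would later promote a, p, and q into registers.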

Distinct Regions of Memory
- Code space – instructions to be executed
  - Best if read-only
- Static (or global) – variables that retain their value over the lifetime of the program
- Stack – variables that live only as long as the block within which they are defined (locals)
- Heap – variables that are created by calls to the system storage allocator (malloc, new)
- (a small example follows)
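To make the four regions concrete, here is a minimal C program (names invented for illustration) that touches each of them:

    #include <stdlib.h>

    int counter = 0;                 /* static (global) region                        */

    int next(void)                   /* the instructions of next() sit in code space  */
    {
        int local = counter++;       /* stack: lives only while next() is active      */
        return local;
    }

    int main(void)
    {
        int *p = malloc(sizeof *p);  /* heap: lives until explicitly freed            */
        if (p) {
            *p = next();
            free(p);
        }
        return 0;
    }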

Memory Organization
- Layout in the diagram, top to bottom: Code, Static Data, Stack, ..., Heap
- Code and static data sizes are determined by the compiler
- Stack and heap sizes vary at run time
- The stack grows downward (toward the heap); the heap grows upward (toward the stack)
- Some ABIs have the stack and heap switched

Class Problem
Specify whether each variable is stored in a register or in memory. If memory, which area of memory?

    int a;
    void foo(int b, double c)
    {
        int d;
        struct { int e; char f; } g;
        int h[10];
        char i = 5;
        float j;
    }

Variable Binding
- Definitions:
  - Environment – a function that maps a name to a storage location
  - State – a function that maps a storage location to a value
- When an environment associates a storage location S with a name N, we say N is bound to S
  - If N has a composite type, then N might be bound to a set of locations (usually contiguous, though not required)

Static Allocation
- Static storage has a fixed allocation that is unchanged during program execution
- Used for:
  - Global variables
  - Constants
  - All `static` variables in C (hence a global lifetime!)

    int count(int n)
    {
        static int sum = 0;
        sum += n;
        return sum;
    }

- sum has local visibility coupled with a global lifetime – a frequent source of bugs! (see the driver below)
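A tiny driver (a sketch) makes the global lifetime visible: sum keeps its value between calls, so the same argument can produce different results.

    #include <stdio.h>

    int count(int n) { static int sum = 0; sum += n; return sum; }

    int main(void)
    {
        printf("%d\n", count(5));   /* prints 5                                */
        printf("%d\n", count(5));   /* prints 10: sum survived the first call  */
        return 0;
    }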

Heap Allocation
- Parcels out pieces of contiguous storage
- Pieces may be deallocated in any order
  - Over time, the heap consists of alternating free and in-use sections of memory (see the example below)
- The heap is global
  - Items exist until explicitly freed
  - Or, in languages that support garbage collection, until nothing points to the piece of memory
  - Heap management policy (e.g., first-fit, best-fit, ...) is for the OS/allocator people to worry about!
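A minimal C sketch of heap allocation, showing that blocks can be freed in any order and leave holes behind:

    #include <stdlib.h>

    int main(void)
    {
        int *a = malloc(100 * sizeof *a);   /* heap block 1 */
        int *b = malloc(200 * sizeof *b);   /* heap block 2 */
        if (!a || !b) { free(a); free(b); return 1; }

        a[0] = 1;
        b[0] = 2;

        free(a);   /* freed before b: the heap now has a free hole followed by an   */
                   /* in-use block, exactly the alternating pattern described above */
        free(b);
        return 0;
    }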

Accessing Static/Heap Variables
- Static
  - Addresses are fixed before the program runs (assigned by the linker)
  - The compiler back end refers to them with symbolic names (labels), just as it does for branch addresses
- Heap
  - Heap objects are unnamed locations
  - They can be accessed only by dereferencing variables that hold their addresses
- (example below)
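An illustrative fragment (a sketch): the global is accessed through a link-time label, while the heap object can be reached only through the pointer that holds its address.

    #include <stdlib.h>

    int total;                        /* static: the compiler emits the label "total";   */
                                      /* the linker fixes its final address              */

    void touch(void)
    {
        int *p = malloc(sizeof *p);   /* heap: the new object has no name of its own     */
        if (!p)
            return;
        *p = 42;                      /* reachable only by dereferencing p               */
        total = *p;                   /* static access compiles to a load/store at the   */
                                      /* label's address                                 */
        free(p);
    }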

Run-Time Stack
- A frame (or activation record) for each function execution
  - Represents the execution environment of the function
  - Per invocation! In a recursive function, each dynamic call has its own frame (see the sketch below)
  - Includes: local variables, parameters, return value, temporary storage (register spills)
- Run-time stack of frames
  - Push the frame of f onto the stack when the program calls f
  - Pop the stack frame when f returns
  - Top frame = frame of the currently executing function
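As a small illustration, each live invocation of a recursive function gets its own frame, so the same local name refers to a different stack location in every call (the printed addresses vary by platform; this is only a sketch):

    #include <stdio.h>

    void countdown(int n)
    {
        int local = n;                      /* one copy of 'local' per frame      */
        printf("n=%d  &local=%p\n", n, (void *)&local);
        if (n > 0)
            countdown(n - 1);               /* the call pushes another frame      */
    }                                       /* frame popped when the call returns */

    int main(void)
    {
        countdown(3);
        return 0;
    }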

Stack Pointers
- Assume the stack grows "downward" in our memory picture, so the address of the top of the stack increases as the stack grows
- Values in the current frame are accessed using 2 pointers
  - Stack pointer (SP): points to the top of the frame
  - Frame pointer (FP): points to the base of the frame
  - Variable access: use an offset from FP (or SP)
- (Diagram: previous frame and top frame, bracketed by FP and SP, with the stack growing past SP)

Why 2 Stack Pointers?
- For fun? – not quite!
- To keep offsets small
  - Instruction encoding limits the size of the literals that can be encoded
- The real reason
  - The stack frame size is not always known at compile time
  - Example: alloca (dynamic allocation on the stack) – see the sketch below
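A minimal sketch of the alloca case: the frame grows by an amount unknown until run time, so SP moves, but locals and parameters can still be addressed at fixed offsets from FP. (alloca is a common but non-standard extension; it is declared in <alloca.h> on many Unix systems.)

    #include <alloca.h>

    int sum_first_n(int n)
    {
        int *buf = alloca(n * sizeof *buf);  /* grows the current frame at run time */
        int i, s = 0;
        for (i = 0; i < n; i++)
            buf[i] = i;
        for (i = 0; i < n; i++)
            s += buf[i];
        return s;                            /* the whole frame, alloca'd space     */
    }                                        /* included, is popped on return       */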

Anatomy of a Stack Frame
From the bottom of the frame toward the top of the stack:

    Param 1 ... Param n        incoming parameters   (caller's responsibility,
    Return address                                    part of the previous frame)
    Previous FP                <- FP
    Local 1 ... Local n
    Temp 1 ... Temp n          (current frame: callee's responsibility)
    Param 1 ... Param n        outgoing parameters for the next call
    Return address             <- SP marks the frame top

Stack Frame Construction Example

    int f(int a)  { int b, c; }
    void g(int a) { int b, c; ... b = f(a+c); ... }
    main()        { int a, b; ... g(a+b); ... }

Stack contents while f executes:

    main's frame:  a, b (locals), a+b (outgoing parameter for g)
    g's frame:     return address to main, FP for main, b, c (locals), a+c (outgoing parameter for f)
    f's frame:     return address to g, FP for g, b, c (locals)

Note: I have left out the temp part of the stack frames.

Class Problem
For the following program:

    int foo(int a)
    {
        int x;
        if (a <= 1)
            return 1;
        x = foo(a-1) + foo(a-2);
        return x;
    }

    main()
    {
        int y, z = 10;
        y = foo(z);
    }

1. Show the first 3 stack frames created when this program is executed (starting with main).
2. What is the maximum number of frames the stack grows to during the execution of this program?

Static Links
- Problem for languages with nested functions (Pascal, ML): how do we access local variables in other frames?
- Need a static link: a pointer to the frame of the lexically enclosing function
  - Not an issue in C, since C has only two levels of binding (global and function-local)
- Previous FP = dynamic link, i.e., a pointer to the previous frame in the current execution
- (Diagram: the usual frame, with a static-link slot stored alongside the saved previous FP)

Saving Registers
- Problem: execution of the invoked function may overwrite useful values in registers
- Generated code must:
  - Save registers when a function is invoked
  - Restore registers when the function returns
- Possibilities
  - Callee saves/restores registers
  - Caller saves/restores registers
  - Split the job, i.e., both do part of it

Calling Sequences
- How do we generate the code that builds the frames?
- Generate code that pushes values onto the stack:
  - Before call instructions (caller's responsibility)
  - At function entry (callee's responsibility)
- Generate code that pops values off the stack:
  - After call instructions (caller's responsibility)
  - At return instructions (callee's responsibility)

Push Values on Stack
- Code before the call instruction (caller)
  - Push each actual parameter
  - Push caller-saved registers
  - Push the static link (if necessary)
  - Push the return address (current PC) and jump to the callee's code
- Prologue = code at function entry (callee)
  - Push the dynamic link (i.e., the old FP)
  - The old stack pointer becomes the new frame pointer
  - Push callee-saved registers
  - Allocate space for (push) local variables

Pop Values from Stack
- Epilogue = code at the return instruction (callee)
  - Pop (restore) callee-saved registers
  - Store the return value in the appropriate place
  - Restore the old stack pointer (pop the callee's frame)
  - Pop the old frame pointer
  - Pop the return address and jump to that address
- Code after the call (caller)
  - Pop (restore) caller-saved registers
  - Use the return value

Example Call
- Consider the call foo(3,5); assume the machine has 2 registers, r1 and r2, that are both callee-save
- Code before the call instruction
  - push arg1: [sp] = 3
  - push arg2: [sp+4] = 5
  - make room for the return address and the 2 args: sp = sp+12
  - call foo (the call stores the return address in the slot reserved for it)
- Prologue
  - push the old frame pointer: [sp] = fp
  - compute the new fp: fp = sp
  - push r1, r2: [sp+4] = r1, [sp+8] = r2
  - create the frame with 3 local (int) variables: sp = sp+24

Example Call, continued
- Epilogue
  - pop r1, r2: r1 = [sp-20], r2 = [sp-16]
  - restore the old fp: fp = [sp-24]
  - pop the frame: sp = sp-24
  - pop the return address and execute the return: rts
- Code after the call
  - use the return value
  - pop the args (and the return-address slot): sp = sp-12

Accessing Stack Variables
- To access stack variables, use offsets from FP
- Example (FP points at the saved previous FP slot):
  - [fp-8]  = param n
  - [fp-24] = param 1
  - [fp+4]  = local 1
- (Diagram: the standard frame layout, annotated with FP-24, FP-8, and FP+4)

Class Problem

    int foo(int a, int b)
    {
        int x, y, z;
        ...
        z = foo(x, y);
        ...
        return z;
    }

Assume you are mapping this function onto a processor that has 4 general registers, r1-r4. r1 and r2 are caller-save; r3 and r4 are callee-save. Show how the stack frame is constructed for this recursive function. You should assume that registers r1-r4 contain useful values. What if they do not?

Data Layout
- Naive layout strategies are generally employed
  - Place the data in the order the programmer declared it!
- 2 issues: size and alignment
- Size – how many bytes is the data item?
  - Base types have some fixed size, e.g., char, int, float, double
  - Composite types (structs, unions, arrays)
    - Overall size is the sum of the components (not quite – alignment padding can be added, as the next slides show)
    - Calculate an offset for each field

Memory Alignment
- Cannot arbitrarily pack variables into memory – we need to worry about alignment
- Golden rule – the address of a variable is aligned based on the size of the variable
  - char is byte aligned (any address is fine)
  - short is halfword aligned (the LSB of the address must be 0)
  - int is word aligned (the 2 LSBs of the address must be 0)
  - This rule is for C/C++; other languages may have slightly different rules
- (a small sketch that checks these alignments follows)
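A small C11 sketch that prints the size and required alignment of a few base types using _Alignof (via <stdalign.h>); the exact values are implementation-defined, but on a typical 32-bit target they follow the golden rule above:

    #include <stdio.h>
    #include <stdalign.h>

    int main(void)
    {
        printf("char:   size %zu  alignment %zu\n", sizeof(char),   alignof(char));
        printf("short:  size %zu  alignment %zu\n", sizeof(short),  alignof(short));
        printf("int:    size %zu  alignment %zu\n", sizeof(int),    alignof(int));
        printf("double: size %zu  alignment %zu\n", sizeof(double), alignof(double));
        return 0;
    }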

Structure Alignment (for C)
- Each field is laid out in the order it is declared, using the golden rule for alignment
- Identify the largest field
  - The starting address of the overall struct is aligned based on the largest field
  - The size of the overall struct is a multiple of the size of the largest field
  - The reason is so that an array of structs keeps every element properly aligned

Structure Example

    struct {
        char  w;
        int   x[3];
        char  y;
        short z;
    };

- The largest field is int (4 bytes), so the struct size is a multiple of 4 and the struct must start at a word-aligned address
- char w: 1 byte, can start anywhere
- int x[3]: 12 bytes, but must start at a word-aligned address, so 3 empty (padding) bytes between w and x
- char y: 1 byte, can start anywhere
- short z: 2 bytes, but must start at a halfword-aligned address, so 1 empty (padding) byte between y and z
- Total size = 20 bytes! (checked in the sketch below)
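The layout above can be checked with offsetof from <stddef.h>; on a typical machine with 4-byte int and 2-byte short the offsets come out 0, 4, 16, 18 and the total size is 20 (this is a sketch, and the exact padding is implementation-defined):

    #include <stdio.h>
    #include <stddef.h>

    struct s {
        char  w;      /* offset 0                          */
        int   x[3];   /* offset 4  (3 padding bytes first) */
        char  y;      /* offset 16                         */
        short z;      /* offset 18 (1 padding byte first)  */
    };

    int main(void)
    {
        printf("w=%zu x=%zu y=%zu z=%zu size=%zu\n",
               offsetof(struct s, w), offsetof(struct s, x),
               offsetof(struct s, y), offsetof(struct s, z),
               sizeof(struct s));
        return 0;
    }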

Class Problem
How many bytes of memory does the following sequence of C declarations require (assume int = 4 bytes)?

    short a[100];
    char b;
    int c;
    double d;
    short e;
    struct {
        char f;
        int g[1];
        char h[2];
    } i;