
1 Intermediate Representations

2 Front end - produces an intermediate representation (IR)
Middle end - transforms the IR into an equivalent IR that runs more efficiently
Back end - transforms the IR into native code
The IR encodes the compiler's knowledge of the program
The middle end usually consists of several passes
[Diagram: Source Code → Front End → IR → Middle End → IR → Back End → Target Code]

3 Intermediate Representations
Decisions in IR design affect the speed and efficiency of the compiler
Some important IR properties:
    Ease of generation
    Ease of manipulation
    Procedure size
    Freedom of expression
    Level of abstraction
The importance of different properties varies between compilers
    Selecting an appropriate IR for a compiler is critical

4 Types of Intermediate Representations
Three major categories:
Structural
    Graphically oriented
    Heavily used in source-to-source translators
    Tend to be large
    Examples: trees, DAGs
Linear
    Pseudo-code for an abstract machine
    Level of abstraction varies
    Simple, compact data structures
    Easier to rearrange
    Examples: three-address code, stack machine code
Hybrid
    Combination of graphs and linear code
    Example: control-flow graph

5 Level of Abstraction
The level of detail exposed in an IR influences the profitability and feasibility of different optimizations.
Two different representations of an array reference A[i,j]:

High-level AST:
    subscript(A, i, j)
Good for memory disambiguation

Low-level linear code:
    loadI 1      => r1
    sub   rj, r1 => r2
    loadI 10     => r3
    mult  r2, r3 => r4
    sub   ri, r1 => r5
    add   r4, r5 => r6
    loadI @A     => r7
    add   r7, r6 => r8
    load  r8     => rAij
Good for address calculation
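
To make the low-level form concrete, here is a C sketch of the arithmetic that the ILOC sequence performs. It assumes 1-based subscripts, a dimension of 10, and unit-sized elements, matching the constants above; the function name is invented for the example.

    #include <stdio.h>

    /* Mirrors the ILOC sequence for A[i,j] shown above. */
    int *address_of_Aij(int *A_base, int i, int j) {
        int r2 = j - 1;         /* sub   rj, r1 => r2 */
        int r4 = r2 * 10;       /* mult  r2, r3 => r4 */
        int r5 = i - 1;         /* sub   ri, r1 => r5 */
        int r6 = r4 + r5;       /* add   r4, r5 => r6 */
        return A_base + r6;     /* add   r7, r6 => r8 */
    }

    int main(void) {
        int A[100] = {0};
        *address_of_Aij(A, 2, 3) = 42;
        printf("%d\n", *address_of_Aij(A, 2, 3));   /* prints 42 */
        return 0;
    }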

6 Level of Abstraction
Structural IRs are usually considered high-level
Linear IRs are usually considered low-level
Not necessarily true:
    A low-level AST can expose the full address computation (the load of @A and the subscript arithmetic as explicit tree nodes)
    A high-level linear instruction can hide it: loadArray A, i, j
[Diagram: a low-level AST spelling out the address arithmetic vs. the high-level linear code "loadArray A,i,j"]

7 Abstract Syntax Tree
An abstract syntax tree (AST) is the procedure's parse tree with the nodes for most non-terminals removed
Example: x - 2 * y
[Tree: "-" at the root, with left child x and right child "*", whose children are 2 and y]
Can use a linearized form of the tree
    Easier to manipulate than pointers
    x 2 y * -  (postfix form)
    - x * 2 y  (prefix form)
S-expressions are (essentially) ASTs
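
As an illustration (not from the slides), a minimal C sketch of an AST node and of building the tree for x - 2 * y; all type and helper names are invented.

    #include <stdlib.h>

    typedef enum { N_SUB, N_MUL, N_NAME, N_NUM } NodeKind;

    typedef struct Ast {
        NodeKind kind;
        const char *id;                /* for N_NAME leaves */
        int value;                     /* for N_NUM leaves  */
        struct Ast *left, *right;
    } Ast;

    static Ast *node(NodeKind k, Ast *l, Ast *r) {
        Ast *n = calloc(1, sizeof(Ast));
        n->kind = k; n->left = l; n->right = r;
        return n;
    }
    static Ast *name(const char *s) { Ast *n = node(N_NAME, 0, 0); n->id = s; return n; }
    static Ast *num(int v)          { Ast *n = node(N_NUM,  0, 0); n->value = v; return n; }

    int main(void) {
        /* x - 2 * y: "-" at the root, "*" as its right child */
        Ast *t = node(N_SUB, name("x"), node(N_MUL, num(2), name("y")));
        return t->kind == N_SUB ? 0 : 1;
    }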

8 Directed Acyclic Graph
A directed acyclic graph (DAG) is an AST with a unique node for each value
    Makes sharing explicit
    Encodes redundancy
Example:
    z ← x - 2 * y
    w ← x / 2
[Diagram: z names a "-" node and w a "/" node; both expressions share the single nodes for x and 2]
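
Extending the AST sketch above, a hedged C illustration of how a DAG makes sharing explicit: the two statements reuse the same nodes for x and 2 instead of duplicating them. Type and helper names are invented.

    #include <stdlib.h>

    typedef enum { D_SUB, D_DIV, D_MUL, D_NAME, D_NUM } DagKind;

    typedef struct Dag {
        DagKind kind;
        const char *id;
        int value;
        struct Dag *left, *right;
    } Dag;

    static Dag *dag(DagKind k, Dag *l, Dag *r) {
        Dag *n = calloc(1, sizeof(Dag));
        n->kind = k; n->left = l; n->right = r;
        return n;
    }

    int main(void) {
        Dag *x   = dag(D_NAME, 0, 0); x->id = "x";
        Dag *two = dag(D_NUM,  0, 0); two->value = 2;
        Dag *y   = dag(D_NAME, 0, 0); y->id = "y";

        Dag *z = dag(D_SUB, x, dag(D_MUL, two, y));  /* z <- x - 2 * y */
        Dag *w = dag(D_DIV, x, two);                 /* w <- x / 2     */

        /* sharing is explicit: both expressions point at the same x */
        return (z->left == w->left) ? 0 : 1;
    }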

9 Stack Machine Code
Originally used for stack-based computers; now used by the Java Virtual Machine
Example: x - 2 * y becomes
    push x
    push 2
    push y
    multiply
    subtract
Advantages:
    Compact form
    Introduced names are implicit, not explicit (implicit names take up no space, where explicit ones do!)
    Simple to generate and execute code
Useful where code is transmitted over slow communication links (the net)
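
A minimal sketch of a stack-machine evaluator for that sequence; the opcode set and encoding are invented for the example.

    #include <stdio.h>

    typedef enum { PUSH, MUL, SUB } Op;
    typedef struct { Op op; int arg; } Insn;

    int eval(const Insn *code, int n) {
        int stack[64], sp = 0;
        for (int i = 0; i < n; i++) {
            switch (code[i].op) {
            case PUSH: stack[sp++] = code[i].arg; break;
            case MUL:  sp--; stack[sp-1] = stack[sp-1] * stack[sp]; break;
            case SUB:  sp--; stack[sp-1] = stack[sp-1] - stack[sp]; break;
            }
        }
        return stack[0];
    }

    int main(void) {
        int x = 7, y = 3;
        Insn code[] = { {PUSH,x}, {PUSH,2}, {PUSH,y}, {MUL,0}, {SUB,0} };
        printf("%d\n", eval(code, 5));   /* prints 1  (7 - 2*3) */
        return 0;
    }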

10 Three Address Code
Several different representations of three-address code
In general, three-address code has statements of the form:
    x ← y op z
with one operator (op) and, at most, three names (x, y, & z)
Example: z ← x - 2 * y becomes
    t ← 2 * y
    z ← x - t
Advantages:
    Resembles many machines
    Introduces a new set of names
    Compact form

11 Three Address Code: Quadruples
Naive representation of three-address code
    Table of k * 4 small integers
    Simple record structure
    Easy to reorder
    Explicit names
Quadruples:
    load   1  y
    loadI  2  2
    mult   3  2  1
    load   4  x
    sub    5  4  3
Equivalent RISC assembly code:
    load  r1, y
    loadI r2, 2
    mult  r3, r2, r1
    load  r4, x
    sub   r5, r4, r3
The original FORTRAN compiler used "quads"
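
A hedged C sketch of the quadruple table for z ← x - 2 * y; the opcode set and field layout are invented, and character codes stand in for symbol-table references.

    #include <stdio.h>

    typedef enum { Q_LOAD, Q_LOADI, Q_MULT, Q_SUB } QOp;

    typedef struct {
        QOp op;
        int result;         /* virtual register number     */
        int arg1, arg2;     /* registers or literal values */
    } Quad;

    int main(void) {
        Quad table[] = {
            { Q_LOAD,  1, 'y', 0 },    /* r1 <- load y  */
            { Q_LOADI, 2,  2,  0 },    /* r2 <- 2       */
            { Q_MULT,  3,  2,  1 },    /* r3 <- r2 * r1 */
            { Q_LOAD,  4, 'x', 0 },    /* r4 <- load x  */
            { Q_SUB,   5,  4,  3 },    /* r5 <- r4 - r3 */
        };
        /* explicit result names make quads easy to reorder */
        printf("%zu quads\n", sizeof table / sizeof table[0]);
        return 0;
    }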

12 Three Address Code: Triples
    Index used as implicit name
    25% less space consumed than quads
    Much harder to reorder
Example:
    (1) load   y
    (2) loadI  2
    (3) mult   (1) (2)
    (4) load   x
    (5) sub    (4) (3)
Implicit names take no space!

13 Three Address Code: Indirect Triples
    List the first triple in each statement
    Implicit name space
    Uses more space than triples, but easier to reorder
Example (the statement list holds (100), with (105) starting the next statement):
    (100) load   y
    (101) loadI  2
    (102) mult   (100) (101)
    (103) load   x
    (104) sub    (103) (102)
The major tradeoff between quads and triples is compactness versus ease of manipulation
    In the past, compile-time space was critical
    Today, speed may be more important

14 Static Single Assignment Form
The main idea: each name is defined exactly once
    Introduce φ-functions to make it work
Strengths of SSA-form:
    Sharper analysis
    φ-functions give hints about placement
Original:
    x ← …
    y ← …
    while (x < k)
        x ← x + 1
        y ← y + x
SSA-form:
    x0 ← …
    y0 ← …
    if (x0 ≥ k) goto next
    loop: x1 ← φ(x0, x2)
          y1 ← φ(y0, y2)
          x2 ← x1 + 1
          y2 ← y1 + x2
          if (x2 < k) goto loop
    next: …

15 Two Address Code
Allows statements of the form x ← x op y
Has one operator (op) and, at most, two names (x and y)
Example: z ← x - 2 * y becomes
    t1 ← 2
    t2 ← load y
    t2 ← t2 * t1
    z  ← load x
    z  ← z - t2
Can be very compact
Problems:
    Machines no longer rely on destructive operations
    Difficult name space
        Destructive operations make reuse hard
Good model for machines with destructive ops (PDP-11)

16 Control-flow Graph
Models the transfer of control in the procedure
    Nodes in the graph are basic blocks: maximal-length sequences of straight-line code
        Blocks can be represented with quads or any other linear representation
    Edges in the graph represent control flow
Example:
    if (x = y) then
        a ← 2
        b ← 5
    else
        a ← 3
        b ← 4
    c ← a * b
[Diagram: the test block branches to the two assignment blocks, which both flow into the block computing c]
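
A sketch in C of a CFG node for that example; struct and field names are invented, and the straight-line code is held as strings for illustration.

    #include <stdio.h>

    typedef struct Block {
        const char   *label;
        const char  **code;            /* quads or any linear IR */
        int           ncode;
        struct Block *succ[2];         /* control-flow edges     */
        int           nsucc;
    } Block;

    int main(void) {
        const char *t[] = { "a <- 2", "b <- 5" };
        const char *e[] = { "a <- 3", "b <- 4" };
        const char *j[] = { "c <- a * b" };
        Block join = { "join", j, 1, {0},          0 };
        Block thn  = { "then", t, 2, {&join},      1 };
        Block els  = { "else", e, 2, {&join},      1 };
        Block test = { "test", 0, 0, {&thn, &els}, 2 };  /* if (x = y) */
        printf("%s has %d successors\n", test.label, test.nsucc);
        return 0;
    }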

17 Using Multiple Representations
Repeatedly lower the level of the intermediate representation
    Each intermediate representation is suited towards certain optimizations
Example: the Open64 compiler
    WHIRL intermediate format
    Consists of 5 different IRs that are progressively more detailed
[Diagram: Source Code → Front End → IR 1 → Middle End → IR 2 → Middle End → IR 3 → Back End → Target Code]

18 Memory Models
Two major models
Register-to-register model
    Keep all values that can legally be stored in a register in registers
    Ignore machine limitations on the number of registers
    Compiler back end must insert loads and stores
Memory-to-memory model
    Keep all values in memory
    Only promote values to registers directly before they are used
    Compiler back end can remove loads and stores
Compilers for RISC machines usually use register-to-register
    Reflects the programming model
    Easier to determine when registers are used

19 The Rest of the Story…
Representing the code is only part of an IR
There are other necessary components:
    Symbol table (already discussed)
    Constant table
        Representation, type
        Storage class, offset
    Storage map
        Overall storage layout
        Overlap information
        Virtual register assignments

20 The Procedure as a Control Abstraction
Procedures have well-defined control flow
The Algol-60 procedure call:
    Invoked at a call site, with some set of actual parameters
    Control returns to the call site, immediately after invocation

21 The Procedure as a Control Abstraction
Procedures have well-defined control flow
The Algol-60 procedure call:
    Invoked at a call site, with some set of actual parameters
    Control returns to the call site, immediately after invocation

    int p(a,b,c)
      int a, b, c;
    {
      int d;
      d = q(c,b);
      ...
    }

    …
    s = p(10,t,u);
    …

22 The Procedure as a Control Abstraction
Procedures have well-defined control flow
The Algol-60 procedure call:
    Invoked at a call site, with some set of actual parameters
    Control returns to the call site, immediately after invocation

    int p(a,b,c)
      int a, b, c;
    {
      int d;
      d = q(c,b);
      ...
    }

    int q(x,y)
      int x, y;
    {
      return x + y;
    }

    …
    s = p(10,t,u);
    …

25 The Procedure as a Control Abstraction
Procedures have well-defined control flow
The Algol-60 procedure call:
    Invoked at a call site, with some set of actual parameters
    Control returns to the call site, immediately after invocation
Most languages allow recursion
[Running example: p, q, and the call site s = p(10,t,u), as in slide 22]

26 The Procedure as a Control Abstraction
Implementing procedures with this behavior:
    Requires code to save and restore a "return address"
    Must map actual parameters to formal parameters (c → x, b → y)
    Must create storage for local variables (and, maybe, parameters)
        p needs space for d (and, maybe, for a, b, & c)
        Where does this space go in recursive invocations?
The compiler emits code that causes all this to happen at run time
[Running example: p, q, and the call site s = p(10,t,u), as in slide 22]

27 The Procedure as a Control Abstraction
Implementing procedures with this behavior:
    Must preserve p's state while q executes
        Recursion causes the real problem here
    Strategy: create a unique location for each procedure activation
        Can use a "stack" of memory blocks to hold local storage and return addresses
The compiler emits code that causes all this to happen at run time
[Running example: p, q, and the call site s = p(10,t,u), as in slide 22]

28 The Procedure as a Name Space
Each procedure creates its own name space
    Any name (almost) can be declared locally
    Local names obscure identical non-local names
    Local names cannot be seen outside the procedure
        Nested procedures are "inside" by definition
We call this set of rules & conventions "lexical scoping"
Examples:
    C has global, static, local, and block scopes (Fortran-like)
        Blocks can be nested, procedures cannot
    Scheme has global, procedure-wide, and nested scopes (let)
        Procedure scope (typically) contains formal parameters

29 The Procedure as a Name Space
Why introduce lexical scoping?
    Provides a compile-time mechanism for binding "free" variables
    Simplifies rules for naming & resolves conflicts
How can the compiler keep track of all those names?
The problem:
    At point p, which declaration of x is current?
    At run time, where is x found?
    As the parser goes in & out of scopes, how does it delete x?
The answer: lexically scoped symbol tables (see § 5.7.3)

30 Do People Use This Stuff?
C macro from the MSCP compiler:

    #define fix_inequality(oper, new_opcode)         \
      if (value0 < value1)                           \
      {                                              \
        Unsigned_Int temp = value0;                  \
        value0 = value1;                             \
        value1 = temp;                               \
        opcode_name = new_opcode;                    \
        temp = oper->arguments[0];                   \
        oper->arguments[0] = oper->arguments[1];     \
        oper->arguments[1] = temp;                   \
        oper->opcode = new_opcode;                   \
      }

Declares a new name (temp, in a block scope)

31 Do People Use This Stuff?
C code from the MSCP implementation:

    static Void phi_node_printer(Block *block)
    {
      Phi_Node *phi_node;
      Block_ForAllPhiNodes(phi_node, block)
      {
        if (phi_node->old_name < register_count)
        {
          Unsigned_Int i;
          fprintf(stderr, "Phi node for r%d: [", phi_node->old_name);
          for (i = 0; i < block->pred_count; i++)
            fprintf(stderr, " r%d", phi_node->parms[i]);
          fprintf(stderr, " ] => r%d\n", phi_node->new_name);
        }
        else
        {
          Unsigned_Int2 *arg_ptr;
          fprintf(stderr, "Phi node for %s: [",
                  Expr_Get_String(Tag_Unmap(phi_node->old_name)));
          Phi_Node_ForAllParms(arg_ptr, phi_node)
            fprintf(stderr, " %d", *arg_ptr);
          fprintf(stderr, " ] => %d\n", phi_node->new_name);
        }
      }
    }

More local declarations!

32 Lexically-scoped Symbol Tables
The problem:
    The compiler needs a distinct record for each declaration
    Nested lexical scopes admit duplicate declarations
The interface:
    insert(name, level) - creates a record for name at level
    lookup(name, level) - returns a pointer or index
    delete(level) - removes all names declared at level
Many implementation schemes have been proposed (see § B.4)
    We'll stay at the conceptual level
    Hash table implementation is tricky, detailed, & fun
Symbol tables are compile-time structures the compiler uses to resolve references to names. We'll see the corresponding run-time structures that are used to establish addressability later. (§ 5.7 in EaC)

33 Example

    procedure p {
      int a, b, c
      procedure q {
        int v, b, x, w
        procedure r {
          int x, y, z
          ….
        }
        procedure s {
          int x, a, v
          …
        }
        … r … s
      }
      … q …
    }

Equivalent block structure:

    B0: { int a, b, c
      B1: { int v, b, x, w
        B2: { int x, y, z
          ….
        }
        B3: { int x, a, v
          …
        }
        …
      }
      …
    }

34 Lexically-scoped Symbol Tables
High-level idea:
    Create a new table for each scope
    Chain them together for lookup
"Sheaf of tables" implementation:
    insert() may need to create a table; it always inserts at the current level
    lookup() walks the chain of tables & returns the first occurrence of the name
    delete() throws away the table for a level, if it is the top table in the chain
If the compiler must preserve the table (for, say, the debugger), this idea is actually practical. Individual tables can be hash tables.
[Diagram: tables for r (x, y, z), q (v, b, x, w), and p (a, b, c), chained together]
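
A minimal C sketch of the sheaf-of-tables idea, with linked association lists standing in for the per-scope hash tables; all names are invented.

    #include <stdlib.h>
    #include <string.h>

    typedef struct Entry {               /* one declaration record    */
        const char *name;
        struct Entry *next;
    } Entry;

    typedef struct Scope {               /* one table in the sheaf    */
        int level;
        Entry *entries;
        struct Scope *enclosing;         /* chain to lexical ancestor */
    } Scope;

    Scope *push_scope(Scope *top, int level) {
        Scope *s = calloc(1, sizeof(Scope));
        s->level = level; s->enclosing = top;
        return s;
    }

    void insert(Scope *top, const char *name) {    /* current level only */
        Entry *e = calloc(1, sizeof(Entry));
        e->name = name; e->next = top->entries;
        top->entries = e;
    }

    Entry *lookup(Scope *top, const char *name) {  /* walk the chain */
        for (Scope *s = top; s; s = s->enclosing)
            for (Entry *e = s->entries; e; e = e->next)
                if (strcmp(e->name, name) == 0)
                    return e;                      /* innermost wins */
        return NULL;
    }

    Scope *pop_scope(Scope *top) {                 /* delete(level)  */
        Scope *enclosing = top->enclosing;
        free(top);    /* entries could be freed too; omitted */
        return enclosing;
    }

    int main(void) {
        Scope *s = push_scope(NULL, 0);    /* p: a, b, c     */
        insert(s, "a"); insert(s, "b"); insert(s, "c");
        s = push_scope(s, 1);              /* q: v, b, x, w  */
        insert(s, "b");                    /* obscures p's b */
        return (lookup(s, "b") && lookup(s, "a")) ? 0 : 1;
    }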

35 Implementing Lexically Scoped Symbol Tables
Stack organization: all entries live in one stack (here p's a, b, c at level 0, q's v, b, x, w at level 1, and r's x, y, z at level 2), with nextFree marking the top
Implementation:
    insert() creates a new level pointer if needed and inserts at nextFree
    lookup() searches linearly from nextFree - 1 toward the start of the stack
    delete() sets nextFree to the start location of the deleted level
Advantage: uses much less space
Disadvantage: lookups can be expensive

36 Implementing Lexically Scoped Symbol Tables
Threaded stack organization: a hash function h(x) indexes a table of list heads; entries for the same name are threaded together, most recent first
Implementation:
    insert() puts the new entry at the head of the list for its name
    lookup() goes directly to the location
    delete() processes each element in the level being deleted, removing it from the head of its list
Advantage: lookup is fast
Disadvantage: delete takes time proportional to the number of declared variables in the level

37 The Procedure as an External Interface
The OS needs a way to start the program's execution
The programmer needs a way to indicate where it begins
    The "main" procedure in most languages
When a user invokes "grep" at a command line:
    The OS finds the executable
    The OS creates a process and arranges for it to run "grep"
    "grep" is code from the compiler, linked with the run-time system
    It starts the run-time environment & calls "main"
    After main, it shuts down the run-time environment & returns
When "grep" needs system services:
    It makes a system call, such as fopen()
(UNIX/Linux-specific discussion)

38 Where Do All These Variables Go?
Automatic & local
    Keep them in the procedure activation record or in a register
    Automatic ⇒ lifetime matches the procedure's lifetime
Static
    Procedure scope ⇒ storage area affixed with the procedure name (&_p.x)
    File scope ⇒ storage area affixed with the file name
    Lifetime is the entire execution
Global
    One or more named global data areas
    One per variable, or per file, or per program, …
    Lifetime is the entire execution

39 Placing Run-time Data Structures
Classic organization: a single logical address space, with code at address 0, then static & global data, then the heap growing toward the stack at the high end
    Code, static, & global data have known size
        Use symbolic labels in the code
    Heap & stack both grow & shrink over time
    This is a virtual address space
Better utilization if stack & heap grow toward each other
    Very old result (Knuth)
    Code & data separate or interleaved
    Uses address space, not allocated memory
[Diagram: 0 | Code | Static & Global | Heap → … ← Stack | high]

40 How Does This Really Work?
The big picture:
    Compiler's view: a single virtual address space (code, static & global, heap, stack)
    OS's view: many such virtual address spaces, one per process
    Hardware's view: a single physical address space onto which the virtual spaces are mapped
[Diagram: several per-process virtual address spaces mapped onto the physical address space]

41 Where Do Local Variables Live?
A simplistic model:
    Allocate a data area for each distinct scope
    One data area per "sheaf" in the scoped table
What about recursion?
    Need a data area per invocation (or activation) of a scope
    We call this the scope's activation record
    The compiler can also store control information there!
A more complex scheme:
    One activation record (AR) per procedure instance
    All the procedure's scopes share a single AR (and may share space)
    Static relationship between scopes in a single procedure
Used this way, "static" means knowable at compile time (and, therefore, fixed).

42 Translating Local Names
How does the compiler represent a specific instance of x?
The name is translated into a static coordinate: a <level, offset> pair
    "level" is the lexical nesting level of the procedure
    "offset" is unique within that scope
Subsequent code will use the static coordinate to generate addresses and references
"level" is a function of the table in which x is found
    Stored in the entry for each x
"offset" must be assigned and stored in the symbol table
    Assigned at compile time
    Known at compile time
    Used to generate code that executes at run time

43 Storage for Blocks within a Single Procedure
Fixed-length data can always be at a constant offset from the beginning of a procedure
    In our example, the a declared at level 0 will always be the first data element, stored at byte 0 in the fixed-length data area
    The x declared at level 1 will always be the sixth data item, stored at byte 20 in the fixed-length data area
    The x declared at level 2 will always be the eighth data item, stored at byte 28 in the fixed-length data area
    But what about the a declared in the second block at level 2 (B3)?
[Running example: B0 { int a, b, c }, B1 { int v, b, x, w }, B2 { int x, y, z }, B3 { int x, a, v }, as in slide 33]
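
Working the layout out with 4-byte ints makes the byte offsets concrete; this is a sketch, with offsets assigned in declaration order along the B0-B1-B2 nesting path.

    /* Fixed-length data area for the example, assuming 4-byte ints. */
    enum {
        OFF_B0_a = 0,  OFF_B0_b = 4,  OFF_B0_c = 8,   /* B0: a, b, c    */
        OFF_B1_v = 12, OFF_B1_b = 16, OFF_B1_x = 20,  /* B1: v, b, x, w */
        OFF_B1_w = 24,
        OFF_B2_x = 28, OFF_B2_y = 32, OFF_B2_z = 36   /* B2: x, y, z    */
        /* B3's x, a, v are never live at the same time as B2's names,
         * so B3 can reuse bytes 28..39 (or be placed after them).     */
    };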

44 Variable-length Data
Arrays:
    If the size is fixed at compile time, store the array in the fixed-length data area
    If the size is variable, store a descriptor in the fixed-length area, with a pointer to the variable-length area
    The variable-length data area is assigned at the end of the fixed-length area for the block in which it is allocated
Example:
    B0: { int a, b
          … assign value to a
    B1: { int v(a), b, x
    B2: { int x, y(8)
          ….
[Layout: a, b, v, b, x, x, y(8), then v(a) in the variable-length area, which holds the variable-length data for all blocks in the procedure]
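
A sketch of the descriptor idea in C, with invented field names; malloc stands in for carving space out of the variable-length area at the end of the block's storage.

    #include <stdlib.h>

    /* Descriptor kept in the fixed-length area for v(a);
     * the elements live in the variable-length area.     */
    typedef struct {
        int  length;       /* runtime extent: the value of a      */
        int *elements;     /* pointer to the variable-length data */
    } ArrayDesc;

    int main(void) {
        int a = 12;                            /* value assigned in B0 */
        ArrayDesc v;                           /* fixed-length slot    */
        v.length   = a;
        v.elements = malloc(a * sizeof(int));  /* variable-length area */
        v.elements[0] = 1;
        free(v.elements);
        return 0;
    }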

45 Activation Record Basics
One AR for each invocation of a procedure. Fields, at known offsets from the ARP:
    parameters - space for parameters to the current routine
    register save area - saved register contents
    return value - if a function, space for the return value
    return address - address at which to resume the caller
    addressability - help with non-local access
    caller's ARP - to restore the caller's AR on a return
    local variables - space for local values & variables (including spills)
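
A hypothetical C rendering of that layout (a real compiler lays the record out at known offsets from the ARP rather than as a struct; field types and sizes are invented):

    #include <stddef.h>
    #include <stdio.h>

    typedef struct AR AR;

    struct AR {
        long params[4];         /* parameters to the current routine       */
        long reg_save[8];       /* register save area                      */
        long return_value;      /* if a function, space for return value   */
        void *return_address;   /* address at which to resume the caller   */
        AR   *access_link;      /* addressability: non-local access        */
        AR   *callers_arp;      /* to restore the caller's AR on a return  */
        long locals[1];         /* local variables, including spills       */
    };

    int main(void) {
        /* variables live at known offsets from the ARP */
        printf("locals start at ARP + %zu\n", offsetof(AR, locals));
        return 0;
    }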

46 Activation Record Details
How does the compiler find the variables?
    They are at known offsets from the AR pointer
    The static coordinate leads to a "loadAI" operation
        The level specifies an ARP; the offset is the constant
Variable-length data:
    If the AR can be extended, put it below the local variables
    Leave a pointer at a known offset from the ARP
    Otherwise, put variable-length data on the heap
Initializing local variables:
    Must generate explicit code to store the values
    Among the procedure's first actions

47 Activation Record Details
Where do activation records live?
    If the lifetime of the AR matches the lifetime of the invocation, AND
    if the code normally executes a "return"
        ⇒ keep ARs on a stack (yes, the stack in the classic memory layout!)
    If a procedure can outlive its caller, OR
    if it can return an object that can reference its execution state
        ⇒ ARs must be kept in the heap
    If a procedure makes no calls
        ⇒ its AR can be allocated statically
Efficiency prefers static, then stack, then heap

48 Communicating Between Procedures
Most languages provide a parameter-passing mechanism
    An expression used at the "call site" becomes a variable in the callee
Two common binding mechanisms:
Call-by-reference passes a pointer to the actual parameter
    Requires a slot in the AR (for the address of the parameter)
    Multiple names with the same address? (e.g., call fee(x,x,x);)
Call-by-value passes a copy of its value at the time of the call
    Requires a slot in the AR
    Each name gets a unique location (though they may have the same value)
    Arrays are mostly passed by reference, not value
Can always use global variables …
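
A small C illustration of the two bindings; C itself is call-by-value, so call-by-reference is simulated here with an explicit pointer, which is essentially what the AR slot holds.

    #include <stdio.h>

    void by_value(int v)      { v = 99; }    /* changes a private copy */
    void by_reference(int *p) { *p = 99; }   /* changes the caller's x */

    int main(void) {
        int x = 1;
        by_value(x);
        printf("%d\n", x);     /* prints 1  */
        by_reference(&x);
        printf("%d\n", x);     /* prints 99 */
        return 0;
    }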

49 Establishing Addressability
Must create base addresses
    Global & static variables
        Construct a label by mangling names (i.e., &_fee)
    Local variables
        Convert to a static coordinate and use ARP + offset
    Local variables of other procedures
        Convert to static coordinates
        Find the appropriate ARP (must find the right AR; needs links to nameable ARs)
        Use that ARP + offset

50 Establishing Addressability
Using access links:
    Each AR has a pointer to the AR of its lexical ancestor
    The lexical ancestor need not be the caller
    A reference to <p,16> runs up the access-link chain to p
    The cost of access is proportional to the lexical distance
[Diagram: three ARs, each holding an access link that points to the AR of its lexical ancestor]
Some setup cost on each call
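
A sketch of the access-link walk in C, reusing a hypothetical AR layout; levels and offsets are illustrative.

    typedef struct AR AR;
    struct AR {
        AR  *access_link;       /* AR of the lexical ancestor              */
        char data[64];          /* rest of the record, addressed by offset */
    };

    /* Resolve static coordinate <level, offset> from the AR of a
     * procedure at cur_level: one hop per level of lexical distance. */
    char *resolve(AR *arp, int cur_level, int level, int offset) {
        for (int d = cur_level - level; d > 0; d--)
            arp = arp->access_link;
        return arp->data + offset;      /* then ARP + offset */
    }

    int main(void) {
        AR outer = {0}, inner = {0};
        inner.access_link = &outer;        /* level 2's ancestor at level 1 */
        *resolve(&inner, 2, 1, 8) = 'x';   /* reference into the ancestor   */
        return outer.data[8] == 'x' ? 0 : 1;
    }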

51 Establishing Addressability
Using access links:
    Access & maintenance cost varies with the level
    All accesses are relative to the ARP (r0)
Assume:
    The current lexical level is 2
    The access link is at ARP - 4
Maintaining the access link:
    Calling level k+1
        Use the current ARP as the link
    Calling level j < k
        Find the ARP for level j - 1
        Use that ARP as the link

52 Establishing Addressability
Using a display:
    A global array of pointers to the nameable ARs
    The needed ARP is one array access away
    A reference to <p,16> looks up p's ARP in the display & adds 16
    The cost of access is constant (ARP + offset)
[Diagram: display slots for levels 0-3 point at the current AR for each level; each AR saves the displaced entry in its "saved ptr." field]
Some setup cost on each call
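
A matching C sketch of the display scheme; the global array stands in for the block of memory at label _disp, and all names are invented.

    typedef struct AR AR;
    struct AR { char data[64]; };    /* addressed by offset from the ARP */

    #define MAX_DEPTH 16
    AR *display[MAX_DEPTH];          /* _disp: one slot per lexical level */

    /* Resolve <level, offset>: one array access plus an add,
     * regardless of lexical distance.                         */
    char *resolve(int level, int offset) {
        return display[level]->data + offset;
    }

    int main(void) {
        AR p_ar = {{0}};
        display[2] = &p_ar;          /* p's prolog stored its ARP */
        *resolve(2, 16) = 'x';       /* a reference to <p,16>     */
        return p_ar.data[16] == 'x' ? 0 : 1;
    }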

53 Establishing Addressability
Using a display:
    Access & maintenance costs are fixed
    The address of the display may consume a register
Assume:
    The current lexical level is 2
    The display is at label _disp
Maintaining the display:
    On entry to a procedure at level j
        Save the level j display entry into the AR (the "saved ptr." field)
        Store the ARP into the level j slot
    On exit from level j
        Restore the level j entry
The desired AR is at _disp + 4 × level

54 Establishing Addressability
Access links versus display:
    Each adds some overhead to each call
    Access-link costs vary with the level of the reference
        Overhead is only incurred on references & calls
        If ARs outlive the procedure, access links still work
    Display costs are fixed for all references
        References & calls must load the display address
        Typically, this requires a register (rematerialization)
Your mileage will vary
    Depends on the ratio of non-local accesses to calls
    An extra register can make a difference in overall speed
For either scheme to work, the compiler must insert code into each procedure call & return

55 Procedure Linkages
How do procedure calls actually work?
At compile time, the callee may not be available for inspection
    Different calls may be in different compilation units
    The compiler may not know system code from user code
    All calls must use the same protocol
The compiler must use a standard sequence of operations
    Enforces control & data abstractions
    Divides responsibility between caller & callee
    Usually a system-wide agreement (for interoperability)

56 Procedure Linkages
Standard procedure linkage:
    Each procedure has a standard prolog and a standard epilog
    Each call involves a pre-call sequence and a post-return sequence in the caller
    These are completely predictable from the call site
        They depend on the number & type of the actual parameters
[Diagram: in procedure p, the pre-call and post-return sequences bracket the call; procedure q runs its prolog on entry and its epilog on exit]

57 Procedure Linkages
Pre-call sequence:
    Sets up the callee's basic AR
    Helps preserve the caller's own environment
The details:
    Allocate space for the callee's AR
        except space for local variables
    Evaluate each parameter & store the value or address
    Save the return address and the caller's ARP into the callee's AR
    If access links are used
        Find the appropriate lexical ancestor & copy it into the callee's AR
    Save any caller-save registers
        Save them into space in the caller's AR
    Jump to the address of the callee's prolog code

58 Procedure Linkages
Post-return sequence:
    Finish restoring the caller's environment
    Place any return value back where it belongs
The details:
    Copy the return value from the callee's AR, if necessary
    Free the callee's AR
    Restore any caller-save registers
    Restore any call-by-reference parameters to registers, if needed
        Also copy back call-by-value/result parameters
    Continue execution after the call

59 Procedure Linkages
Prolog code:
    Finish setting up the callee's environment
    Preserve the parts of the caller's environment that will be disturbed
The details:
    Preserve any callee-save registers
    If a display is being used
        Save the display entry for the current lexical level
        Store the current ARP into the display for the current lexical level
    Allocate space for local data
        The easiest scenario is to extend the AR
    Find any static data areas referenced in the callee
    Handle any local variable initializations
(With a heap-allocated AR, the compiler may need to use a separate heap object for local variables.)

60 Procedure Linkages
Epilog code:
    Wind up the business of the callee
    Start restoring the caller's environment
The details:
    Store the return value? No, this happens on the return statement
    Restore callee-save registers
    Free space for local data, if necessary (on the heap)
    Load the return address from the AR
    Restore the caller's ARP
    Jump to the return address
(If ARs are stack-allocated, freeing may not be necessary: the caller can reset the stack top to its pre-call value.)

