1 Chapter 8: Code Generation

2 Generating Instructions from Three-address Code

Example: D = (A*B)+C

    =* A  B  T1
    =+ T1 C  T2
    =  T2    D

3 Skeletons

=+ =>  Load R1 0   (first parameter)
       Add  R1 1   (2nd parameter)
       Stor R1 2   (3rd parameter)

=* =>  Load R1 0   (first parameter)
       Mul  R1 1   (2nd parameter)
       Stor R1 2   (3rd parameter)

=  =>  Load R1 0   (first parameter)
       Stor R1 2   (3rd parameter)

4 Thus we get:

Load R1 A
Mul  R1 B
Stor R1 T1
Load R1 T1
Add  R1 C
Stor R1 T2
Load R1 T2
Stor R1 D

There are 8 instructions.
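The skeleton expansion can be sketched as a small table-driven generator (a hypothetical Python sketch, not from the slides; the `SKELETONS` table and `generate` function are illustrative names):

```python
# Hypothetical sketch of naive skeleton-based code generation.
# Each three-address operator maps to a fixed instruction skeleton;
# %0, %1, %2 stand for the first, second, and third parameters.

SKELETONS = {
    "=*": ["Load R1 %0", "Mul R1 %1", "Stor R1 %2"],
    "=+": ["Load R1 %0", "Add R1 %1", "Stor R1 %2"],
    "=":  ["Load R1 %0", "Stor R1 %2"],
}

def generate(tacs):
    """Expand each three-address statement into its instruction skeleton."""
    code = []
    for op, *args in tacs:
        for line in SKELETONS[op]:
            for i, a in enumerate(args):
                if a is not None:
                    line = line.replace(f"%{i}", a)
            code.append(line)
    return code

# D = (A*B)+C as three-address code (from slide 2)
tac = [("=*", "A", "B", "T1"), ("=+", "T1", "C", "T2"), ("=", "T2", None, "D")]
print(generate(tac))   # 8 instructions, as on this slide
```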

5 Additional skeleton (pseudo-operator)

FX: fetch if not already available. The code generator must remember what is in each register. Thus we have:

=+ =>  FX   R1 0   (first parameter)
       Add  R1 1   (2nd parameter)
       Stor R1 2   (3rd parameter)

=* =>  FX   R1 0
       Mul  R1 1
       Stor R1 2

=  =>  FX   R1 0
       Stor R1 2

6 Thus we get:

Load R1 A
Mul  R1 B
Stor R1 T1
Add  R1 C
Stor R1 T2
Stor R1 D

There are 6 instructions.

7 Another Pseudo Operator: SX

SX: store if not used right away (or if it is not a temporary variable, e.g., T1, T2, etc.). Thus we have:

=+ =>  FX  R1 0   (first parameter)
       Add R1 1   (2nd parameter)
       SX  R1 2   (3rd parameter)

=* =>  FX  R1 0
       Mul R1 1
       SX  R1 2

=  =>  FX  R1 0
       SX  R1 2

8 Thus we finally get:

Load R1 A
Mul  R1 B
Add  R1 C
Stor R1 D

There are 4 instructions.
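Putting FX and SX together, the whole single-register generator might be sketched like this (hypothetical Python, not from the slides; it assumes one register R1 and that every temporary is consumed immediately after being computed, as in this example):

```python
# Hypothetical sketch of skeleton generation with the FX and SX
# pseudo-operators. FX loads only if the value is not already in R1;
# SX skips stores of temporaries (names like T1, T2) that are used
# right away.

def generate_fx_sx(tacs):
    code = []
    in_r1 = None                        # what the generator knows R1 holds
    for op, a1, a2, result in tacs:
        if in_r1 != a1:                 # FX: fetch only if not already there
            code.append(f"Load R1 {a1}")
        if op == "=*":
            code.append(f"Mul R1 {a2}")
        elif op == "=+":
            code.append(f"Add R1 {a2}")
        if not result.startswith("T"):  # SX: skip stores of temporaries
            code.append(f"Stor R1 {result}")
        in_r1 = result                  # R1 now holds the result
    return code

tac = [("=*", "A", "B", "T1"), ("=+", "T1", "C", "T2"), ("=", "T2", None, "D")]
print(generate_fx_sx(tac))  # ['Load R1 A', 'Mul R1 B', 'Add R1 C', 'Stor R1 D']
```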


11 Optimization

The running time we expect to save over the expected number of runs of the optimized object program must exceed the time spent by the compiler doing the optimization.

12 Peephole Optimization

A simple but effective technique for locally improving the target code is peephole optimization: a method for improving the performance of the target program by examining a short sequence of target instructions (called the peephole) and replacing these instructions with a shorter or faster sequence whenever possible. Techniques include:

- redundant instruction elimination
- flow-of-control optimization
- algebraic simplification
- use of machine idioms
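Redundant instruction elimination is easy to see on the 8-instruction code from slide 4: a Load of the value that the previous instruction just stored into the same register can be deleted. A hypothetical sketch of that one peephole rule:

```python
# Hypothetical sketch of one peephole rule: drop a Load that immediately
# follows a Stor of the same register and operand (the value is already
# in the register).

def peephole(code):
    out = []
    for instr in code:
        if (out and instr.startswith("Load")
                and out[-1] == instr.replace("Load", "Stor", 1)):
            continue                    # redundant: value already in R1
        out.append(instr)
    return out

naive = ["Load R1 A", "Mul R1 B", "Stor R1 T1",
         "Load R1 T1", "Add R1 C", "Stor R1 T2",
         "Load R1 T2", "Stor R1 D"]
print(peephole(naive))   # 6 instructions: both redundant Loads removed
```

This reproduces the 6-instruction result of slide 6 without the FX pseudo-operator, by cleaning up the naive code after the fact.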

13 Sources for optimization:

1. register allocation
2. handling of inner loops
   - removal of loop-invariant computations
   - elimination of induction variables
3. identification of common subexpressions, e.g.
   A[I+1] = B[I+1]  ==>  J = I + 1; A[J] = B[J]
4. replacement of run-time computations by compile-time computations (substitution of values for names whose values are constant ==> constant folding)
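Constant folding (point 4) can be sketched over the three-address form used in these slides (hypothetical Python; `fold_constants` and the `constants` map are illustrative names, and only `=+` is handled):

```python
# Hypothetical sketch of constant folding: replace a run-time addition
# whose operands are known constants with its compile-time value.

def fold_constants(tacs, constants):
    """constants maps names to known compile-time constant values."""
    folded = []
    for op, a1, a2, result in tacs:
        v1, v2 = constants.get(a1), constants.get(a2)
        if op == "=+" and v1 is not None and v2 is not None:
            constants[result] = v1 + v2           # computed at compile time
            folded.append(("=", str(v1 + v2), None, result))
        else:
            folded.append((op, a1, a2, result))
    return folded

# I and J have known constant values, so K = I + J folds to K = 5.
print(fold_constants([("=+", "I", "J", "K")], {"I": 2, "J": 3}))
```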

14 Suitable generation of temporary variables

Temporary variables can be put in the symbol table (suitable for optimizing compilers), or they can be kept out of the symbol table (use triples to save the intermediate code). Only the type of a temporary variable need be recorded; the type can be stored along with the three-address statements that use it.

15 Example: D = A*B+C

Three-address code:

T1 = A * B
T2 = T1 + C
D  = T2

In quadruples:

index  Operator  Arg1  Arg2  Result
(0)    =*        A     B     T1
(1)    =+        T1    C     T2
(2)    =         T2          D

16 In triples:

index  Operator  Arg1  Arg2
(0)    =*        A     B
(1)    =+        (0)   C
(2)    =         (1)   D

- Temporary names must be distinct from any name the programmer might use, for example by beginning with '$'.
- The scope of a temporary name is the portion of the program between its definition and its last use. We can replace two temporaries by one name if their scopes do not overlap.

17 Algorithm to compute temporary names $c
/* c is a nonnegative integer, initially 0 */

1. Keep a count c, initialized to zero.
2. Whenever we use a temporary name as an operand, decrement c by 1.
3. Whenever we generate a new temporary name, use $c and increment c by 1.

18 An Example: X = A * B - C * D + E / F

statement        value of c
                 0   // initial value of c
$0 = A * B       1
$1 = C * D       2
$0 = $0 - $1     1
$1 = E / F       2
$0 = $0 + $1     1
X  = $0          0
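The counting scheme can be sketched directly over three-address statements (hypothetical Python, assuming each temporary Tn is defined once and used exactly once, as in this example):

```python
# Hypothetical sketch of the $c naming scheme: decrement c for each
# temporary used as an operand, then name a newly generated temporary
# $c and increment c. Non-overlapping temporaries reuse the same name.

def rename_temporaries(tacs):
    c = 0
    names = {}                          # current $-name of each temporary
    out = []
    for op, a1, a2, result in tacs:
        args = []
        for a in (a1, a2):
            if a in names:              # using a temporary: decrement c
                c -= 1
                args.append(names[a])
            else:
                args.append(a)
        if result.startswith("T"):      # generating a temporary: use $c
            names[result] = f"${c}"
            args.append(f"${c}")
            c += 1
        else:
            args.append(result)
        out.append((op, args[0], args[1], args[2]))
    return out

# X = A * B - C * D + E / F, with temporaries T1..T5
tac = [("=*", "A", "B", "T1"), ("=*", "C", "D", "T2"),
       ("=-", "T1", "T2", "T3"), ("=/", "E", "F", "T4"),
       ("=+", "T3", "T4", "T5"), ("=", "T5", None, "X")]
for quad in rename_temporaries(tac):
    print(quad)
```

Walking through the example reproduces the table above: only two names, $0 and $1, are ever needed.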
