
Compiler Construction Sohail Aslam Lecture 35

IR Taxonomy
IRs fall into three organizational categories:
1. Graphical IRs encode the compiler's knowledge in a graph.

IR Taxonomy
2. Linear IRs resemble pseudocode for some abstract machine.
3. Hybrid IRs combine elements of both graphical (structural) and linear IRs.


Graphical IRs
• Parse trees are graphs that represent the source-code form of the program.
• The structure of the tree corresponds to the syntax of the source code.


Graphical IRs
• Parse trees are used primarily in discussions of parsing and in attribute-grammar systems, where they are the primary IR.
• In most other applications, compilers use one of the more concise alternatives.


Graphical IRs
• An abstract syntax tree (AST) retains the essential structure of the parse tree but eliminates extraneous nodes.

Graphical IRs
AST for a = b*-c + b*-c:

=
├── a
└── +
    ├── *
    │   ├── b
    │   └── – (unary)
    │       └── c
    └── *
        ├── b
        └── – (unary)
            └── c
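To make the tree concrete, here is a minimal sketch of the same AST in Python (tuple-based nodes and the 'neg' tag for unary minus are assumptions, not lecture notation):

```python
# Hypothetical tuple-based AST: (operator, children...); a leaf is a plain name.
mul1 = ('*', 'b', ('neg', 'c'))       # first copy of b * -c
mul2 = ('*', 'b', ('neg', 'c'))       # second, structurally identical copy
ast  = ('=', 'a', ('+', mul1, mul2))  # a = b*-c + b*-c

assert mul1 == mul2                   # duplicated subtrees
```

The two equal subtrees under + are exactly the duplication that the DAG representation removes.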

Graphical IRs
ASTs have been used in many practical compiler systems:
• source-to-source systems
• automatic parallelization tools
• pretty-printing


Graphical IRs
• An AST is more concise than a parse tree.
• It faithfully retains the structure of the original source code.
• Consider the AST for x*2+x*2*y.


Graphical IRs
The AST contains two distinct copies of x*2:

+
├── *
│   ├── x
│   └── 2
└── *
    ├── *
    │   ├── x
    │   └── 2
    └── y

Graphical IRs
A directed acyclic graph (DAG) is a contraction of the AST that avoids duplication. In the DAG for x*2+x*2*y the node for x*2 is shared by both uses:

n1 = x
n2 = 2
n3 = *(n1, n2)    (the single shared x*2 node)
n4 = *(n3, y)
n5 = +(n3, n4)

Graphical IRs
If the value of x does not change between uses of x*2, the compiler can generate code that evaluates the subtree once and uses the result twice.
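Such sharing is commonly implemented by hash-consing: before creating a node, look it up in a table of nodes already built. A sketch, assuming hashable tuple nodes (the make helper is hypothetical; the lecture does not prescribe an implementation):

```python
# Build a DAG for x*2 + x*2*y by caching structurally identical nodes.
cache = {}

def make(*node):
    # Return the cached node if an identical one was already built,
    # so common subexpressions like x*2 are represented only once.
    return cache.setdefault(node, node)

x2a = make('*', 'x', '2')                   # first use of x*2
x2b = make('*', 'x', '2')                   # second use -> same cached node
root = make('+', x2a, make('*', x2b, 'y'))

assert x2a is x2b                           # one shared node, as in the DAG
```

Because make returns the identical object for identical subtrees, later passes can detect the shared subexpression with a simple identity test.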

Graphical IRs
• The task of building an AST fits neatly into an ad hoc syntax-directed translation scheme.
• Assume that the compiler has routines mknode and mkleaf for creating tree nodes.


Production        Semantic Rule
E → E1 + E2       E.nptr = mknode('+', E1.nptr, E2.nptr)
E → E1 * E2       E.nptr = mknode('*', E1.nptr, E2.nptr)
E → – E1          E.nptr = mknode('–', E1.nptr)
E → ( E1 )        E.nptr = E1.nptr
E → num           E.nptr = mkleaf('num', num.val)

Production        Semantic Rule (yacc)
E → E1 + E2       $$.nptr = mknode('+', $1.nptr, $3.nptr)
E → E1 * E2       $$.nptr = mknode('*', $1.nptr, $3.nptr)
E → – E1          $$.nptr = mknode('–', $2.nptr)
E → ( E1 )        $$.nptr = $2.nptr
E → num           $$.nptr = mkleaf('num', $1.val)
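The mknode/mkleaf routines from the rules above can be sketched in Python (tuple nodes are an assumption; the lecture leaves the node representation open):

```python
# Hypothetical implementations of the node-building routines.
def mkleaf(label, value):
    # Leaf node: a label plus the token's value.
    return (label, value)

def mknode(op, *children):
    # Interior node: an operator plus pointers to its child nodes.
    return (op, *children)

# Applying the rules to 3 + 4 * 5, i.e. E -> E1 + E2 with E2 -> E1 * E2:
e1 = mkleaf('num', 3)
e2 = mknode('*', mkleaf('num', 4), mkleaf('num', 5))
root = mknode('+', e1, e2)
```

Each reduction in the parser calls one of these routines, so the AST is built bottom-up as parsing proceeds.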

Intermediate Languages
• We will use another IR, called three-address code, for actual code generation.
• The semantic rules for generating three-address code for common programming-language constructs are similar to those for the AST.


Linear IRs
• The alternative to a graphical IR is a linear IR.
• An assembly-language program is a form of linear code.
• It consists of a sequence of instructions that execute in order of appearance.


Linear IRs
Two linear IRs used in modern compilers are:
• stack-machine code
• three-address code

Linear IRs
Linear IR for x – 2 * y:

stack-machine     three-address
push 2            t1 ← 2
push y            t2 ← y
multiply          t3 ← t1 * t2
push x            t4 ← x
subtract          t5 ← t4 – t3

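The three-address column can be produced by a post-order walk over an AST. A sketch (tuple nodes, the temp-naming scheme, and left-to-right operand evaluation are assumptions, so the temporary numbering differs slightly from the slide):

```python
# Generate three-address code by walking a tuple-based AST.
temps = []

def newtemp():
    t = f"t{len(temps) + 1}"
    temps.append(t)
    return t

code = []

def gen(node):
    # Interior nodes first generate code for their operands;
    # leaves (names/constants) are loaded into a fresh temporary.
    if isinstance(node, tuple):
        op, left, right = node
        a, b = gen(left), gen(right)
        t = newtemp()
        code.append(f"{t} <- {a} {op} {b}")
        return t
    t = newtemp()
    code.append(f"{t} <- {node}")
    return t

gen(('-', 'x', ('*', '2', 'y')))   # code now holds five instructions
```

Each interior node produces exactly one instruction and one new temporary, which is what makes three-address code so uniform to generate.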

Stack-Machine Code
• Stack-machine code is sometimes called one-address code.
• It assumes the presence of an operand stack.

Stack-Machine Code
• Most operations take their operands from the stack and push results back onto the stack.
• Stack-machine code is compact; it eliminates many names from the IR.
• This shrinks the program in IR form.


Stack-Machine Code
• All results and arguments are transitory unless explicitly moved to memory.
• Stack-machine code is simple to generate and execute.
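Executing stack-machine code is equally simple. Here is a toy evaluator for the sequence shown earlier (the instruction names, and the convention that subtract computes top-of-stack minus the element below it, are assumptions chosen to match the slide's operand order):

```python
# Minimal stack-machine evaluator for the sequence computing x - 2*y.
def run(program, env):
    stack = []
    for instr in program:
        op, *arg = instr.split()
        if op == "push":
            # Push a literal number or the value of a variable from env.
            token = arg[0]
            stack.append(env.get(token, int(token) if token.isdigit() else None))
        elif op == "multiply":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "subtract":
            # Assumed convention: top of stack minus the element below it.
            a, b = stack.pop(), stack.pop()
            stack.append(a - b)
    return stack.pop()

result = run(["push 2", "push y", "multiply", "push x", "subtract"],
             {"x": 10, "y": 3})                # result == 4, i.e. 10 - 2*3
```

The evaluator keeps no named temporaries at all: every intermediate value lives only on the operand stack, which is exactly why stack-machine results are transitory unless moved to memory.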


Stack-Machine Code
• Smalltalk-80 and Java use bytecodes, which are abstract stack-machine code.
• The bytecode is either interpreted or translated into target machine code (JIT).


Three-Address Code
In three-address code, most operations have the form x ← y op z, with an operator (op), two operands (y and z), and one result (x).

Three-Address Code
• Some operators, such as an immediate load and a jump, will need fewer arguments.
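Operations with fewer arguments still fit the same instruction shape by leaving fields unused. A sketch of a quadruple layout (the field order and op names are assumptions for illustration):

```python
# Hypothetical quadruple layout: (op, arg1, arg2, result).
# Binary ops fill every field; loads and jumps leave some fields None.
quads = [
    ("mul",  "t1", "t2", "t3"),   # t3 <- t1 * t2 : full three-address form
    ("load", 2,    None, "t1"),   # t1 <- 2       : immediate load, one operand
    ("jump", "L1", None, None),   # goto L1       : no operands, no result
]

# Every instruction has the same shape regardless of arity:
assert all(len(q) == 4 for q in quads)
```

Keeping a fixed-width record for every instruction simplifies the tables and passes that later phases of the compiler run over the IR.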