March 4, 2008, DF4-1, http://csg.csail.mit.edu/arvind/
From Id/pH to Dataflow Graphs
Arvind
Computer Science & Artificial Intelligence Lab, Massachusetts Institute of Technology

DF4-2: Parallel Language Model
[Diagram: a global heap of shared objects above a tree of activation frames f, g, h (including a loop frame), each with active threads.]
Asynchronous and parallel at all levels.

DF4-3: The Id World: implicit parallelism
[Diagram: Id compiles to Dataflow Graphs + I-Structures + ..., targeting the TTDA, Monsoon, *T, and *T-Voyager machines.]

DF4-4: Id Compiler Phases
Id -> Kernel Id (desugar) -> Parallel Three Address Code, P-TAC (select representations) -> Dataflow Graphs/Threads -> Machine Graphs -> von Neumann code
(The early phases are reduction systems; the final target is a fixed program.)
The semantics of each intermediate language is important to show the correctness of each module.

DF4-5: Choosing Representations: Arrays
[Diagram: candidate array representations, each storing the lower and upper bounds alongside the elements.]

DF4-6: Choosing Representations: Lists
type list *0 = nil | cons *0 (list *0)
[Diagram: 1. a boxed representation, with nil (tag=0) and cons (tag=1) as tagged heap cells; 2. an unboxed representation.]
Implicit assumption: pointers can be distinguished from small integers. Similar issues arise for all algebraic types.
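The boxed, tagged representation can be sketched in a few lines. This is illustrative Python, not the Id runtime: tuples stand in for heap cells, and the names NIL_TAG, CONS_TAG, tag_of, and list_sum are my own.

```python
# A small sketch of the boxed representation of
#   type list *0 = nil | cons *0 (list *0)
# Each heap cell carries a tag word: 0 for nil, 1 for cons.

NIL_TAG, CONS_TAG = 0, 1

def nil():
    return (NIL_TAG,)                  # a nil cell holds only its tag

def cons(head, tail):
    return (CONS_TAG, head, tail)      # a cons cell: tag plus two fields

def tag_of(cell):
    return cell[0]

def list_sum(xs):
    # Dispatch on the tag, as the compiled case construct would.
    if tag_of(xs) == NIL_TAG:
        return 0
    _, head, tail = xs
    return head + list_sum(tail)
```

For example, `list_sum(cons(1, cons(2, nil())))` walks the tags and returns 3.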

DF4-7: Case Expression and Tags
type foo = C1 ... | C2 ... | ... | Cn ...
The case expression for type foo, foo-case(d, e1, ..., en), is implemented using the case construct:
case_k(i, e1, ..., ek) => e_i, where i = ifetch(d)
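A minimal executable sketch of this compilation scheme, assuming the tagged cells of the previous slide. The zero-indexed tag numbering and the helper name list_case are assumptions of mine, not the Id compiler's.

```python
# foo-case(d, e1, ..., en) compiles to: read the tag i of the scrutinee's
# heap cell (ifetch), then select the i-th branch (case_k). Branches are
# thunks so that only the chosen alternative is evaluated.

def ifetch(cell):
    return cell[0]                     # the tag stored in the heap cell

def case_k(i, *branches):
    return branches[i]()               # case_k(i, e1, ..., ek) yields e_i

# foo-case for a two-constructor type, with tag 0 = nil and tag 1 = cons:
def list_case(cell, on_nil, on_cons):
    return case_k(ifetch(cell), on_nil, on_cons)

empty = (0,)
pair = (1, 42, empty)                  # a cons cell holding 42
result = list_case(pair, lambda: "nil", lambda: "cons")
```

Passing the branches as thunks mirrors the conditional nature of the case schema: the graph for the untaken branch never fires.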

DF4-8: Top Level Functions
f = λ_k(x1, ..., xk).e
Top-level functions do not contain any free variables except the names of other top-level functions. In general, the arguments can be supplied in one of two ways:
1. As a data structure which is built incrementally:
   F = λxs.{ (x1, ..., xk) = destruct(xs); in e }
2. All at once (fast call):
   F_fc = λ_k(x1, ..., xk).e

DF4-9: Multi-arity Functions
def f x1 ... xk = e can be translated as
f = λx1. λx2. ... λxk. e
However, most implementations use k-arity λs:
f = λ_k(x1,...,xk).e
with the rules:
(λ_k(x1,...,xk).e) a → { x'1 = a in ((λ_k(x1,...,xk).e) x'1) }   if k > 1
((λ_k(x1,...,xk).e) x'1 ... x'n) a → { x'n+1 = a in ((λ_k(x1,...,xk).e) x'1 ... x'n+1) }   if n < k-1
((λ_k(x1,...,xk).e) x'1 ... x'n) a → { x'n+1 = a in e[x'1/x1] ... [x'k/xk] }   if n = k-1
Partial applications are values and thus substitutable.
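These rules can be mimicked directly. The sketch below is illustrative Python, not Id: lambda_k is a name I introduce; each partial application is an ordinary first-class value, and the body fires only when the k-th argument arrives.

```python
# A model of the k-arity application rules: applying one more argument either
# yields a new partial application (the n < k-1 rule) or, when the k-th
# argument arrives (the n = k-1 rule), substitutes all the collected
# arguments into the body at once.

def lambda_k(k, body):
    def partial(collected):
        def apply_one(a):
            args = collected + (a,)
            if len(args) == k:
                return body(*args)     # n = k-1: run the body
            return partial(args)       # n < k-1: a new partial application
        return apply_one
    return partial(())

f = lambda_k(3, lambda x, y, z: x + y * z)
g = f(1)(10)                           # a partial application, held as a value
```

Because g is just a value it can be stored, duplicated, and applied more than once, matching the slide's point that partial applications are substitutable: g(4) gives 41 and g(5) gives 51.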

DF4-10: Loop Translation
TE[{while ep do S finally ef}] =
  { p = λ_k(x1,...,xk).TE[ep];
    b = λ_k(x1,...,xk).{ TS[S] in nextx1, ..., nextxk };
    tp = p x;
    t = Wloop_k(p, b, x, tp);
    tf = TE[ef][t/x]
    in tf }
where x = (x1,...,xk), the xi's are the "nextified" variables of S, and "next x" is replaced by "nextx" everywhere.

DF4-11: Loop Rules
Wloop_k(p, b, x, true) → { t = b x; tp = p t; t' = Wloop_k(p, b, t, tp) in t' }
Wloop_k(p, b, x, false) → x
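Read operationally, the two rules describe a tail-recursive loop. A sketch in plain Python, standing in for the rewrite engine, together with the while/finally translation of DF4-10 applied to a one-variable loop:

```python
# The two Wloop rules as an executable sketch: while the predicate result tp
# is true, run the body to get new circulating values and re-test; when tp
# is false, the circulating values x are the loop's result.

def wloop(p, b, x, tp):
    while tp:                 # Wloop_k(p, b, x, true): t = b x; tp = p t; recurse
        x = b(x)
        tp = p(x)
    return x                  # Wloop_k(p, b, x, false): x

# Translating:  while x < 5 do x = x + 1 finally x
p = lambda x: x < 5           # the lifted predicate
b = lambda x: x + 1           # the lifted, nextified body
x0 = 0
result = wloop(p, b, x0, p(x0))
```

Note that the predicate is evaluated once before the loop (the tp = p x binding in the translation) and then once per iteration, exactly as the rules prescribe.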

DF4-12: KId, the Kernel Id Language
An extended λS-calculus:
E ::= x | λ_k(x1,...,xk).E | SE SE | { S in SE } | Case_k(SE, E1,...,Ek) | PF_k(SE1,...,SEk) | CN_0 | CN_k(SE1,...,SEk) | allocate() | Wloop(Ep, Eb, x, bool)
S ::= x = E | S; S | S --- S | store(SE, SE, SE)
PF_2 ::= ... | i-fetch | m-fetch
Simple expressions:
SE ::= x | CN_0 | CN_k(SE1,...,SEk) | λ_k(x1,...,xk).E | ((λ_k(x1,...,xk).E) SE1 ... SEn), n < k

DF4-13: KId Optimizations
"Totally correct" optimizations (λS rewrite rules):
- Constant propagation/folding
- Fetch elimination
- Barrier discharge
- Inline substitution/specialization
- Dead-code elimination/garbage collection
- ...
Additional transformations or rewrite rules:
- Algebraic identities
- Common subexpression elimination
- Code hoisting out of conditionals
- Loop optimizations: peeling/unrolling, lifting loop invariants, eliminating circulating variables and constants, induction variable analysis
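As a hedged illustration of the first two items (a toy expression tree, not KId), constant propagation and constant folding compose naturally as rewrite rules:

```python
# A minimal sketch of two "totally correct" rewrite rules on a tiny
# expression language: an expression is an int constant, a variable name,
# or a tuple (op, e1, e2). Known bindings are propagated, and operator
# applications with constant operands are folded.

import operator

OPS = {'+': operator.add, '*': operator.mul}

def fold(expr, env):
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env.get(expr, expr)            # constant propagation
    op, e1, e2 = expr
    e1, e2 = fold(e1, env), fold(e2, env)
    if isinstance(e1, int) and isinstance(e2, int):
        return OPS[op](e1, e2)                # constant folding
    return (op, e1, e2)

# x is known to be 3, y is unknown:  (x * 4) + y  rewrites to  12 + y
optimized = fold(('+', ('*', 'x', 4), 'y'), {'x': 3})
```

In KId these rewrites are justified by the λS rules themselves, which is why the slide calls them "totally correct": they preserve the semantics of every program, not just the well-behaved ones.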

DF4-14: P-TAC: Three Address Code
A lower-level language than KId:
1. Exposes all address calculations; requires choosing representations for all data structures.
2. No nested function definitions: λ-abstractions exist only at the top level.
3. Partial applications are represented as data structures (closures).
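Point 3 can be sketched as follows; the record layout and the names make_closure and apply_closure are illustrative assumptions, not the actual P-TAC closure format.

```python
# At the P-TAC level a partial application is no longer a lambda but a flat
# closure record: a code pointer, the arity, and the arguments collected so
# far. Applying it either extends the record or, once full, enters the code.

def make_closure(code, arity, args=()):
    return ('closure', code, arity, args)

def apply_closure(clo, a):
    _, code, arity, args = clo
    args = args + (a,)
    if len(args) == arity:
        return code(*args)                    # saturated: jump to the code
    return make_closure(code, arity, args)    # still partial: a data structure

add3 = make_closure(lambda x, y, z: x + y + z, 3)
c1 = apply_closure(add3, 1)                   # a closure record, not a result
c2 = apply_closure(c1, 2)
result = apply_closure(c2, 3)
```

Flattening closures into records is what lets the next phase treat every application site uniformly: the apply schema only has to inspect a tag and an argument count.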

DF4-15: Generating Dataflow Graphs
GE :: P-TAC Expression → DFG
[Diagram: a graph G with data inputs x1 ... xn, a trigger input tg, and a signal output sg.]

DF4-16: P-TAC to Dataflow Graphs: Expressions
GE[c] = c
GE[x] = x
GE[x1 x2] = [an apply node with inputs x1 and x2]
GE[PF_k(x1,...,xk)] = [a PF_k node with inputs x1 ... xk]

DF4-17: Case Schema
GE[Case_k(i, e1,...,ek)] = [Diagram: a Switch steered by i routes x1 ... xn into the branch graphs GE[e1] ... GE[ek], whose outputs are merged.]
where {x1,...,xn} = FV(e1) ∪ ... ∪ FV(ek).
All the ei's must have the same number of outputs. Unused xi's in each branch have to be collected for signal generation.

DF4-18: Graph for Function Application: The Full Application Case
((λ_k(x1,...,xk).e) x'1 ... x'k-1) a → { x'k = a in e[x'1/x1] ... [x'k/xk] }
[Diagram: apply f a at tag t: get-context obtains a fresh context, change-tag operations send a into a new copy of GE[e], and the result's tag is changed back to the caller's.]
Apply is a conditional schema; the make-closure part is not shown.

DF4-19: Loop Schema
GE[Wloop_k(p, b, (x1,...,xk), tp)] = [Diagram: a switch steered by tp either circulates x1 ... xk through GS[Sb], whose nextx1 ... nextxk feed back through GE[ep], or emits them as the loop's outputs.]
where {x1,...,xk} are the nextified variables, and
p = λ_k(x1,...,xk).ep
b = λ_k(x1,...,xk).{ Sb in next x1, ..., next xk }

DF4-20: P-TAC to Dataflow Graphs: The Block Expression
GE[{S in x}] = [Diagram: GS(S) with inputs x1 ... xn and trigger tg; its named output x is the result, alongside the signal sg], where {x1,...,xn} = FV(S)
GS[x = e] = [Diagram: GE(e) with inputs x1 ... xn and trigger tg, producing x and the signal sg], where {x1,...,xn} = FV(e)
GS[store(x1, x2)] = [Diagram: an istore node with inputs x1 and x2, producing the signal sg]

DF4-21: Parallel Composition (''Wiring Diagrams'')
GS[S1; S2] = [Diagram: GS(S1) and GS(S2) side by side, sharing the trigger tg; their signals are combined into sg, with identity (Id) nodes wiring the pieces together.]
where {y1,...,yn} = BV(S1) ∪ BV(S2) and {x1,...,xn} = FV(S1) ∪ FV(S2) - {y1,...,yn}.
Each xij is to be connected to the yij with the same name.

DF4-22: Sequential Composition
GS[S1 --- S2] = [Diagram: GS(S1) above GS(S2); the signal of GS(S1) triggers GS(S2), and the combined signal is sg.]
where {x1,...,xn} = FV(S1) ∪ FV(S2) and {y1,...,yn} = BV(S1) ∪ BV(S2).
Each x2j is to be connected to the y1j with the same name.