
EECS 583 – Class 2 Control Flow Analysis University of Michigan September 12, 2011

Announcements & Reading Material

Daya's office hours are held in 1695 CSE, not 1620 CSE! (Tue/Thu/Fri, 3-5pm)
See the reading list on the course webpage; more will be added each week.

Topic 1 - Control flow analysis and optimization
- Today's class: Ch. 9.4 and 10.4 of the Red Dragon book (Aho, Sethi, Ullman)
- Next class:
  - "Trace Selection for Compiling Large C Applications to Microcode", Chang and Hwu, MICRO-21, 1988
  - "The Superblock: An Effective Technique for VLIW and Superscalar Compilation", Hwu et al., Journal of Supercomputing, 1993

Homework 1

Available on the course website:
http://www.eecs.umich.edu/~mahlke/courses/583f11/homeworks.html
You have ~2 weeks to do it, so get started soon! Due a week from Friday (Sept 23).

Three 4-core Linux machines are available for class use: Andrew, hugo, and wilma.
You need an EECS account to access them - see me if you do not have one.

Part 1 - Download and build LLVM (see http://www.llvm.org)
Part 2 - Collect some opcode, branch, and memory statistics
- You must figure out how to profile an application and extract the profile info

Homework 1 - Notes

llvm-gcc is already installed on the hurricane machines:
/y/583f11/llvm-gcc-install
Add /y/583f11/llvm-gcc-install/bin to your path so you can use the llvm-gcc executable.

If you want to use the hurricane machines, create a directory for yourself on one of them:
  cd /y/students
  mkdir uniquename
  chmod 700 uniquename
  cd uniquename
Don't accidentally put your llvm files in the /y/students directory!!
Then just download and build the llvm backend.

If you use your own machines, you need to build both llvm-gcc and the backend.

From Last Time: BB and CFG

Basic block - a sequence of consecutive operations in which flow of control enters at the beginning and leaves at the end, without halt or possibility of branching except at the end.

Control flow graph - a directed graph G = (V, E) where each vertex is a basic block, and there is an edge v1 (BB1) -> v2 (BB2) if BB2 can immediately follow BB1 in some execution sequence.

[Figure: CFG with blocks BB1-BB7 between Entry and Exit for the code:
x = y+1; if (c) x++; else x--; y = z+1; if (a) y++; y--; z++;]
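The definitions above can be made concrete with a small data structure. A minimal sketch in Python (the block names and the shape of the graph are invented for illustration, not taken from the lecture's figure):

```python
# A CFG as a successor map: block name -> list of successor blocks.
# b4 -> b1 forms a loop backedge in this hypothetical graph.
CFG = {
    "entry": ["b1"],
    "b1": ["b2", "b3"],    # conditional split
    "b2": ["b4"],
    "b3": ["b4"],          # join point
    "b4": ["b1", "exit"],  # b4 -> b1 iterates the loop
    "exit": [],
}

def predecessors(cfg):
    """Invert the successor map to get each block's predecessors."""
    preds = {b: [] for b in cfg}
    for b, succs in cfg.items():
        for s in succs:
            preds[s].append(b)
    return preds

print(predecessors(CFG)["b4"])  # ['b2', 'b3']
```

The inverted map is what the dominator analysis below consumes, since its dataflow equations are phrased over predecessors.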

From Last Time: Dominator (DOM)

Defn: Dominator - Given a CFG (V, E, Entry, Exit), a node x dominates a node y if every path from the Entry block to y contains x.

Three properties of dominators:
- Each BB dominates itself.
- If x dominates y, and y dominates z, then x dominates z.
- If x dominates z and y dominates z, then either x dominates y or y dominates x.

Intuition: given some BB, which blocks are guaranteed to have executed prior to executing that BB?

Dominator Analysis

Compute dom(BBi) = set of BBs that dominate BBi.

Initialization:
  Dom(entry) = {entry}
  Dom(everything else) = all nodes

Iterative computation:
  while change, do
    change = false
    for each BB (except the entry BB)
      tmp(BB) = BB + {intersection of Dom over all predecessor BBs}
      if (tmp(BB) != dom(BB))
        dom(BB) = tmp(BB)
        change = true

[Figure: example CFG with Entry, BB1-BB7, Exit]
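The iterative computation above translates almost line for line into code. A hedged Python sketch (the example CFG and its block names are invented; it also assumes every non-entry block is reachable, so each has at least one predecessor):

```python
def dominators(cfg, entry):
    """Iterative dataflow computation of dominator sets:
    dom(b) = {b} union (intersection of dom(p) over predecessors p)."""
    preds = {b: [] for b in cfg}
    for b, succs in cfg.items():
        for s in succs:
            preds[s].append(b)
    dom = {b: set(cfg) for b in cfg}  # everything else: all nodes
    dom[entry] = {entry}              # the entry dominates only itself
    changed = True
    while changed:
        changed = False
        for b in cfg:
            if b == entry:
                continue
            tmp = {b} | set.intersection(*(dom[p] for p in preds[b]))
            if tmp != dom[b]:
                dom[b] = tmp
                changed = True
    return dom

# Hypothetical example CFG with a loop b1..b4 (b4 -> b1 is the backedge).
CFG = {"entry": ["b1"], "b1": ["b2", "b3"], "b2": ["b4"],
       "b3": ["b4"], "b4": ["b1", "exit"], "exit": []}
dom = dominators(CFG, "entry")
print(sorted(dom["exit"]))  # ['b1', 'b4', 'entry', 'exit']
```

Initializing every non-entry set to "all nodes" matters: starting too small would make the intersection converge to an under-approximation.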

Immediate Dominator

Defn: Immediate dominator (idom) - Each node n has a unique immediate dominator m that is the last dominator of n on any path from the initial node to n.
- The closest node that dominates n.

[Figure: example CFG with Entry, BB1-BB7, Exit]

Dominator Tree

The first BB is the root node; each node dominates all of its descendants.

  BB   DOM
  1    1
  2    1,2
  3    1,3
  4    1,4
  5    1,4,5
  6    1,4,6
  7    1,4,7

[Figure: the CFG and its dominator tree - BB1 is the root with children BB2, BB3, and BB4; BB4 has children BB5, BB6, and BB7]

Class Problem

Draw the dominator tree for the following CFG.

[Figure: practice CFG with Entry, several BBs, and Exit]

Post Dominator (PDOM)

The reverse of dominator.

Defn: Post dominator - Given a CFG (V, E, Entry, Exit), a node x post dominates a node y if every path from y to the Exit contains x.

Intuition: given some BB, which blocks are guaranteed to execute after executing that BB?

Compute pdom(BBi) = set of BBs that post dominate BBi.

Initialization:
  Pdom(exit) = {exit}
  Pdom(everything else) = all nodes

Iterative computation:
  while change, do
    change = false
    for each BB (except the exit BB)
      tmp(BB) = BB + {intersection of pdom over all successor BBs}
      if (tmp(BB) != pdom(BB))
        pdom(BB) = tmp(BB)
        change = true
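Because post dominance is exactly dominance on the reversed CFG, the same dominator routine can be reused unchanged by flipping every edge and treating Exit as the entry. A sketch (same hypothetical example graph as before):

```python
def dominators(cfg, entry):
    """Iterative dominator-set computation over a successor map."""
    preds = {b: [] for b in cfg}
    for b, succs in cfg.items():
        for s in succs:
            preds[s].append(b)
    dom = {b: set(cfg) for b in cfg}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for b in cfg:
            if b == entry:
                continue
            tmp = {b} | set.intersection(*(dom[p] for p in preds[b]))
            if tmp != dom[b]:
                dom[b] = tmp
                changed = True
    return dom

def reverse_cfg(cfg):
    """Flip every edge; post dominance = dominance on the reversed CFG."""
    rev = {b: [] for b in cfg}
    for b, succs in cfg.items():
        for s in succs:
            rev[s].append(b)
    return rev

CFG = {"entry": ["b1"], "b1": ["b2", "b3"], "b2": ["b4"],
       "b3": ["b4"], "b4": ["b1", "exit"], "exit": []}
pdom = dominators(reverse_cfg(CFG), "exit")
print(sorted(pdom["entry"]))  # ['b1', 'b4', 'entry', 'exit']
```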

Post Dominator Examples

[Figure: two example CFGs - one with Entry, BB1-BB4, and Exit; one with Entry, BB1-BB7, and Exit]

Immediate Post Dominator

Defn: Immediate post dominator (ipdom) - Each node n has a unique immediate post dominator m that is the first post dominator of n on any path from n to the Exit.
- The closest node that post dominates n.
- The first breadth-first successor that post dominates the node.

[Figure: example CFG with Entry, BB1-BB7, Exit]

Why Do We Care About Dominators?

Loop detection - the next subject.

Dominator:
- Guaranteed to execute before.
- Redundant computation - an op is redundant if it is computed in a dominating BB.
- Most global optimizations use dominance info.

Post dominator:
- Guaranteed to execute after.
- Make a guess (e.g., that two pointers do not point to the same location), then verify the guess in the post dominating BB, which is guaranteed to execute.

[Figure: example CFG with Entry, BB1-BB7, Exit]

Natural Loops

A cycle suitable for optimization (optimizations discussed later).

Two properties:
- Single entry point, called the header; the header dominates all blocks in the loop.
- There must be a way to iterate the loop, i.e., at least one path back to the header from within the loop, called a backedge.

Backedge detection: an edge x -> y where the target (y) dominates the source (x).

Backedge Example

[Figure: example CFG with Entry, BB1-BB6, and Exit; find the backedges]

Loop Detection

Identify all backedges using dominator info; each backedge (x -> y) defines a loop:
- The loop header is the backedge target (y).
- Loop BBs - the basic blocks that comprise the loop: all predecessor blocks of x from which control can reach x without going through y.

Merge loops with the same header (e.g., a loop with two continues):
  LoopBackedge = LoopBackedge1 + LoopBackedge2
  LoopBB = LoopBB1 + LoopBB2

Important property: the header dominates all LoopBBs.
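Both steps above, backedge detection and collecting the loop body, are short worklist-style routines. A hedged sketch in Python (graph, names, and dominator helper are illustrative):

```python
def dominators(cfg, entry):
    """Iterative dominator-set computation over a successor map."""
    preds = {b: [] for b in cfg}
    for b, succs in cfg.items():
        for s in succs:
            preds[s].append(b)
    dom = {b: set(cfg) for b in cfg}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for b in cfg:
            if b == entry:
                continue
            tmp = {b} | set.intersection(*(dom[p] for p in preds[b]))
            if tmp != dom[b]:
                dom[b] = tmp
                changed = True
    return dom

def find_backedges(cfg, dom):
    """An edge x -> y is a backedge when the target y dominates x."""
    return [(x, s) for x in cfg for s in cfg[x] if s in dom[x]]

def natural_loop(cfg, x, header):
    """Worklist: the loop is the header plus every block that can reach
    the backedge source x without passing through the header."""
    preds = {b: [] for b in cfg}
    for b, succs in cfg.items():
        for s in succs:
            preds[s].append(b)
    loop, work = {header, x}, [x]
    while work:
        b = work.pop()
        if b == header:
            continue  # do not walk past the single entry point
        for p in preds[b]:
            if p not in loop:
                loop.add(p)
                work.append(p)
    return loop

CFG = {"entry": ["b1"], "b1": ["b2", "b3"], "b2": ["b4"],
       "b3": ["b4"], "b4": ["b1", "exit"], "exit": []}
dom = dominators(CFG, "entry")
print(find_backedges(CFG, dom))            # [('b4', 'b1')]
print(sorted(natural_loop(CFG, "b4", "b1")))  # ['b1', 'b2', 'b3', 'b4']
```

Merging loops with the same header is then just a union of the backedge lists and loop-body sets, as the slide states.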

Loop Detection Example

[Figure: example CFG with Entry, BB1-BB6, and Exit; identify the loops]

Important Parts of a Loop

Header, LoopBB
Backedges, BackedgeBB
Exit edges, ExitBB
- For each LoopBB, examine each outgoing edge; if the edge is to a BB not in LoopBB, then it is an exit.

Preheader (preloop):
- A new block before the header (falls through to the header).
- Whenever you invoke the loop, the preheader is executed; whenever you iterate the loop, the preheader is NOT executed.
- All edges entering the header: backedges - no change; all others - retarget to the preheader.

Postheader (postloop) - analogous.
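The edge-retargeting rule for preheaders can be sketched directly on the successor-map representation. A minimal sketch (graph and the "_pre" naming convention are invented for illustration):

```python
def insert_preheader(cfg, header, loop):
    """Retarget every edge into the header from outside the loop
    (i.e., every non-backedge) to a new preheader block that falls
    through to the header. Backedges are left unchanged."""
    pre = header + "_pre"  # hypothetical naming convention
    for b in cfg:
        if b not in loop:  # edges from inside the loop are backedges
            cfg[b] = [pre if s == header else s for s in cfg[b]]
    cfg[pre] = [header]    # preheader falls through to the header
    return cfg

CFG = {"entry": ["b1"], "b1": ["b2", "b3"], "b2": ["b4"],
       "b3": ["b4"], "b4": ["b1", "exit"], "exit": []}
insert_preheader(CFG, "b1", {"b1", "b2", "b3", "b4"})
print(CFG["entry"], CFG["b1_pre"], CFG["b4"])
# ['b1_pre'] ['b1'] ['b1', 'exit']
```

Note how the backedge b4 -> b1 still targets the header, so the preheader runs once per invocation but not per iteration, exactly the property the slide requires.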

Find the Preheaders for Each Loop

[Figure: example CFG with Entry, BB1-BB6, and Exit; identify where preheaders must be inserted]

Characteristics of a Loop

Nesting (generally within a procedure scope):
- Inner loop - a loop with no loops contained within it.
- Outer loop - a loop contained within no other loops.
- Nesting depth: depth(outer loop) = 1; depth = depth(parent or containing loop) + 1.

Trip count (average trip count):
- How many times (on average) does the loop iterate?
- for (I=0; I<100; I++)  =>  trip count = 100
- Average trip count = weight(header) / weight(preheader)
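The trip-count formula follows from how the preheader is built: it executes once per loop invocation, while the header executes once per iteration, so the ratio of profile weights gives the average iteration count. A tiny worked example (the weights are illustrative, loosely echoing the magnitudes on the next slide):

```python
def avg_trip_count(header_weight, preheader_weight):
    """Average iterations per invocation = weight(header) / weight(preheader)."""
    return header_weight / preheader_weight

# A loop invoked 20 times whose header executes 2100 times
# iterates 105 times per invocation on average.
print(avg_trip_count(2100, 20))  # 105.0
```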

Trip Count Calculation Example

Calculate the trip counts for all the loops in the graph.

[Figure: CFG with Entry, BB1-BB6, and Exit, annotated with profile weights (20, 360, 1000, 2100, 600, 480, 140, 1100, 360, 1340, 20)]

Reducible Flow Graphs

A flow graph is reducible if and only if we can partition the edges into two disjoint groups, often called forward and back edges, with the following properties:
- The forward edges form an acyclic graph in which every node can be reached from the Entry.
- The back edges consist only of edges whose destinations dominate their sources.

More simply: take a CFG and remove all the backedges (x -> y where y dominates x); you should be left with a connected, acyclic graph.

[Figure: a non-reducible example graph with blocks bb1, bb2, and bb3]
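The "more simply" test is directly checkable: strip the backedges, then verify the remainder is acyclic and still reaches every node from the entry. A hedged sketch (both example graphs are invented; the irreducible one has a two-block cycle where neither block dominates the other, so no edge qualifies as a backedge):

```python
def dominators(cfg, entry):
    """Iterative dominator-set computation over a successor map."""
    preds = {b: [] for b in cfg}
    for b, succs in cfg.items():
        for s in succs:
            preds[s].append(b)
    dom = {b: set(cfg) for b in cfg}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for b in cfg:
            if b == entry:
                continue
            tmp = {b} | set.intersection(*(dom[p] for p in preds[b]))
            if tmp != dom[b]:
                dom[b] = tmp
                changed = True
    return dom

def is_reducible(cfg, entry):
    """Delete every backedge (x -> y with y dominating x); reducible iff
    the remaining forward edges are acyclic and every node is still
    reachable from the entry."""
    dom = dominators(cfg, entry)
    fwd = {b: [s for s in cfg[b] if s not in dom[b]] for b in cfg}
    seen, on_path = set(), set()
    def dfs(b):
        seen.add(b)
        on_path.add(b)
        for s in fwd[b]:
            if s in on_path or (s not in seen and not dfs(s)):
                return False  # cycle among forward edges
        on_path.discard(b)
        return True
    return dfs(entry) and seen == set(cfg)

reducible = {"entry": ["b1"], "b1": ["b2"], "b2": ["b1", "exit"], "exit": []}
irreducible = {"bb1": ["bb2", "bb3"], "bb2": ["bb3"], "bb3": ["bb2"]}
print(is_reducible(reducible, "entry"), is_reducible(irreducible, "bb1"))
# True False
```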

Regions

Region: a collection of operations that are treated as a single unit by the compiler.

Examples: a basic block, a procedure, the body of a loop.

Properties:
- Connected subgraph of operations.
- Control flow is the key parameter that defines regions.
- Hierarchically organized.

Problem:
- Basic blocks are too small (3-5 operations), making it hard to extract sufficient parallelism.
- Procedure control flow is too complex for many compiler transformations; plus, only parts of a procedure are important (the 90/10 rule).

Regions (2)

Want:
- Intermediate-sized regions with simple control flow; bigger basic blocks would be ideal!!
- Separate important code from less important; optimize frequently executed code at the expense of the rest.

Solution:
- Define new region types that consist of multiple BBs.
- Profile information is used in the identification.
- Sequential control flow (sorta) - pretend the regions are basic blocks.

Region Type 1 - Trace

Trace - a linear collection of basic blocks that tend to execute in sequence: a "likely control flow path".
- Acyclic (an outer backedge is OK).
- Side entrance - a branch into the middle of a trace.
- Side exit - a branch out of the middle of a trace.

Compilation strategy:
- Compile assuming the path occurs 100% of the time.
- Patch up side entrances and exits afterwards.
- Motivated by scheduling (i.e., trace scheduling).

[Figure: example CFG with BB1-BB6 annotated with profile weights (10, 90, 80, 20, 80, 20, 10, 90, 10, 10)]
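A simplified greedy sketch of profile-driven trace growing: seed each trace with the hottest unvisited block, then extend it forward along the most likely unvisited successor. This is only loosely in the spirit of the Chang and Hwu paper on the reading list (real selectors also grow traces backward from the seed and stop at loop boundaries); the graph and weights below are invented for illustration:

```python
def select_traces(cfg, block_weight, edge_weight):
    """Greedy trace selection sketch: repeatedly seed with the hottest
    unvisited block and grow forward along the heaviest outgoing edge
    to an unvisited successor."""
    visited, traces = set(), []
    for seed in sorted(cfg, key=lambda b: -block_weight[b]):
        if seed in visited:
            continue
        trace, b = [seed], seed
        visited.add(seed)
        while True:
            succs = [s for s in cfg[b] if s not in visited]
            if not succs:
                break  # no way to extend without a side entrance
            b = max(succs, key=lambda s: edge_weight[(b, s)])
            trace.append(b)
            visited.add(b)
        traces.append(trace)
    return traces

cfg = {"b1": ["b2", "b3"], "b2": ["b4"], "b3": ["b4"], "b4": []}
block_weight = {"b1": 100, "b2": 80, "b3": 20, "b4": 100}
edge_weight = {("b1", "b2"): 80, ("b1", "b3"): 20,
               ("b2", "b4"): 80, ("b3", "b4"): 20}
print(select_traces(cfg, block_weight, edge_weight))
# [['b1', 'b2', 'b4'], ['b3']]
```

Marking blocks visited as they are consumed is what guarantees each block lands in exactly one trace; the cold path (b3) ends up as its own short trace.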

Linearizing a Trace

[Figure: the example trace laid out linearly, with its entry count (10), side-exit counts, side-entrance counts, and exit counts annotated on the edges]

Intelligent Trace Layout for Icache Performance

Intraprocedural code placement:
- Procedure positioning
- Procedure splitting

[Figure: procedure view vs. trace view - BB1-BB6 grouped into trace 1 (BB1, BB2), trace 2 (BB4, BB6), trace 3 (BB3, BB5), and "the rest"]