CSc 453 Interpreters & Interpretation


CSc 453: Interpreters & Interpretation
Saumya Debray
The University of Arizona, Tucson

CSc 453: Interpreters & Interpretation
An interpreter is a program that executes another program.
An interpreter implements a virtual machine (VM), which may be different from the underlying hardware platform. Examples: the Java Virtual Machine; VMs for high-level languages such as Scheme, Prolog, Icon, Smalltalk, Perl, and Tcl.
The virtual machine is often at a higher level of semantic abstraction than the native hardware. This helps portability, but hurts performance.

Interpretation vs. Compilation

Interpreter Operation

ip = start of program;
while ( not done ) {
    op = current operation at ip;
    execute code for op on current operands;
    advance ip to next operation;
}

Interpreter Design Issues
Encoding the operation, i.e., getting from the opcode to the code for that operation ("dispatch"):
  - byte code (e.g., JVM);
  - indirect threaded code;
  - direct threaded code.
Representing operands:
  - register machines: operations are performed on a fixed, finite set of global locations ("registers") (e.g., SPIM);
  - stack machines: operations are performed on the top of a stack of operands (e.g., JVM).

Byte Code
Operations are encoded as small integers (~1 byte). The interpreter uses the opcode to select the code for the operation:

while ( TRUE ) {
    byte op = *ip++;      /* fetch opcode, advance ip */
    switch ( op ) {
        case ADD: … perform addition; break;
        case SUB: … perform subtraction; break;
        …
    }
}

Advantages: simple, portable.
Disadvantages: inefficient.
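
The switch-based dispatch above can be fleshed out into a small, self-contained interpreter. The following sketch is not from the slides; the opcodes, instruction encoding, and toy program are invented purely for illustration.

/* Minimal byte-code interpreter sketch: a toy stack VM with four opcodes. */
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_SUB, OP_HALT };

int run(const unsigned char *code) {
    int stack[64], sp = 0;              /* operand stack */
    const unsigned char *ip = code;     /* instruction pointer */
    for (;;) {
        unsigned char op = *ip++;       /* fetch opcode, advance ip */
        switch (op) {
        case OP_PUSH: stack[sp++] = *ip++; break;   /* operand byte follows opcode */
        case OP_ADD:  sp--; stack[sp-1] += stack[sp]; break;
        case OP_SUB:  sp--; stack[sp-1] -= stack[sp]; break;
        case OP_HALT: return stack[sp-1];
        }
    }
}

int main(void) {
    /* computes (10 - 7) + 9 = 12 */
    unsigned char prog[] = { OP_PUSH, 10, OP_PUSH, 7, OP_SUB,
                             OP_PUSH, 9, OP_ADD, OP_HALT };
    printf("%d\n", run(prog));
    return 0;
}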

Direct Threaded Code
Indexing through a jump table (as with byte code) is expensive.
Idea: use the address of the code for an operation as the opcode for that operation. (The opcode may no longer fit in a byte.)

while ( TRUE ) {
    addr op = *ip++;
    goto *op;    /* jump to the code for the operation */
}

More efficient, but the interpreted code is less portable.
[James R. Bell. Threaded Code. Communications of the ACM, vol. 16 no. 6, June 1973, pp. 370–372]
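
For concreteness, here is a sketch of direct threaded dispatch written with GCC/Clang's "labels as values" extension (&&label and goto *); the handler names and the tiny program are made up for illustration. Note how each handler ends by jumping straight to the code for the next operation rather than returning to a central loop.

/* Direct threaded dispatch sketch; requires a compiler with computed goto. */
#include <stdio.h>

int run_threaded(void) {
    int stack[16], sp = 0;
    /* the "instructions" are simply addresses of the handler code */
    static void *program[] = { &&push10, &&push7, &&sub, &&halt };
    void **ip = program;

    goto *(*ip++);                              /* dispatch the first instruction */

push10: stack[sp++] = 10;  goto *(*ip++);       /* each handler ends by dispatching */
push7:  stack[sp++] = 7;   goto *(*ip++);
sub:    sp--; stack[sp-1] -= stack[sp]; goto *(*ip++);
halt:   return stack[sp-1];
}

int main(void) { printf("%d\n", run_threaded()); return 0; }   /* prints 3 */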

Byte Code vs. Threaded Code
Control flow behavior: with byte code, every operation returns to the central dispatch loop; with threaded code, each operation's handler jumps directly to the handler for the next operation.
(Figure contrasting the two control-flow patterns omitted from this transcript; based on: James R. Bell. Threaded Code. Communications of the ACM, vol. 16 no. 6, June 1973, pp. 370–372.)

Indirect Threaded Code
Intermediate between byte code and direct threaded code in portability and efficiency.
Each opcode is the address of a word that contains the address of the code for the operation. This avoids some of the costs associated with translating a jump-table index into a code address.

while ( TRUE ) {
    addr op = *ip++;
    goto **op;    /* jump to the code for the operation */
}

[R. B. K. Dewar. Indirect Threaded Code. Communications of the ACM, vol. 18 no. 6, June 1975, pp. 330–331]
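
A corresponding sketch of indirect threaded dispatch, again assuming GCC/Clang's computed-goto extension; the op-table layout and names are illustrative only. The only change from the direct threaded version is the extra level of indirection through the op-table word.

/* Indirect threaded dispatch sketch: each "instruction" is the address of an
 * op-table word, and the word holds the address of the handler code. */
#include <stdio.h>

int run_indirect(void) {
    int stack[16], sp = 0;
    static void *op_table[2];                                   /* one word per operation */
    static void **program[] = { &op_table[0], &op_table[1] };   /* push1; halt */
    op_table[0] = &&push1;       /* &&label is only usable inside this function */
    op_table[1] = &&halt;
    void ***ip = program;

    goto *(**ip++);              /* note the extra level of indirection */
push1: stack[sp++] = 1; goto *(**ip++);
halt:  return stack[sp-1];
}

int main(void) { printf("%d\n", run_indirect()); return 0; }    /* prints 1 */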

Example
Program: the operation sequence add, mul, sub. The handler code for each operation lives at these memory addresses:

    address   operation
    10000     add: …
    11000     sub: …
    12000     mul: …
    13000     div: …

Direct threaded code (each opcode is the handler's address):
    10000  12000  11000

Byte code (small-integer opcodes indexing an op table of code addresses):
    program:  17  23  18
    op table: index 17 → 10000, 18 → 11000, 23 → 12000, 27 → 13000

Indirect threaded code (each opcode is the address of an op-table word; the word holds the handler's address):
    program:  6000  6008  6004
    op table: word at 6000 → 10000, 6004 → 11000, 6008 → 12000, 6012 → 13000

Handling Operands 1: Stack Machines
Used by Pascal interpreters of the ’70s and ’80s; resurrected in the Java Virtual Machine.
Basic idea: operands and results are kept on a stack; operations pop their operands off this stack and push the result back onto it.
Advantages: simplicity; compact code.
Disadvantages: code optimization (e.g., using hardware registers effectively) is not easy.

Stack Machine Code
The code for an operation 'op x1, …, xn' is:

    push xn
    …
    push x1
    op

Example: JVM code for 'x = 2*y – 1':

    iconst 1    /* push the integer constant 1 */
    iload y     /* push the value of the integer variable y */
    iconst 2
    imul        /* after this, stack contains: (2*y), 1 */
    isub
    istore x    /* pop stack top, store to integer variable x */
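
To connect this with the interpreter loop, the following C sketch simulates what a stack-machine interpreter does for the instruction sequence above, using the slides' operand-order convention (the first operand sits on top of the stack). The push/pop helpers and variable names are invented for illustration.

#include <stdio.h>

int stack[16], sp = 0;
void push(int v) { stack[sp++] = v; }
int  pop(void)   { return stack[--sp]; }

int main(void) {
    int y = 5, x;
    push(1);                        /* iconst 1 */
    push(y);                        /* iload y  */
    push(2);                        /* iconst 2 */
    { int a = pop(), b = pop();     /* imul */
      push(a * b); }                /* stack now holds 2*y, 1 */
    { int a = pop(), b = pop();     /* isub: a = 2*y (top), b = 1 */
      push(a - b); }
    x = pop();                      /* istore x */
    printf("%d\n", x);              /* prints 9 when y = 5 */
    return 0;
}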

Generating Stack Machine Code
Essentially just a post-order traversal of the syntax tree:

void gencode( struct syntaxTreeNode *tnode )
{
    if ( IsLeaf( tnode ) ) {
        …
    }
    else {
        n = tnode->n_operands;
        for (i = n; i > 0; i--) {
            gencode( tnode->operand[i] );        /* traverse children first */
        }  /* for */
        gen_instr( opcode_table[tnode->op] );    /* then generate code for the node */
    }  /* if-else */
}
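
A runnable version of this post-order scheme for simple expression trees is sketched below; the node layout, opcode names, and textual output format are invented for illustration and are not the course's actual data structures.

#include <stdio.h>

struct node {
    char op;                    /* '+', '-', '*', '/' for interior nodes; 0 for leaves */
    int  value;                 /* leaf value */
    struct node *left, *right;
};

void gencode(struct node *t) {
    if (t->op == 0) {                       /* leaf: just push the operand */
        printf("push %d\n", t->value);
        return;
    }
    gencode(t->right);                      /* children first (right, then left, so   */
    gencode(t->left);                       /* the first operand ends up on top)      */
    switch (t->op) {                        /* then the operator itself */
    case '+': printf("add\n"); break;
    case '-': printf("sub\n"); break;
    case '*': printf("mul\n"); break;
    case '/': printf("div\n"); break;
    }
}

int main(void) {
    /* tree for (10 - 7) * 9 */
    struct node n10 = {0, 10, 0, 0}, n7 = {0, 7, 0, 0}, n9 = {0, 9, 0, 0};
    struct node sub = {'-', 0, &n10, &n7};
    struct node mul = {'*', 0, &sub, &n9};
    gencode(&mul);              /* emits: push 9, push 7, push 10, sub, mul */
    return 0;
}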

Handling Operands 2: Register Machines
Basic idea: have a fixed set of "virtual machine registers"; some of these get mapped to hardware registers.
Advantages: potentially better optimization opportunities.
Disadvantages: code is less compact; the interpreter becomes more complex (e.g., it must decode VM register names).

Just-in-Time Compilers (JITs)
Basic idea: compile byte code to native code during execution.
Advantages: the original (interpreted) program remains portable and compact; the executing program runs faster.
Disadvantages: more complex runtime system; performance may degrade if the runtime compilation cost exceeds the savings.

Improving JIT Effectiveness
Reducing costs: incur compilation cost only when it is justifiable; invoke the JIT compiler on a per-method basis, at the point where a method is invoked.
Improving benefits: some systems monitor the executing code; methods that are executed repeatedly get optimized further.

Method Dispatch: vtables
vtables ("virtual tables") are a common implementation mechanism for virtual methods in OO languages. The implementation of each class contains a vtable for its methods:
  - each virtual method f in the class has an entry in the vtable that gives f's address;
  - each object instance gets a pointer to the corresponding vtable;
  - to invoke a (virtual) method, get its address from the vtable.
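
The same mechanism can be sketched in plain C with a struct of function pointers; the shape/area types and method names below are invented for illustration.

#include <stdio.h>

struct shape;                                       /* forward declaration */
struct shape_vtable {
    double (*area)(struct shape *self);             /* one slot per virtual method */
};
struct shape {
    const struct shape_vtable *vtable;              /* every instance points to its class's vtable */
    double w, h;
};

static double rect_area(struct shape *s)     { return s->w * s->h; }
static double triangle_area(struct shape *s) { return 0.5 * s->w * s->h; }

static const struct shape_vtable rect_vt     = { rect_area };
static const struct shape_vtable triangle_vt = { triangle_area };

int main(void) {
    struct shape r = { &rect_vt, 3, 4 };
    struct shape t = { &triangle_vt, 3, 4 };
    /* virtual call: fetch the method's address from the vtable, then call it */
    printf("%.1f %.1f\n", r.vtable->area(&r), t.vtable->area(&t));   /* 12.0 6.0 */
    return 0;
}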

VMs with JITs
Each method has vtable entries for:
  - its byte code address;
  - its native code address.
Initially, the native code address field points to the JIT compiler, so the first invocation of a method automatically calls the JIT compiler. The JIT compiler then:
  - generates native code from the method's byte code;
  - patches the method's native code address to point to the newly generated native code (so subsequent calls go directly to native code);
  - jumps to the native code.
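
The patch-on-first-call trick can be illustrated with ordinary function pointers standing in for generated code; everything below (the names, the fake "compiler") is invented for illustration.

#include <stdio.h>

struct method {
    const char *name;
    const char *bytecode;                            /* stands in for the method's byte code */
    int (*native)(struct method *self, int arg);     /* "native code address" slot */
};

static int run_native(struct method *m, int arg) {   /* pretend compiled code */
    return arg + 1;
}

/* First-call stub: "compile" the method, patch the slot, then run it. */
static int jit_compile_stub(struct method *m, int arg) {
    printf("JIT-compiling %s\n", m->name);           /* a real JIT would read m->bytecode here */
    m->native = run_native;                          /* patch the native-code entry point */
    return m->native(m, arg);                        /* jump to the native code */
}

int main(void) {
    struct method inc = { "inc", "<bytecode>", jit_compile_stub };
    printf("%d\n", inc.native(&inc, 41));   /* first call: compiles, then runs */
    printf("%d\n", inc.native(&inc, 41));   /* later calls go straight to native code */
    return 0;
}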

JITs: Deciding What to Compile
For a JIT to improve performance, the benefit of compiling to native code must offset the cost of doing so. E.g., JIT-compiling infrequently called straight-line methods can lead to a slowdown!
We want to JIT-compile only those methods that contain frequently executed ("hot") code:
  - methods that are called a large number of times; or
  - methods containing loops with large iteration counts.

JITs: Deciding What to Compile
Identifying frequently called methods:
  - count the number of times each method is invoked;
  - if the count exceeds a threshold, invoke the JIT compiler.
  (In practice, start the count at the threshold value and count down to 0; this is slightly more efficient.)
Identifying hot loops:
  - modify the interpreter to "snoop" the loop iteration count when it finds a loop, using simple byte-code pattern matching;
  - use the iteration count to adjust the amount by which the method's invocation count is decremented on each call.
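
A sketch of the count-down heuristic, with the loop-derived extra decrement folded in; the threshold, names, and structure below are illustrative only.

#include <stdio.h>
#include <stdbool.h>

#define JIT_THRESHOLD 1000

struct method {
    const char *name;
    int countdown;               /* starts at JIT_THRESHOLD, counts down to 0 */
    bool compiled;
};

static void jit_compile(struct method *m) {          /* stand-in for the real JIT */
    printf("compiling %s to native code\n", m->name);
    m->compiled = true;
}

/* Called by the interpreter on every invocation of m.  loop_bonus is the extra
 * decrement derived from snooping the method's loop iteration counts. */
static void count_invocation(struct method *m, int loop_bonus) {
    if (m->compiled)
        return;
    m->countdown -= 1 + loop_bonus;
    if (m->countdown <= 0)       /* counting down to 0 is slightly cheaper than counting up */
        jit_compile(m);
}

int main(void) {
    struct method m = { "hot_method", JIT_THRESHOLD, false };
    for (int i = 0; i < 2000; i++)
        count_invocation(&m, 0); /* gets compiled on the 1000th call */
    return 0;
}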

Typical JIT Optimizations
Choose optimizations that are cheap to perform and likely to improve performance, e.g.:
  - inlining of frequently called methods: consider both code size and call frequency in the inlining decision;
  - exception check elimination: eliminate redundant null-pointer checks and array-bounds checks;
  - common subexpression elimination: avoid address recomputation; reduce the overhead of array and instance-variable accesses;
  - simple, fast register allocation.