
2 APARAPI: Java™ platform's 'Write Once Run Anywhere'® now includes the GPU | Gary Frost, AMD PMTS, Java Runtime Team | June 2011

3 AGENDA  The age of heterogeneous computing is here  The supercomputer in your desktop/laptop  Why Java™?  Current GPU programming options for Java developers  Are developers likely to adopt emerging Java OpenCL™/CUDA™ bindings?  Aparapi – What is it – How it works  Performance  Examples/Demos  Proposed Enhancements  Future work

4 THE AGE OF HETEROGENEOUS COMPUTE IS HERE  GPUs originally developed to accelerate graphics operations  Early adopters repurposed their GPUs for 'general compute' by performing 'unnatural acts' with shader APIs  OpenGL allowed shaders/textures to be compiled and executed via extensions  OpenCL™/GLSL/CUDA™ standardized/formalized how to express GPU compute and simplified host programming  New programming models are emerging and lowering barriers to adoption

5 THE SUPERCOMPUTER IN YOUR DESKTOP  Some interesting tidbits from the Top500 list –November 2000 "ASCI White is new #1 with 4.9 TFlops on the Linpack" –November 2002 "3.2 TFlops are needed to enter the top 10"  May 2011 –AMD Radeon™ TFlops single precision performance

6 WHY JAVA?  One of the most widely used programming languages –http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html  Established in domains likely to benefit from heterogeneous compute –BigData, Search, Hadoop+Pig, Finance, GIS, Oil & Gas  Even if applications are not implemented in Java, they may still run on the Java Virtual Machine (JVM) –JRuby, Jython, Scala, Clojure, Quercus (PHP)  Acts as a good proxy/indicator for enablement of other runtimes/interpreters –JavaScript, Flash, .NET, PHP, Python, Ruby, Dalvik?

7 GPU PROGRAMMING OPTIONS FOR JAVA PROGRAMMERS
 Emerging Java GPU APIs require coding a 'Kernel' in a domain-specific language

// JOCL/OpenCL kernel code
__kernel void squares(__global const float *in, __global float *out){
   int gid = get_global_id(0);
   out[gid] = in[gid] * in[gid];
}

 As well as writing the Java 'host' CPU-based code to:
–Initialize the data
–Select/Initialize execution device
–Allocate or define memory buffers for args/parameters
–Compile the 'Kernel' for a selected device
–Enqueue/Send arg buffers to device
–Execute the kernel
–Read results buffers back from the device
–Cleanup (remove buffers/queues/device handles)
–Use the results

import static org.jocl.CL.*;
import org.jocl.*;

public class Sample {
   public static void main(String args[]) {
      // Create input and output data
      int size = 10;
      float inArr[] = new float[size];
      float outArray[] = new float[size];
      for (int i = 0; i < size; i++) {
         inArr[i] = i;
      }
      // ... device/context selection, buffer allocation, kernel compilation,
      // enqueue of buffers and kernel, readback of results and cleanup follow
   }
}
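For a sense of scale, here is a hedged sketch of roughly what those remaining host-side steps look like with the JOCL bindings. The exact signatures and flags are reproduced from memory and may differ slightly from the JOCL release; error handling is omitted, and kernelSource is an assumed String holding the __kernel code above.

   // Select a platform and a GPU device, then create a context and command queue
   cl_platform_id[] platforms = new cl_platform_id[1];
   clGetPlatformIDs(1, platforms, null);
   cl_device_id[] devices = new cl_device_id[1];
   clGetDeviceIDs(platforms[0], CL_DEVICE_TYPE_GPU, 1, devices, null);
   cl_context context = clCreateContext(null, 1, devices, null, null, null);
   cl_command_queue queue = clCreateCommandQueue(context, devices[0], 0, null);

   // Allocate device buffers for the kernel args
   cl_mem inMem = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
         Sizeof.cl_float * size, Pointer.to(inArr), null);
   cl_mem outMem = clCreateBuffer(context, CL_MEM_WRITE_ONLY,
         Sizeof.cl_float * size, null, null);

   // Compile the kernel source for the selected device
   cl_program program = clCreateProgramWithSource(context, 1, new String[]{ kernelSource }, null, null);
   clBuildProgram(program, 0, null, null, null, null);
   cl_kernel kernel = clCreateKernel(program, "squares", null);

   // Bind args, enqueue execution, read the results back
   clSetKernelArg(kernel, 0, Sizeof.cl_mem, Pointer.to(inMem));
   clSetKernelArg(kernel, 1, Sizeof.cl_mem, Pointer.to(outMem));
   clEnqueueNDRangeKernel(queue, kernel, 1, null, new long[]{ size }, null, 0, null, null);
   clEnqueueReadBuffer(queue, outMem, CL_TRUE, 0, Sizeof.cl_float * size,
         Pointer.to(outArray), 0, null, null);

   // Cleanup
   clReleaseMemObject(inMem);
   clReleaseMemObject(outMem);
   clReleaseKernel(kernel);
   clReleaseProgram(program);
   clReleaseCommandQueue(queue);
   clReleaseContext(context);

Even for a one-line kernel, the host side is dozens of lines of ceremony, which is the point the next slide makes about adoption.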

8 ARE DEVELOPERS LIKELY TO ADOPT EMERGING JAVA OPENCL/CUDA BINDINGS?  Some will –Early adopters –Prepared to learn new languages –Motivated to squeeze all the performance they can from available compute devices –Prepared to implement algorithms both in Java and in CUDA/OpenCL  Many won't –OpenCL/CUDA C99 heritage likely to disenfranchise Java developers  Many walked away from C/C++ or possibly never encountered it at all (due to CS education shifts)  Difficulties exposing low-level concepts (such as the GPU memory model) to developers who have 'moved on' and just expect the JVM to 'do the right thing'  Who pays for retraining of Java developers? –Notion of writing code twice (once for Java execution, again for the GPU/APU) is alien  Where's my 'Write Once, Run Anywhere'?

9 WHAT IS APARAPI?  An API for expressing data parallel workloads in Java –Developer extends a Kernel base class –Compiles to Java bytecode using the existing tool chain –Uses existing/familiar Java tool chain to debug the logic of their Kernel implementations  A runtime component capable of either: –Executing Kernel via a Java Thread Pool –Converting Kernel bytecode to OpenCL and executing on the GPU
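A minimal sketch of that programming model, assembled from the API the later slides show (Kernel subclass, run(), getGlobalId(), execute(int)). The import and the getExecutionMode() diagnostic at the end are assumptions about the alpha release rather than anything this deck states:

   import com.amd.aparapi.Kernel;        // package name assumed for the alpha release

   final float[] in = new float[4096];
   final float[] out = new float[4096];

   Kernel kernel = new Kernel(){
      @Override public void run(){
         int i = getGlobalId();          // index of this work-item
         out[i] = in[i] * in[i];         // data-parallel body
      }
   };
   kernel.execute(in.length);            // GPU if the bytecode converts, thread pool otherwise
   System.out.println("Ran as: " + kernel.getExecutionMode());   // assumed diagnostic accessor

The same source compiles, debugs and unit-tests as ordinary Java, which is the 'Write Once' claim in the title.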

10 AN EMBARRASSINGLY PARALLEL USE CASE
 First let's revisit our earlier code example – calculate square[0..size] for a given input in[0..size]

final int[] square = new int[size];
final int[] in = new int[size];
// populating in[0..size] omitted
for (int i = 0; i < size; i++){
   square[i] = in[i] * in[i];
}

11 REFACTORING OUR EXAMPLE TO USE APARAPI

final int[] square = new int[size];
final int[] in = new int[size];
// populating in[0..size] omitted
for (int i = 0; i < size; i++){
   square[i] = in[i] * in[i];
}

Kernel kernel = new Kernel(){
   @Override public void run(){
      int i = getGlobalId();
      square[i] = in[i] * in[i];
   }
};
kernel.execute(size);

12 EXPRESSING DATA PARALLEL IN APARAPI
 What happens when we call execute(n)?

Kernel kernel = new Kernel(){
   @Override public void run(){
      int i = getGlobalId();
      square[i] = in[i] * in[i];
   }
};
kernel.execute(size);

13 FIRST CALL OF KERNEL.EXECUTE(SIZE) WHEN OPENCL/GPU IS AVAILABLE  Reload classfile via classloader and locate all methods and fields  For the 'run()' method and all methods reachable from 'run()' –Convert method bytecode to an IR  Expression trees  Conditional sequences analyzed and converted to if{}, if{}else{} and for{} constructs –Create a list of fields accessed by the bytecode  Note the access type (read/write/read+write)  Accessed fields will be turned into args and passed to the generated OpenCL  Create an OpenCL buffer for each accessed primitive array (read, write or read+write) –Create and compile OpenCL  Bail and revert to the Java Thread Pool if we encounter any issues in the previous steps
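As an illustration of that bail-out path, a kernel like the following would presumably fail bytecode-to-OpenCL conversion in the alpha (run() allocates objects and calls into java.lang.String, for which no OpenCL equivalent exists) and would therefore execute via the thread pool instead. This is an assumed example, not one from the deck:

   final String[] labels = new String[1024];
   Kernel notConvertible = new Kernel(){
      @Override public void run(){
         int i = getGlobalId();
         // Object allocation and String operations have no OpenCL mapping,
         // so per the bail-out step above this kernel reverts to the thread pool.
         labels[i] = "item-" + Integer.toString(i);
      }
   };
   notConvertible.execute(labels.length);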

14 ALL CALLS OF KERNEL.EXECUTE(SIZE) WHEN OPENCL/GPU IS AVAILABLE  Lock any accessed primitive arrays (so the garbage collector doesn't move or collect them)  For each field readable by the kernel: –If field is an array → enqueue a buffer write –If field is scalar → set kernel arg value  Enqueue Kernel execution  For each array writeable by the kernel: –Enqueue a buffer read  Wait for all enqueued requests to complete  Unlock accessed primitive arrays
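A hedged illustration of that classification: in a kernel like the one below, the float[] field is moved with buffer writes/reads around each execute(), while the scalar field is simply set as a kernel argument. The field names are invented for the example; the captured finals become fields of the anonymous Kernel subclass:

   final float[] samples = new float[8192];  // array field → OpenCL buffer (written before, read back after, since it is mutated)
   final float gain = 1.5f;                  // scalar field → passed as a plain kernel argument

   Kernel scale = new Kernel(){
      @Override public void run(){
         int i = getGlobalId();
         samples[i] = samples[i] * gain;     // samples is read+write, gain is read-only
      }
   };
   scale.execute(samples.length);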

15 KERNEL.EXECUTE(SIZE) WHEN OPENCL/GPU IS NOT AN OPTION  Create a thread pool  One thread per core  Clone the Kernel once for each thread  Each Kernel accessed exclusively from a single thread  Each Kernel loops globalSize/threadCount times  Update globalId, localId, groupSize, globalSize on the Kernel instance  Execute the run() method on the Kernel instance  Wait for all threads to complete
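A rough, plain-Java sketch of the dispatch just described, not Aparapi's actual implementation: it ignores kernel cloning and the localId/groupSize bookkeeping, and 'body' stands in for the Kernel's run() method parameterised by global id.

   static void executeOnPool(int globalSize, java.util.function.IntConsumer body)
         throws InterruptedException {
      int threads = Runtime.getRuntime().availableProcessors();     // one thread per core
      java.util.concurrent.ExecutorService pool =
            java.util.concurrent.Executors.newFixedThreadPool(threads);
      java.util.concurrent.CountDownLatch done = new java.util.concurrent.CountDownLatch(threads);
      int perThread = globalSize / threads;                         // each "kernel clone" loops this many times
      for (int t = 0; t < threads; t++) {
         final int start = t * perThread;
         pool.submit(() -> {
            for (int id = start; id < start + perThread; id++) {
               body.accept(id);                                     // run() with globalId = id
            }
            done.countDown();
         });
      }
      done.await();                                                 // wait for all threads to complete
      pool.shutdown();
      // (remainder handling when globalSize is not a multiple of threads is omitted)
   }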

16 ADOPTION CHALLENGES (APARAPI VS EMERGING JAVA GPU BINDINGS)

Task | Emerging GPU bindings | Aparapi
Learn OpenCL/CUDA | DIFFICULT | N/A
Locate potential data parallel opportunities | MEDIUM | MEDIUM
Refactor existing code/data structures | MEDIUM | MEDIUM
Create Kernel code | DIFFICULT | EASY
Create code to coordinate execution and buffer transfers | MEDIUM | N/A
Identify GPU performance bottlenecks | DIFFICULT | DIFFICULT
Iterate code/debug algorithm logic | DIFFICULT | MEDIUM
Solve build/deployment issues | DIFFICULT | MEDIUM

17 MANDELBROT EXAMPLE

new Kernel(){
   @Override public void run() {
      int gid = getGlobalId();
      float x = (((gid % w) - (w / 2)) / (float) w);   // map gid to x in the complex plane
      float y = (((gid / w) - (h / 2)) / (float) h);   // map gid to y in the complex plane
      float zx = x, zy = y, new_zx = 0f;
      int count = 0;
      while (count < maxIterations && zx * zx + zy * zy < 8) {
         new_zx = zx * zx - zy * zy + x;
         zy = 2 * zx * zy + y;
         zx = new_zx;
         count++;
      }
      rgb[gid] = pallette[count];
   }
}.execute(width * height);
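To round out the example, the rgb[] array the kernel fills can be pushed into an image with plain Java 2D. A small usage sketch (width, height and rgb are the same names the kernel uses; the rest is ordinary AWT, with exception handling omitted):

   import java.awt.image.BufferedImage;
   import javax.imageio.ImageIO;
   import java.io.File;

   BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
   // Copy the kernel's output pixels into the image in one call
   image.setRGB(0, 0, width, height, rgb, 0, width);
   ImageIO.write(image, "png", new File("mandelbrot.png"));   // or hand the image to a Swing component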

18 EXPRESSING DATA PARALLEL IN JAVA WITH APARAPI BY EXTENDING KERNEL

class SquareKernel extends Kernel{
   final int[] in, square;
   public SquareKernel(final int[] in){
      this.in = in;
      this.square = new int[in.length];
   }
   @Override public void run(){
      int i = getGlobalId();
      square[i] = in[i] * in[i];
   }
   public int[] square(){
      execute(in.length);
      return square;
   }
}

int[] in = new int[size];
SquareKernel squareKernel = new SquareKernel(in);
// populating in[0..size] omitted
int[] result = squareKernel.square();

The square() method 'wraps' the execution mechanics and provides a more natural Java API.

19 EXPRESSING DATA PARALLELISM IN APARAPI USING PROPOSED JAVA 8 LAMBDAS
 JSR 335 'Project Lambda' proposes the addition of 'lambda' expressions to Java 8.
 How we expect Aparapi will make use of the proposed Java 8 extensions

final int[] square = new int[size];
final int[] in = new int[size];
// populating in[0..size] omitted
Kernel.execute(size, #{ i -> square[i] = in[i] * in[i]; });

20 HOW APARAPI EXECUTES ON THE GPU
 At runtime Aparapi converts Java bytecode to OpenCL
 The OpenCL compiler converts OpenCL to device-specific ISA (for GPU/APU)
 The GPU is comprised of multiple SIMD (Single Instruction Multiple Data) cores
 SIMD performance stems from executing the same instruction on different data at the same time –Think single program counter shared across multiple threads –All SIMD lanes executing at the same time (in lock-step)

new Kernel(){
   @Override public void run(){
      int i = getGlobalId();
      int temp = in[i] * 2;
      temp = temp + 1;
      out[i] = temp;
   }
}.execute(4);

Lock-step trace for execute(4):
i=0: int temp=in[0]*2 | i=1: int temp=in[1]*2 | i=2: int temp=in[2]*2 | i=3: int temp=in[3]*2
temp=temp+1 (all lanes)
out[0]=temp | out[1]=temp | out[2]=temp | out[3]=temp

21 DEVELOPER IS RESPONSIBLE FOR ENSURING PROBLEM IS DATA PARALLEL
 Data dependencies may violate the 'in any order' contract

for (int i = 1; i < 100; i++){
   out[i] = out[i-1] + in[i];
}

new Kernel(){
   @Override public void run(){
      int i = getGlobalId();
      out[i] = out[i-1] + in[i];
   }
}.execute(100);

out[i-1] refers to a value resulting from a previous iteration which may not have been evaluated yet.

 Loops mutating shared data will need to be refactored or will necessitate atomic operations

for (int i = 0; i < 100; i++){
   sum += in[i];
}

new Kernel(){
   @Override public void run(){
      int i = getGlobalId();
      sum += in[i];
   }
}.execute(100);

sum += x causes a race condition: it is almost certainly not atomic when translated to OpenCL, and it is not safe in multi-threaded Java either.

22 SOMETIMES WE CAN REFACTOR TO RECOVER SOME PARALLELISM

Original sequential loop:
for (int i = 0; i < 100; i++){
   sum += in[i];
}

Naive (racy) kernel:
new Kernel(){
   @Override public void run(){
      int i = getGlobalId();
      sum += in[i];
   }
}.execute(100);

Sequential form of the refactor (partial sums):
for (int n = 0; n < 10; n++){
   for (int i = 0; i < 10; i++){
      partial[n] += data[n*10 + i];
   }
}
for (int i = 0; i < 10; i++){
   sum += partial[i];
}

Refactored kernel (each work-item accumulates its own partial sum), followed by a small host loop:
new Kernel(){
   @Override public void run(){
      int n = getGlobalId();
      for (int i = 0; i < 10; i++){
         partial[n] += data[n*10 + i];
      }
   }
}.execute(10);
for (int i = 0; i < 10; i++){
   sum += partial[i];
}

23 TRY TO AVOID BRANCHING WHEREVER POSSIBLE
 SIMD performance is impacted when code contains branches –To stay in lock-step, SIMD lanes must process both the 'then' and 'else' blocks –The result of the 'condition' is used to predicate instructions (conditionally mask them to a no-op)

new Kernel(){
   @Override public void run(){
      int i = getGlobalId();
      int temp = in[i] * 2;
      if (i % 2 == 0)
         temp = temp + 1;
      else
         temp = temp - 1;
      out[i] = temp;
   }
}.execute(4);

Lock-step trace for execute(4):
i=0: temp=in[0]*2 | i=1: temp=in[1]*2 | i=2: temp=in[2]*2 | i=3: temp=in[3]*2
condition: (0%2==0) true | (1%2==0) false | (2%2==0) true | (3%2==0) false
then-branch temp=temp+1 (lanes 0 and 2 active; 1 and 3 masked)
else-branch temp=temp-1 (lanes 1 and 3 active; 0 and 2 masked)
out[0]=temp | out[1]=temp | out[2]=temp | out[3]=temp
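One way to apply this advice to the kernel above (an illustrative rewrite, not something Aparapi performs for you) is to fold the condition into arithmetic so every lane executes the same instruction stream:

   new Kernel(){
      @Override public void run(){
         int i = getGlobalId();
         int temp = in[i] * 2;
         // 1 - 2*(i % 2) evaluates to +1 for even ids and -1 for odd ids, so no if/else is needed
         temp = temp + (1 - 2 * (i % 2));
         out[i] = temp;
      }
   }.execute(4);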

24 CHARACTERISTICS OF IDEAL DATA PARALLEL WORKLOADS  Code which iterates over large arrays of primitives –32/64 bit data types preferred –Where the order of iteration is not critical  Avoid data dependencies between iterations –Each iteration contains sequential code (few branches)  Good balance between data size (low) and compute (high) –Transfer of data to/from the GPU can be costly  Although APUs likely to mitigate this over time –Trivial compute often not worth the transfer cost –May still benefit by freeing up CPU for other work [Chart: compute vs. data size; the ideal region is high compute on data that fits in GPU memory]

25 APARAPI NBODY EXAMPLE  NBody is a common OpenCL/CUDA benchmark/demo –For each particle/body  Calculate the new position based on the gravitational force exerted on each body by every other body  Essentially an N² problem –If we double the number of bodies, we perform four times the positional calculations  Following charts compare –Naïve Java version (single loop) –Aparapi version using a Java Thread Pool –Aparapi version running on the GPU (ATI Radeon™ 5870)
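A hedged sketch of the per-body update such a kernel typically performs. The physics is simplified and the field names (xyz, vel, bodies, mass, softening, delT) are invented for the illustration; this is not the benchmark's exact code.

   // xyz holds packed positions (x,y,z per body), vel holds packed velocities.
   new Kernel(){
      @Override public void run(){
         int body = getGlobalId();
         float ax = 0f, ay = 0f, az = 0f;
         for (int i = 0; i < bodies; i++){                   // O(N) work per body → O(N^2) overall
            float dx = xyz[i*3]   - xyz[body*3];
            float dy = xyz[i*3+1] - xyz[body*3+1];
            float dz = xyz[i*3+2] - xyz[body*3+2];
            float invDist = 1f / (float) Math.sqrt(dx*dx + dy*dy + dz*dz + softening);
            float s = mass * invDist * invDist * invDist;    // ~ m / r^3
            ax += dx * s; ay += dy * s; az += dz * s;
         }
         vel[body*3]   += ax * delT;
         vel[body*3+1] += ay * delT;
         vel[body*3+2] += az * delT;
         xyz[body*3]   += vel[body*3]   * delT;
         xyz[body*3+1] += vel[body*3+1] * delT;
         xyz[body*3+2] += vel[body*3+2] * delT;
         // (a real implementation double-buffers positions so lanes don't read already-updated values)
      }
   }.execute(bodies);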

26 APARAPI NBODY PERFORMANCE [Chart: frame rate (frames per second) vs. number of bodies]

27 NBODY PERFORMANCE [Chart: position calculations per µs vs. number of bodies/particles]

28 APARAPI EXPLICIT BUFFER MANAGEMENT
 This code demonstrates a fairly common pattern: a Kernel executed inside a loop.

int[] buffer = new int[HUGE];
int[] unusedBuffer = new int[HUGE];
Kernel k = new Kernel(){
   @Override public void run(){
      // mutates buffer contents
      // no reference to unusedBuffer
   }
};
for (int i = 0; i < 1000; i++){
   // transfer buffer to GPU
   k.execute(HUGE);
   // transfer buffer from GPU
}

Although Aparapi analyzes kernel methods to optimize host buffer transfer requests, it has no knowledge of buffer accesses from the enclosing loop. Aparapi must assume that the buffer is modified between invocations. This forces (possibly unnecessary) buffer copies to and from the device for each invocation of Kernel.execute(int).

29 APARAPI EXPLICIT BUFFER MANAGEMENT
 Using the new explicit buffer management APIs

int[] buffer = new int[HUGE];
Kernel k = new Kernel(){
   @Override public void run(){
      // mutates buffer contents
   }
};
k.setExplicit();
k.put(buffer);
for (int i = 0; i < 1000; i++){
   k.execute(HUGE);
}
k.get(buffer);

 Developer takes control (of all buffer transfers) by marking the kernel as explicit
 Then coordinates when/if transfers take place
 Here we save 999 buffer writes and 999 buffer reads
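Building on the same put()/get() calls the slide introduces, a hedged sketch of the mixed case: keep the buffer resident on the device but pull a snapshot back every so often. The snapshot interval and the reportProgress() helper are invented for the example.

   k.setExplicit();           // as above: developer owns all transfers
   k.put(buffer);             // one initial write to the device
   for (int i = 0; i < 1000; i++){
      k.execute(HUGE);
      if (i % 100 == 99){
         k.get(buffer);            // occasional read back for progress reporting / checkpointing
         reportProgress(buffer);   // hypothetical host-side helper
      }
   }
   k.get(buffer);             // final read of the results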

30 APARAPI EXPLICIT BUFFER MANAGEMENT
 A possible alternative might be to expose the 'host' code to Aparapi

int[] buffer = new int[HUGE];
Kernel k = new Kernel(){
   @Override public void run(){
      // mutates buffer contents
   }
   public void host(){
      for (int i = 0; i < 1000; i++){
         execute(HUGE);
      }
   }
};
k.host();

 Developer exposes the host code to Aparapi by overriding the host() method.
 By analyzing the bytecode of host(), Aparapi can determine when/if buffers are mutated and can 'inject' appropriate put()/get() requests behind the scenes.

31 APARAPI BITONIC SORT WITH EXPLICIT BUFFER MANAGEMENT  Bitonic mergesort is a parallel-friendly 'in place' sorting algorithm –http://en.wikipedia.org/wiki/Bitonic_sorter  On 10/18/2010 the following post appeared on the Aparapi forums: "Aparapi 140x slower than single thread Java?! what am I doing wrong?" –Source code (for Bitonic Sort) was included in the post  An Aparapi Kernel (for each sort pass) executed inside a Java loop.  Aparapi was forcing unnecessary buffer copies.  Following chart compares: –Single threaded Java version –Aparapi/GPU version without explicit buffer management (default AUTO mode) –Aparapi/GPU version with the recent explicit buffer management feature enabled.  Both Aparapi versions running on ATI Radeon™ 5870.

32 EXPLICIT BUFFER MANAGEMENT EFFECT ON BITONIC SORT IMPLEMENTATION [Chart: sort time (ms) for single-threaded Java vs. Aparapi/GPU (AUTO) vs. Aparapi/GPU (explicit buffer management)]

33 PROPOSED APARAPI ENHANCEMENT: ALLOW ACCESS TO ARRAYS OF OBJECTS
 A Java developer implementing an 'nbody' solution would probably define a class for each particle

public class Particle{
   private int x, y, z;
   private String name;
   private Color color;
   //...
}

 … would make all fields private and limit access via setters/getters

public void setX(int x){ this.x = x; }
public int getX(){ return this.x; }
// same for y, z, name etc.

 … and expect to create a Kernel to update positions for an array of such particles

Particle[] particles = new Particle[1024];
ParticleKernel kernel = new ParticleKernel(particles);
while (displaying){
   kernel.execute(particles.length);
   updateDisplayPositions(particles);
}
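For concreteness, a hedged sketch of what that ParticleKernel might look like once the proposed enhancement lands. The slide does not show this class's body, so the accessors used and the positioning rule are invented for the illustration:

   class ParticleKernel extends Kernel{
      final Particle[] particles;
      ParticleKernel(Particle[] particles){ this.particles = particles; }
      @Override public void run(){
         int i = getGlobalId();
         Particle p = particles[i];
         // Illustrative update only: nudge each particle along x via its accessors.
         // The proposal is that Aparapi extracts these field accesses into parallel arrays.
         p.setX(p.getX() + 1);
      }
   }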

34 PROPOSED APARAPI ENHANCEMENT: ALLOW ACCESS TO ARRAYS OF OBJECTS
 Unfortunately the current 'alpha' version of Aparapi would fail to convert this kernel to OpenCL
 It would fall back to using a Thread Pool
 Aparapi currently requires that the previous code be refactored so that data is held in primitive arrays

int[] x = new int[1024];
int[] y = new int[1024];
int[] z = new int[1024];
Color[] color = new Color[1024];
String[] name = new String[1024];
Positioner.position(x, y, z);

 This is clearly a potential 'barrier to adoption'

35 PROPOSED APARAPI ENHANCEMENT: ALLOW ACCESS TO ARRAYS OF OBJECTS  Proposed enhancement will allow Aparapi Kernels to access arrays (or array-based collections) of objects  At runtime Aparapi: –Tracks all fields accessed via objects reachable from Kernel.run() –Extracts the data from these fields into a parallel-array form –Executes the Kernel using the parallel-array form –Copies the data back into the original object array  These extra steps do impact performance (compared with the refactored data parallel form) –However, we can still demonstrate performance gains over non-Aparapi versions

36 FUTURE WORK  Sync with 'project lambda' (Java 8) and allow kernels to be represented as lambda expressions  Continue to investigate automatic extraction of buffer transfers from object collections  Hand more explicit control to 'power users' –Explicit buffer (or even sub buffer) transfers –Expose local memory and barriers  Open Source –Aiming for Q3 Open Source release of Aparapi –License TBD, probably BSD variant –Still reviewing hosting options –Encourage community contributions

37 SIMILAR INTERESTING/RELATED WORK  Tidepowerd –Offers a similar solution for .NET –NVIDIA cards only at present  java-gpu –An open source project for extracting kernels from nested loops –Extracts code structure from bytecode –Creates CUDA behind the scenes  GRAPHITE-OpenCL –Auto-detects data parallel loops in the gcc compiler and generates OpenCL + host code for those loops

38 SUMMARY  APUs/GPUs offer unprecedented performance for the appropriate workload  Don't assume everything can/should execute on the APU/GPU  Profile your Java code to uncover potential parallel opportunities  Aparapi provides an ideal framework for executing data-parallel code on the GPU  Find out more about Aparapi at  Participate in the upcoming Aparapi Open Source community

39 QUESTIONS

40 Disclaimer & Attribution The information presented in this document is for informational purposes only and may contain technical inaccuracies, omissions and typographical errors. The information contained herein is subject to change and may be rendered inaccurate for many reasons, including but not limited to product and roadmap changes, component and motherboard version changes, new model and/or product releases, product differences between differing manufacturers, software changes, BIOS flashes, firmware upgrades, or the like. There is no obligation to update or otherwise correct or revise this information. However, we reserve the right to revise this information and to make changes from time to time to the content hereof without obligation to notify any person of such revisions or changes. NO REPRESENTATIONS OR WARRANTIES ARE MADE WITH RESPECT TO THE CONTENTS HEREOF AND NO RESPONSIBILITY IS ASSUMED FOR ANY INACCURACIES, ERRORS OR OMISSIONS THAT MAY APPEAR IN THIS INFORMATION. ALL IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE ARE EXPRESSLY DISCLAIMED. IN NO EVENT WILL ANY LIABILITY TO ANY PERSON BE INCURRED FOR ANY DIRECT, INDIRECT, SPECIAL OR OTHER CONSEQUENTIAL DAMAGES ARISING FROM THE USE OF ANY INFORMATION CONTAINED HEREIN, EVEN IF EXPRESSLY ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. AMD, AMD Radeon, the AMD arrow logo, and combinations thereof are trademarks of Advanced Micro Devices, Inc. All other names used in this presentation are for informational purposes only and may be trademarks of their respective owners. OpenCL is a trademark of Apple Inc. used under license to the Khronos Group, Inc. NVIDIA, the NVIDIA logo, and CUDA are trademarks or registered trademarks of NVIDIA Corporation. Java, JVM, JDK and "Write Once, Run Anywhere" are trademarks of Oracle and/or its affiliates. © 2011 Advanced Micro Devices, Inc. All rights reserved.

