
1 Adaptive Optimization in the Jalapeño JVM Matthew Arnold Stephen Fink David Grove Michael Hind Peter F. Sweeney Source: CS598 @ UIUC

2 Talk overview
– Introduction: Background & Jalapeño JVM
– Adaptive Optimization System (AOS)
– Multi-level recompilation
– Miscellaneous issues
– Feedback-directed inlining
– Conclusion

3 Background
Three waves of JVMs:
– First: Compile each method when first encountered, using a fixed set of optimizations
– Second: Determine hot methods dynamically and compile them with more advanced optimizations
– Third: Feedback-directed optimizations
The Jalapeño JVM targets the third wave, but the current implementation is second wave.

4 Jalapeño JVM
– Written in Java (core services precompiled to native code in a boot image)
– Compiles at four levels: baseline, 0, 1, & 2
– Compile-only strategy (no interpretation)
– Yield points enable quasi-preemptive thread switching

5 Talk progress
– Introduction: Background & Jalapeño JVM
– Adaptive Optimization System (AOS)
– Multi-level recompilation
– Miscellaneous issues
– Feedback-directed inlining
– Conclusion

6 Adaptive Optimization System

7 AOS: Design
– A “distributed, asynchronous, object-oriented design” is useful for managing lots of data, say the authors.
– Each successive pipeline stage (from raw data to compilation decisions) performs increasingly complex analysis on decreasing amounts of data.

8 Talk progress
– Introduction: Background & Jalapeño JVM
– Adaptive Optimization System (AOS)
– Multi-level recompilation
– Miscellaneous issues
– Feedback-directed inlining
– Conclusion

9 Multi-level recompilation

10 Multi-level recompilation: Sampling
– Sampling occurs on thread switch
– Thread switches are triggered by clock interrupts
– A thread switch can occur only at a yield point
– Yield points are placed at method invocations and loop back edges
Discussion: Is this approach biased?
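The sampling scheme above can be sketched as follows. This is a minimal illustration with hypothetical names, not Jalapeño's actual code: when a clock interrupt forces a thread switch at a yield point, the runtime attributes one sample to the method containing that yield point.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of yield-point sampling (hypothetical names, not Jalapeño's API).
// Each thread switch lands on a yield point; the enclosing method gets one
// sample, so frequently executing methods accumulate samples over time.
class MethodSamples {
    private final Map<String, Integer> counts = new HashMap<>();

    // Called when a clock-interrupt-driven thread switch occurs at a
    // yield point inside `method`.
    void takeSample(String method) {
        counts.merge(method, 1, Integer::sum);
    }

    int samplesFor(String method) {
        return counts.getOrDefault(method, 0);
    }
}
```

Because samples can only land on yield points, code with no calls or back edges is invisible to this profiler, which is exactly the bias the next slide illustrates.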

11 Multi-level recompilation: Biased sampling
[Figure: examples of sampling bias — code with no method calls or back edges; a short method vs. a long method containing a method call]

12 Multi-level recompilation: Cost-benefit analysis
Method m is compiled at level i; estimate:
– T_i, the expected time the program will spend executing m if m is not recompiled
– C_j, the cost of recompiling m at optimization level j, for i ≤ j ≤ N
– T_j, the expected time the program will spend executing m if m is recompiled at level j
– If, for the best j, C_j + T_j < T_i, recompile m at level j.
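The decision rule can be sketched directly from these definitions. This is a minimal sketch with hypothetical names; the cost and time estimates are taken as given inputs here (the next slides explain how they are derived).

```java
// Sketch of the cost-benefit recompilation decision (hypothetical names).
// Given the current level i, costs cost[j] and expected times time[j] for
// each level j, pick the level minimizing cost[j] + time[j]; recompile
// only if that beats staying at level i (which incurs no compile cost).
class RecompileDecision {
    // Returns the chosen level, or i if recompilation is not worthwhile.
    static int chooseLevel(int i, double[] cost, double[] time) {
        int best = i;
        double bestTotal = time[i]; // staying at level i costs just T_i
        for (int j = i + 1; j < cost.length; j++) {
            double total = cost[j] + time[j]; // C_j + T_j
            if (total < bestTotal) {
                bestTotal = total;
                best = j;
            }
        }
        return best;
    }
}
```

Note that if the method's expected remaining time T_i is small, no level's compile cost can be recouped and the method stays at its current level.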

13 Multi-level recompilation: Cost-benefit analysis (continued)
Estimate T_i as: T_i = T_f * P_m
– T_f is the future running time of the program
– We estimate that the program will run for as long as it has run so far

14 Multi-level recompilation: Cost-benefit analysis (continued)
– P_m is the percentage of T_f spent in m
– P_m is estimated from sampling
– Sample frequencies decay over time.
Discussion:
– Why is this a good idea?
– Could it be a disadvantage in certain cases?
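Decaying sample frequencies can be sketched as a periodic scaling of all counters. This is an illustrative sketch with hypothetical names; the decay factor and period are assumptions, not values from the paper. The effect is that P_m tracks the program's recent behavior rather than its entire history.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of decaying sample counts (hypothetical names; decay factor is
// an illustrative assumption). Periodically multiplying every counter by
// a factor < 1 weights recent samples more heavily, so a method that was
// hot early but has gone cold gradually loses its claim on recompilation.
class DecayingSamples {
    private final Map<String, Double> counts = new HashMap<>();
    private final double decay; // e.g. 0.5 halves all counts each period

    DecayingSamples(double decay) { this.decay = decay; }

    void takeSample(String method) {
        counts.merge(method, 1.0, Double::sum);
    }

    // Called once at the end of each sampling period.
    void decayAll() {
        counts.replaceAll((m, c) -> c * decay);
    }

    double weightFor(String method) {
        return counts.getOrDefault(method, 0.0);
    }
}
```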

15 Multi-level recompilation: Cost-benefit analysis (continued)
Statically-measured speedups S_i and S_j are used to determine T_j: T_j = T_i * S_i / S_j
– Statically-measured speedups?!
– Is there any way to do better?

16 Multi-level recompilation: Cost-benefit analysis (continued)
C_j (the cost of recompilation) is estimated using a linear model of compilation speed for each optimization level: C_j = a_j * size(m), where a_j is a constant for level j
– Is it reasonable to assume a linear model?
– Is it OK to use statically-determined a_j?
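The estimates from the last few slides combine into one model. The sketch below uses hypothetical names, and the S_j and a_j constants are illustrative assumptions, not the paper's measured values; it only shows the shape of the arithmetic: T_i = T_f * P_m, T_j = T_i * S_i / S_j, and C_j = a_j * size(m).

```java
// Sketch of the combined cost-benefit model (illustrative constants).
class CostBenefitModel {
    // Statically-measured speedups S_j per level (assumed values).
    static final double[] SPEEDUP = {1.0, 2.0, 3.0, 3.5};
    // Compile cost per unit of method size, a_j per level (assumed values).
    static final double[] COST_PER_UNIT = {0.0, 1.0, 2.0, 4.0};

    // T_i = T_f * P_m, assuming the program runs as long as it has so far.
    static double timeAtCurrentLevel(double timeSoFar, double pM) {
        double tF = timeSoFar; // future running time ≈ past running time
        return tF * pM;
    }

    // T_j = T_i * S_i / S_j.
    static double timeAtLevel(double tI, int i, int j) {
        return tI * SPEEDUP[i] / SPEEDUP[j];
    }

    // C_j = a_j * size(m).
    static double recompileCost(int j, double methodSize) {
        return COST_PER_UNIT[j] * methodSize;
    }
}
```

With these numbers, a method that consumed 25% of a 200-unit run so far has T_i = 50; promoting it from the baseline to level 2 costs 20 and cuts its expected time to about 16.7, so recompilation wins.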

17 Multi-level recompilation: Results

18 Multi-level recompilation: Results (continued)

19 Multi-level recompilation: Discussion
– Adaptive multi-level compilation does better than a JIT at any fixed level in the short term.
– But in the long run, performance is slightly worse than JIT compilation.
– The primary target is server applications, which tend to run for a long time.

20 Multi-level recompilation: Discussion (continued)
So what’s so great about Jalapeño’s AOS?
– The current AOS implementation gives good results for both the short and the long term
– A JIT compiler can’t do both cases well because its optimization level is fixed
– The AOS can be extended to support feedback-directed optimizations, such as fragment creation (as in Dynamo) and determining whether an optimization was effective

21 Talk progress
– Introduction: Background & Jalapeño JVM
– Adaptive Optimization System (AOS)
– Multi-level recompilation
– Miscellaneous issues
– Feedback-directed inlining
– Conclusion

22 Miscellaneous issues: Multiprocessing
The authors say that if a processor is idle, recompilation can be done almost for free.
– Why almost for free?
– Are there situations when you could get free recompilation on a uniprocessor?

23 Miscellaneous issues: Models vs. heuristics
– The authors are moving toward an “analytic model of program behavior” and the elimination of ad-hoc tuning parameters.
– Tuning parameters proved difficult because of “unforeseen differences in application behavior.”
– Is it believable that ad-hoc parameters can be eliminated and replaced with models?

24 Miscellaneous issues: More intrusive optimizations
The future of Jalapeño is more intrusive optimizations, such as compiler-inserted instrumentation for profiling.
Advantages and disadvantages compared with the current system?
– Advantages: performance gains in the long term; adjusts to phased behavior
– Disadvantages: unlike with sampling, you can’t profile all the time; harder to adaptively throttle overhead

25 Miscellaneous issues: Stack frame rewriting
– In the future, Jalapeño will support rewriting a baseline stack frame as an optimized stack frame.
– The authors say that rewriting an optimized stack frame as another optimized stack frame is more difficult.
– Why?

26 Talk progress
– Introduction: Background & Jalapeño JVM
– Adaptive Optimization System (AOS)
– Multi-level recompilation
– Miscellaneous issues
– Feedback-directed inlining
– Conclusion

27 Feedback-directed inlining

28 Feedback-directed inlining: More cost-benefit analysis
The boost factor is estimated as follows:
– The boost factor b is a function of:
1. The fraction f of dynamic calls attributed to the call edge in the sampling-approximated call graph
2. An estimate s of the benefit (i.e., speedup) from eliminating virtually all calls from the program
– Presumably something like b = f * s.
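The presumed formula can be sketched per call edge. Note the slide itself hedges that b = f * s is a guess at the combination; the names and the speedup value below are purely illustrative.

```java
// Sketch of the presumed boost-factor estimate b = f * s for one call edge
// (hypothetical names; the b = f * s form is the slide's own conjecture).
class InlineBoost {
    // edgeSamples / totalCallSamples gives f, the fraction of dynamic
    // calls attributed to this edge in the sampled call graph;
    // callEliminationSpeedup is s, the estimated benefit of removing
    // virtually all calls from the program.
    static double boostFactor(double edgeSamples, double totalCallSamples,
                              double callEliminationSpeedup) {
        double f = edgeSamples / totalCallSamples;
        return f * callEliminationSpeedup; // b = f * s
    }
}
```

An edge carrying a larger share of the sampled calls thus gets a proportionally larger boost when the inlining cost-benefit decision is made.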

29 Feedback-directed inlining: Results Why?

30 Talk progress
– Introduction: Background & Jalapeño JVM
– Adaptive Optimization System (AOS)
– Multi-level recompilation
– Miscellaneous issues
– Feedback-directed inlining
– Conclusion

31 Conclusion
– The AOS is designed to support feedback-directed optimizations (third wave)
– The current AOS implementation only supports selective optimization (second wave)
– It improves short-term performance without hurting the long term
– It uses a mix of cost-benefit models and ad-hoc methods
– Future work will use more intrusive performance monitoring (e.g., instrumentation for path profiling, checking that an optimization improved performance)

