Talk

Java on CRaC: Superfast JVM Application Startup
Conference (INTERMEDIATE level)
Room 5

One of the key benefits of a microservice architecture is the ability to dynamically respond to changes in load by spinning up new instances as required. However, when deploying JVM-based services, the warmup time of JIT compilation can be a limitation on the effectiveness of this technique.

One approach to solving this problem is using native images, where the service is statically compiled using an ahead-of-time (AOT) compiler. Although this does reduce startup time and footprint, it does so at the cost of overall performance.

A new project has been proposed and accepted into OpenJDK called CRaC (Coordinated Restore at Checkpoint). The goal is to research the co-ordination of Java programs with mechanisms to checkpoint (snapshot) a running application. Restoring from the snapshot could solve some of the problems with the startup and warmup times, especially in microservices.

In this session, we’ll look at the main challenges for such a project, potential solutions and the results from a proof-of-concept implementation.

Simon Ritter
Azul

Simon Ritter is the Deputy CTO of Azul Systems. Simon has been in the IT business since 1984 and holds a Bachelor of Science degree in Physics from Brunel University in the U.K. 

Simon joined Sun Microsystems in 1996 and started working with Java technology from JDK 1.0; he has spent time working in both Java development and consultancy. Having moved to Oracle as part of the Sun acquisition, he managed the Java Evangelism team for the core Java platform. Now at Azul, he continues to help people understand Java as well as Azul’s JVM technologies and products. Simon has twice been awarded Java Rockstar status at JavaOne and is a Java Champion. He represents Azul on the Java SE Expert Group, JCP Executive Committee, OpenJDK Vulnerability Group and Adoptium Steering Committee.

Generated Summary
WARNING: This summary was generated using GPT based on the transcript, as a result spelling mistakes and more importantly hallucinations can be present.

Improving the Efficiency of Java Applications
Why is Java so Popular?
Java is one of the most popular programming languages, used for a wide variety of applications. This is due to its portability, scalability, safety and security, and performance. It relies on the Java Virtual Machine (JVM) to interpret bytecode and run native instructions for the platform it is running on.
Application Startup and Warm-up
Starting a Java application involves much more work than starting an application written in a language like C or C++. The JVM has to initialize itself, generate the interpreter's machine-code template for each bytecode, load classes (unpacking them from JAR files), run class initializers, and finally call the application's main method. After that comes the warm-up phase, in which frequently called methods are identified and compiled. JVM startup itself is usually very quick (milliseconds), while application startup depends on how many classes are loaded and how much initialization code runs.
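As a small illustration (not from the talk; the class name and output are purely illustrative), the JDK's standard unified logging option -Xlog:class+load=info makes this startup work visible:

    // A minimal sketch: run with
    //   java -Xlog:class+load=info StartupDemo
    // to log every class loaded before and during application startup.
    public class StartupDemo {
        // Static initializer: runs once when the class is initialized at startup.
        static {
            System.out.println("StartupDemo class initialized");
        }

        public static void main(String[] args) {
            System.out.println("main() reached after JVM init, class loading and class init");
        }
    }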
Just-in-Time Compilation
Java was initially slow compared to C and C++, so just-in-time (JIT) compilation was introduced to improve its performance. The JVM monitors which methods are called frequently and compiles them into optimized native instructions with the C1 and C2 JIT compilers. However, every time an application is run it has to start in slow interpreted mode and then warm up until it reaches its optimum performance level, which can take minutes or even hours.
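As an illustrative sketch (class and method names are not from the talk), timing batches of calls to a small hot method shows the warm-up effect: later batches run faster once C1 and then C2 have compiled the method, and running with -XX:+PrintCompilation shows the corresponding compilation events.

    // A minimal warm-up demo; run with
    //   java -XX:+PrintCompilation WarmupDemo
    // and watch per-batch timings drop as the JIT compilers kick in.
    public class WarmupDemo {
        // A small, frequently called method the JIT will eventually compile.
        static long mix(long x) {
            x ^= x << 13;
            x ^= x >>> 7;
            x ^= x << 17;
            return x;
        }

        public static void main(String[] args) {
            long seed = 42;
            for (int batch = 0; batch < 10; batch++) {
                long start = System.nanoTime();
                for (int i = 0; i < 5_000_000; i++) {
                    seed = mix(seed);
                }
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println("batch " + batch + ": " + elapsedMs + " ms (seed " + seed + ")");
            }
        }
    }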
Application Class Data Sharing
Application Class Data Sharing (AppCDS) is a more recent OpenJDK feature that speeds up startup by writing the JVM's internal class metadata out to an archive file and memory-mapping that archive the next time the application starts, instead of re-loading and re-parsing the classes. Because the archive is mapped read-only, the same region of memory can be shared by multiple JVMs on the same machine, reducing the overall footprint.
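A hedged sketch of how this is typically used with the standard OpenJDK dynamic-archive flags (JDK 13 and later); the class name and archive path are illustrative:

    // 1. Run the application once and write its loaded-class metadata to an archive:
    //      java -XX:ArchiveClassesAtExit=app-cds.jsa CdsDemo
    // 2. Subsequent runs map the archive into memory instead of re-parsing classes:
    //      java -XX:SharedArchiveFile=app-cds.jsa CdsDemo
    public class CdsDemo {
        public static void main(String[] args) {
            System.out.println("Started; class metadata can be mapped from the shared archive");
        }
    }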
Ahead-of-Time Static Compilation
An alternative approach is ahead-of-time (AOT) static compilation, where Java code is compiled into native instructions for a specific processor, avoiding the need to interpret bytecodes and compile them while the application runs. GraalVM Native Image is a tool that can create such native executables from applications written in Java, generating a binary with much faster start-up.
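As an illustrative example (not from the talk), building such a binary with GraalVM's native-image tool might look like this, assuming GraalVM and its native-image component are installed:

    // javac HelloNative.java
    // native-image HelloNative      (produces a standalone 'hellonative' executable)
    // ./hellonative                 (starts in milliseconds, with no JIT warm-up)
    public class HelloNative {
        public static void main(String[] args) {
            System.out.println("Hello from an ahead-of-time compiled binary");
        }
    }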
Profile-Guided Optimization
Profile-guided optimization is a technique almost as old as optimizing compilers themselves. The code is compiled into native instructions, run under a representative workload while being profiled, and the resulting profile is fed back into the compiler so the code can be recompiled with better-informed optimization decisions for the platform it runs on.
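A hedged sketch of that loop as offered by GraalVM Native Image's profile-guided optimization (the --pgo-instrument and --pgo flags are taken from its documentation, but their availability depends on the GraalVM distribution; the class name and workload are illustrative):

    // 1. Build an instrumented binary:        native-image --pgo-instrument PgoDemo
    // 2. Run a representative workload:       ./pgodemo        (writes default.iprof)
    // 3. Recompile using the profile:         native-image --pgo=default.iprof PgoDemo
    public class PgoDemo {
        public static void main(String[] args) {
            // Representative workload that exercises the hot paths during profiling.
            long sum = 0;
            for (int i = 0; i < 10_000_000; i++) {
                sum += (i % 3 == 0) ? i : -i;
            }
            System.out.println(sum);
        }
    }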
Monomorphic Site Inlining and Speculative Optimization
Other techniques discussed include monomorphic call-site inlining, a basic optimization in which a virtual call that only ever resolves to one implementation is inlined, eliminating the overhead of the call and its stack frame. Speculative optimization is another way compilers improve performance: using branch analysis to count how many times execution goes through each side of an if statement, the compiler optimizes for the common path and deoptimizes if that assumption is later violated.
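A minimal sketch (names illustrative, not from the talk) showing both situations: the call s.area() only ever sees one implementation, so the JIT can inline it, and the error branch in classify() is never taken, so the JIT can speculatively optimize for the common path and deoptimize if that ever changes:

    interface Shape {
        double area();
    }

    final class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    public class InlineDemo {
        static double classify(Shape s) {
            double a = s.area();   // monomorphic call site: only Circle is ever observed
            if (a < 0) {           // speculation: this branch is never taken in practice
                throw new IllegalStateException("negative area");
            }
            return a;
        }

        public static void main(String[] args) {
            double total = 0;
            for (int i = 0; i < 1_000_000; i++) {
                total += classify(new Circle(i % 10));
            }
            System.out.println(total);
        }
    }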
ReadyNow
ReadyNow is Azul's way of bridging the gap between AOT and JIT compilation: profiling information recorded from a previous run is reused at startup so the JIT can compile the hot methods immediately, resulting in faster startup and the same overall performance level as JIT-compiled code.
CRIU and CRaC
CRIU (Checkpoint/Restore In Userspace) is a Linux facility that allows a running process to be frozen at a point in time and its state written to a set of files, from which it can later be restored. CRaC (Coordinated Restore at Checkpoint) builds on this for the JVM, making applications aware of checkpointing and restoring so that they can properly handle the transition, for example by closing and reopening files and network connections.
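A hedged sketch of what a checkpoint-aware service might look like using the org.crac library that fronts CRaC (the Resource callbacks and Core.getGlobalContext() follow the project's documentation; the class name and behaviour are illustrative):

    import org.crac.Context;
    import org.crac.Core;
    import org.crac.Resource;

    public class CracAwareService implements Resource {

        @Override
        public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
            // Release state that must not be captured in the checkpoint,
            // e.g. sockets, file handles, connection pools.
            System.out.println("Closing connections before checkpoint");
        }

        @Override
        public void afterRestore(Context<? extends Resource> context) throws Exception {
            // Re-open connections and refresh anything time- or host-dependent.
            System.out.println("Re-opening connections after restore");
        }

        public static void main(String[] args) throws Exception {
            CracAwareService service = new CracAwareService();
            // Register with the global context so the JVM notifies this resource
            // around checkpoint and restore events.
            Core.getGlobalContext().register(service);

            System.out.println("Service running; a checkpoint can now be requested");
            Thread.sleep(Long.MAX_VALUE);
        }
    }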
Checkpoint Restore
Checkpoint Restore is a project that can pause a JVM-based application, save its complete state, and restore it at any point in the future, with extremely fast startup times and full performance. Because the restored JVM already contains its JIT-compiled code and profiling data, there is no need to identify hot spots and recompile again, so there is no warm-up load on the CPU when it starts up.
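A hedged sketch of the checkpoint/restore workflow with a CRaC-enabled JDK: the -XX:CRaCCheckpointTo and -XX:CRaCRestoreFrom flags and the Core.checkpointRestore() call are the ones documented by the OpenJDK CRaC project, while the class name, paths and warm-up code are illustrative:

    // First run: warm up, then checkpoint the whole JVM into /tmp/cr
    //   java -XX:CRaCCheckpointTo=/tmp/cr CheckpointAtBoot
    // Later runs: restore from the checkpoint files and resume immediately
    //   java -XX:CRaCRestoreFrom=/tmp/cr
    import org.crac.Core;

    public class CheckpointAtBoot {
        public static void main(String[] args) throws Exception {
            warmUp();                  // load classes and let the JIT compile hot paths
            Core.checkpointRestore();  // checkpoint here; every restore resumes from this point
            System.out.println("Serving requests with a warm, restored JVM");
        }

        private static void warmUp() {
            long x = 1;
            for (int i = 0; i < 1_000_000; i++) {
                x = x * 31 + i;
            }
            System.out.println("Warm-up done: " + x);
        }
    }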
Conclusion
This talk discussed the various ways to improve the efficiency of code when executing on a particular platform. It introduced techniques such as profile-guided optimization, monomorphic site inlining, speculative optimization, Ahead-of-Time static compilation, Application Class Data Sharing, and Checkpoint Restore. Checkpoint Restore can bring down startup times from around four seconds to 38 milliseconds for Spring Boot, from over one second to 46 milliseconds for Micronaut, and from 980 milliseconds to 33 milliseconds for Quarkus. This project has the potential to significantly improve the performance of Java-based applications.