Why am I writing this?
It’s been a year since the start of my internship with the Eclipse OpenJ9 team. During this time, I’ve learned so much about compilers, cloud computing, and working with a team, and I even found a new hobby. Today, I want to reflect on this experience and tell you what it’s like to work on the OpenJ9 team. This post might be of interest to students or anyone else considering working on OpenJ9.
Finding and starting the job
I started this job after finishing my third year at the University of Toronto. My school’s internship program is unusual: instead of multiple 4-8 month internships, we work at a single company for 12-16 months after our second or third year. I decided to do my work term with OpenJ9 because I really enjoyed the systems programming and OS courses in school, and working on a compiler sounded similar and pretty interesting. Another reason is that OpenJ9 is open source, so all my work would be publicly available, which I thought would be a nice thing to have on my resume.
I was really excited to start my first real internship, but at the same time I was scared because I did not know anything about compilers. No one expected me to know much, but I was still worried that it would be hard to keep up with the workload. At first I was overwhelmed by the amount of material I needed to learn, but things worked out well in the end, all thanks to a great team who taught me most of what I needed and promptly answered any questions I had. Furthermore, the OpenJ9 team regularly hosts learning events such as lunch-and-learns, compiler vitality talks, workshops, and more. These were really helpful, because compilers are complicated, and the OpenJ9 code base is old, large, and complex; unfortunately, not all components have guides and documentation available.
What is JIT-as-a-Service?

In short, JIT-as-a-Service (JITaaS) decouples the JIT compiler from the JVM and makes it its own process, turning a Java process into a client-server pair. When the client VM wants to compile a method, it sends a request to the server. The server compiles the method and sends the compiled code back to the client, which relocates it and installs it in the code cache. During compilation, the server has to make additional queries to the client to acquire runtime-specific information about loaded classes, VM state, etc. that is required for compilation. JITaaS is a prototype project that is currently in active development and is completely functional: the server can compile methods using most optimizations available in the traditional OpenJ9 JIT, and it can compile for an Ahead-Of-Time (AOT) mode of execution.
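To make this message flow concrete, here is a toy Python sketch of the client-server round trip. Every class, method, and message name below is invented for illustration; the actual OpenJ9 implementation is an internal protocol between C++ processes.

```python
# Toy sketch of the JITaaS round trip described above. All names are
# hypothetical; the real protocol is internal to OpenJ9, not Python calls.

class ClientVM:
    def __init__(self):
        self.code_cache = {}
        # Runtime-specific facts that only the client-side VM knows.
        self.runtime_info = {("superclass", "java/lang/String"): "java/lang/Object"}

    def request_compilation(self, server, method):
        compiled = server.compile(self, method)
        # Relocate and install the compiled body in the local code cache.
        self.code_cache[method] = compiled

    def answer_query(self, kind, arg):
        # In the real system, each answer costs a full network round trip.
        return self.runtime_info[(kind, arg)]

class JITServer:
    def compile(self, client, method):
        # The server lacks the client's runtime state, so it queries back.
        superclass = client.answer_query("superclass", "java/lang/String")
        return f"<machine code for {method}; superclass is {superclass}>"

client = ClientVM()
client.request_compilation(JITServer(), "String.hashCode")
print(client.code_cache)
```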

The main benefits are a smaller memory footprint and lower CPU consumption on the client, without sacrificing throughput. This lets us run Java programs in memory- and CPU-constrained environments, e.g. small Docker containers.
The main drawback is that compilations take longer due to network latency, which slows down client startup.
As you can probably tell, aside from the challenges of decoupling the compiler from the rest of the JVM, this project also faces the challenges that come with any cloud service (e.g. latency).
Contributing to JIT-as-a-Service
It took me a few months to get up to speed, but in the meantime I still got to work on some interesting items. Probably the best thing about this internship is that I get to work on many challenging and important problems and enjoy solving them. I never do the dull, meaningless work typically associated with interning at a large company. Oftentimes, I can even choose which item to work on.
When I started my internship, one of the main advantages – lower CPU consumption on the client – was non-existent. The JITaaS client consumed around 300% more CPU than the regular JIT, and we also lagged behind in throughput by 20%. Today, one year later, JITaaS consumes 60-70% less CPU than the regular JIT, and we are pretty much on par in terms of throughput. I am proud to say that I contributed to this big improvement. How did we achieve it? We’ve done lots of things, but in this post I want to focus on the CPU improvements, as they are something I spent a lot of time working on.
Why was CPU consumption so high in the first place? After all, if compilation is outsourced to the server, then most of the client’s CPU consumption should come from running the Java program itself. The problem is that the server requests lots of runtime-specific information from the client during compilation, which needs to be retrieved on the client and then sent over the network back to the server. A year ago, one compilation required ~300 such message pairs on average. Sending that much data over the network takes a surprisingly large number of CPU cycles. Thus, it was clear what needed to be done to reduce CPU consumption: reduce the number of remote queries.
We used two main approaches to achieve this goal:
- Cache results of queries on the server.
- Change compiler code to request multiple items in one query, instead of making a new query for each item (the sketch after this list illustrates the idea).
Using these two strategies in combination with a few others, we were able to get down to ~100 message pairs per compilation.
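To make the batching strategy concrete, here is a hypothetical before-and-after comparison in Python; FakeClient and the query kinds are invented for this example and are not OpenJ9 interfaces.

```python
# Hypothetical sketch of query batching; names are illustrative only.

class FakeClient:
    def __init__(self):
        self.round_trips = 0

    def query(self, kind, arg):
        self.round_trips += 1  # each call models one network round trip
        return f"answer({kind}, {arg})"

def one_query_per_item(client, field_names):
    # N fields -> N round trips.
    return [client.query("field_info", name) for name in field_names]

def one_query_per_batch(client, field_names):
    # N fields -> 1 round trip carrying all the answers.
    return client.query("field_info_batch", tuple(field_names))

client = FakeClient()
one_query_per_item(client, ["value", "hash", "coder"])
one_query_per_batch(client, ["value", "hash", "coder"])
print(client.round_trips)  # 3 round trips for the first call, 1 for the second
```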
One simple example of caching: J9 classes. A J9Class object represents a Java class. It stores a lot of information about a class: a pointer to its superclass, a pointer to the constant pool, class flags with metadata, etc. J9 classes are loaded by the VM on the client, so with no caching, if the server wants to get the superclass pointer of java/lang/String, it has to send a query to the client, wait for the response, read it, and only then return the result. However, we know that as long as a class is loaded, its superclass cannot change; so once we obtain the result the first time, we can cache it on the server for the entire lifetime of the client. The next time the same query is made, we just retrieve the answer from the cache.
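As a rough illustration (in Python rather than the C++ used by OpenJ9, and with invented names), a lifetime-of-the-client cache boils down to this:

```python
# Minimal sketch of a server-side cache for facts that cannot change
# while a class stays loaded. remote_query stands in for a real
# network round trip to the client; all names are illustrative only.

superclass_cache = {}  # lives for the entire lifetime of the client

def remote_query(kind, j9class):
    # Pretend this crosses the network and asks the client VM.
    return {"java/lang/String": "java/lang/Object"}[j9class]

def get_superclass(j9class):
    if j9class not in superclass_cache:
        superclass_cache[j9class] = remote_query("superclass", j9class)  # miss
    return superclass_cache[j9class]  # hit: no network traffic

print(get_superclass("java/lang/String"))  # first call: one round trip
print(get_superclass("java/lang/String"))  # second call: served from cache
```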
Unfortunately, not all things are so easily cacheable. Oftentimes, the information being queried can change between compilations, so we cannot cache it indefinitely. In some cases, it’s still worthwhile to cache it for the duration of a single compilation. Even though the cache will not exist for long, we can still get high hit rates and a big reduction in the number of queries. Resolved methods are an example of this. A resolved method represents a Java method and contains lots of information that can change between compilations, e.g. whether the method is compiled or interpreted. I implemented local, per-compilation caching for resolved methods, and it still resulted in a significant reduction in CPU consumption, despite most caches existing for very short periods of time.
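The per-compilation variant differs only in the cache’s lifetime: it is created when a compilation starts and discarded when it ends. A hedged sketch, again with made-up names:

```python
# Sketch of per-compilation caching: the cache is scoped to a single
# compilation, because the cached facts may change between compilations.

def compile_method(method, remote_query):
    resolved_method_cache = {}  # discarded when this compilation finishes

    def resolved_method_info(ref):
        if ref not in resolved_method_cache:
            resolved_method_cache[ref] = remote_query("resolved_method", ref)
        return resolved_method_cache[ref]

    # One compilation often asks about the same method many times,
    # so even this short-lived cache gets a high hit rate.
    info = resolved_method_info("java/lang/String.length()I")
    info = resolved_method_info("java/lang/String.length()I")  # cache hit
    return f"<code for {method} using {info}>"

print(compile_method("String.isEmpty", lambda kind, ref: f"info({ref})"))
```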
Caching was not the only thing I worked on. I also implemented new features, such as enabling support for optimizations in JITaaS (e.g. static final field folding), fixed bugs, improved performance, and wrote scripts (mostly in Python) to analyze logs produced by the compiler. For instance, I wrote a script that, given a list of all the queries sent by the server, finds the most frequent sequences of queries. This proved to be useful for identifying opportunities where we could merge multiple queries into a single query, further reducing CPU usage.
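That script isn’t shown in this post, but the core idea fits in a few lines: slide a window over the query log and count n-grams. A minimal sketch, assuming a simplified log format of one query name per line (not the format the OpenJ9 compiler actually produces):

```python
# Sketch of finding the most frequent query sequences in a log.
from collections import Counter

def frequent_sequences(query_names, length=3, top=5):
    # Count every contiguous run of `length` queries (an n-gram).
    ngrams = zip(*(query_names[i:] for i in range(length)))
    return Counter(ngrams).most_common(top)

queries = ["superclass", "constant_pool", "field_info",
           "superclass", "constant_pool", "field_info",
           "resolved_method"]
for seq, count in frequent_sequences(queries):
    print(count, " -> ".join(seq))
```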
What’s next
I still have 4 months of internship left, during which I’ll continue working on JITaaS. One of the main goals is to merge JITaaS code into the master branch (it is currently on a separate jitaas branch) and make it into an actual product that will bring value to customers. Our team is getting new interns who are starting soon, and one of my main objectives will be training them, just like a previous intern trained me. I’m grateful that I got a chance to work on OpenJ9. If you want to contribute, see the OpenJ9 Contributing guide, and feel free to get started by browsing the code here. See the OpenJ9 website for how to join us on Slack, where we are active in the #jitaas channel.