With contributions from Hang Shao.

By now you’ve no doubt heard of Eclipse OpenJ9 and its reputation for starting applications faster and using less memory. If not, you should visit the Eclipse OpenJ9 Performance page. The Shared Class Cache (SCC) capability is one of the main reasons why OpenJ9 can start faster and use less memory, particularly if you have multiple Java Virtual Machines (JVMs) running at the same time.

The Shared Class Cache

The SCC is a file that contains Java classes, compiled machine code, and other data. It allows for faster start-up because classes are stored in an internal format that OpenJ9 can use immediately, rather than the more general .class format. Additionally, it contains compiled code which executes much faster than Java bytecode. The SCC also reduces memory usage because, as the name implies, it can be shared by all running JVMs. Without it, each JVM loads its own copies of its classes into memory. Class sharing in Eclipse OpenJ9 (IBM Developer, June 2018) is a thorough introduction to the SCC and gives great tips on how to make use of it.

The Multi-Layer Shared Class Cache

Multi-layer SCCs are a new feature in OpenJ9 0.17 that allows the OpenJ9 JVM to use multiple SCCs at the same time. Instead of loading a single SCC, the JVM loads a hierarchy of SCCs. This hierarchy forms a stack, and each SCC in the stack represents a layer. The SCC in the bottom layer (layer 0) of the stack is independent, but every other layer depends on the layers below it (i.e. layer N depends on layers 0 through N-1). Only the top layer can be modified; all lower layers are read-only. Each layer can be sized independently.


If you’re an application or middleware developer who has used SCCs before, you may be wondering why this new feature was introduced. Surely it’s easier to simply deal with a single SCC, isn’t it? That’s certainly true if you distribute your code by conventional means. If, however, you distribute your application through Docker, you’ll want to consider multi-layer SCCs.

Distributing SCCs in Docker

Before you do that, however, you’ll first want to consider distributing SCCs in your Docker images. Docker works best when everything an application needs is contained in its image. By including an SCC in your image, you enable your containers to start faster immediately, from the first time they’re run, in any environment. OpenJ9 class sharing in Docker containers (IBM Developer, February 2019) shows how to do this and the start-up performance improvements it achieves.
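As a rough sketch of that build-time warm-up, the commands below (typically placed in a Dockerfile RUN instruction) start the application once so OpenJ9 creates and fills the cache. The cache name, cache directory, and application jar are hypothetical placeholders, not a real application.

```shell
# Warm an SCC at image build time so every container starts fast from its
# first run. Names and paths here are placeholders.
# Starting the application once lets OpenJ9 create the cache and fill it
# with classes and AOT-compiled code as the app loads.
java -Xshareclasses:name=appcache,cacheDir=/opt/scc -Xscmx80m \
     -jar /app/app.jar &
APP_PID=$!
sleep 30        # give the application time to finish start-up
kill "$APP_PID"

# Confirm the cache was created.
java -Xshareclasses:cacheDir=/opt/scc,listAllCaches
```

The same -Xshareclasses options are then passed to the container’s entrypoint so running containers attach to the prepopulated cache.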

Now that you’re hopefully convinced that distributing SCCs in your Docker images is worth your while, we can finally talk about how you can take advantage of multi-layer SCCs for even more efficiency.

The COW Problem

Docker’s layered file system and the copy-on-write (COW) strategy it uses to manage modifications are well documented. COW strikes a good balance between efficiency and complexity when it comes to minimizing data duplication. It works well when applied to Docker images that have been carefully built to avoid including large writable files. Unfortunately, in most cases an application connecting to a shared cache will add sharable data to it, and usually the larger the SCC is, the more data can be shared. Both factors work against us when we include SCCs in our Docker images, because our SCCs will be copied repeatedly, leading to the same data being duplicated multiple times across images.

One Size Won’t Fit All

Another problem crops up if our image is used as a base for another image. To allow that image to cache its data together with ours, we have to leave some empty space in the SCC. This means using a larger SCC than we otherwise need, which compounds our data duplication problem. How much free space would we need? It’s difficult to predict, and it requires at least some level of coordination between images.

All told, this extra data has a real cost that undercuts some of the benefits of including SCCs in our Docker images. Larger images take longer to pull and more disk space to store. If you distribute a particularly popular base image that includes an SCC, the effects are even more widespread.

Layers To The Rescue

Multi-layer SCCs address both of the problems we’ve discussed. They sidestep the COW mechanism: each layer of the SCC is a separate file on disk, and since only the top-most layer is writable, creating a new layer each time a container starts leaves all of the lower layers untouched, so COW is never triggered on them. Multi-layer SCCs also let us size our cache layers individually: each image can size its own SCC layer for its needs, and dependent images simply create their own layers, sized to theirs.


To demonstrate the benefits of this new feature, let’s consider the following Docker scenario: we want to build a Docker image containing OpenJ9. We want to use it as a base for another image containing a Java application server, like Open Liberty, and we want to include an SCC. Finally, we want to create an image for our application using the app server image as a base, and we want our application’s classes in the cache as well.

We implement two variations of the above scenario using both a single-layer SCC and a multi-layer SCC and compare the resulting images with respect to 1) how much data is transferred when the image is pulled from a Docker registry like DockerHub, and 2) how much data is stored on disk once the image is installed.

What’s the difference between #1 and #2, you ask? Docker images pulled from a Docker registry are compressed while in flight, and typically uncompressed on disk. Measuring just one won’t usually give you an accurate sense of the other.


A graph showing a noticeable reduction in the amount of data pulled when using a multi-layer SCC vs. a single layer SCC.
Note: -Xscmx<size> is OpenJ9’s command line option for specifying the size of the SCC.

In the first variation of this scenario we used a fixed cache size for both the single- and multi-layer SCC tests: in the single-layer case there is one 80 MiB cache, and in the multi-layer case each layer is 80 MiB. Despite that, both the single- and multi-layer SCCs end up taking the same amount of space on disk in the final image. Why? Even though the single-layer SCC is a single file, it is written to in both image layers and is therefore copied into the second layer, so the final image effectively contains two copies of the SCC. We can also see that in both cases much of the space is empty. Sure, now that we’ve run the experiment we could shrink the initial size of the SCCs to reduce this empty space, but we only have that opportunity because we’re authoring all of the images. If you don’t know who will use your image as a base, or how many images will, you can’t pick an optimal cache size ahead of time.

The amount of data transferred shows us something different. First, in both cases the amount of data transferred is less than the size of the SCCs on disk. Thank you, compression. Second, the multi-layer SCC lets us transfer much less data. Why? Because once we compress away most of the empty space, we see that the single-layer SCC has left us with more data to transfer, much of it duplicate and unnecessary.

A graph showing a noticeable reduction in the size of the SCC on disk and a similar reduction in the amount of data pulled when using a multi-layer SCC vs. a single layer SCC.

In the second variation of this scenario we 1) created the SCC in the OpenJ9 image rather than the application server image, and 2) took the opportunity, in the multi-layer SCC test, to size each layer individually to reduce the amount of empty space. The payoff is a noticeable reduction in disk usage, while the amount of data transferred remains consistent with the previous variation. At this point you may be thinking that having to size layers is not so different from having to size a single-layer SCC, as previously discussed. If so, you’re right: it still requires knowing in advance how much space a layer will need. The difference is that we can make a purely “local” decision, based on the content of our own image, without worrying about dependent images. Dependent images can simply create their own layers in the SCC, sized to their requirements. The only thing images have to agree on now is the name and location of the SCC.


OK, so by now you’re convinced that using multi-layer SCCs in your Docker images is worth your while, but how do you do that? If you’re familiar with OpenJ9’s SCC and know your way around the command-line options that enable it, all you really need to learn is a couple of new sub-options. (If you need a refresher, head on over to OpenJ9’s -Xshareclasses documentation page, or the Class Data Sharing landing page for links to everything concerning SCCs.)

The -Xshareclasses:createLayer sub-option creates a new layer in the SCC that is to be used. Together with the -Xscmx<size> option you can create a new layer of a particular size.
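For instance (cache and path names below are placeholders), a dependent image could stack a smaller writable layer on top of the cache it inherited from its base image:

```shell
# Add a new 16 MiB top layer to the existing cache "appcache".
# All existing layers become read-only; only the new layer is written to.
java -Xshareclasses:name=appcache,cacheDir=/opt/scc,createLayer -Xscmx16m \
     -jar /app/my-app.jar
```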

The -Xshareclasses:layer=<N> sub-option does one of two things. If the SCC to be used has N or more layers, it specifies that layer N is to be the writable layer. (By default the writable layer is the top-most layer, so you don’t need this option to select the top layer. Also note that modifying a lower layer will invalidate any layers on top of it.) If the SCC to be used has exactly N-1 layers, it creates a new layer N and is equivalent to createLayer. If the SCC has fewer than N-1 layers, OpenJ9 emits an error, since you can’t create layer N unless layers 0 through N-1 already exist.

This sub-option can be useful in certain situations that createLayer can’t cope with, such as when multiple JVMs have to start concurrently and you can’t predict which will be first and don’t want them to each create a new layer, but it requires that you know how many layers already exist in the SCC. You can use -Xshareclasses:listAllCaches to find out the current top layer number. If possible, create a layer beforehand using the much simpler createLayer sub-option instead.
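A sketch of that workflow, with placeholder cache and path names:

```shell
# Find out how many layers the cache currently has (and the top layer number).
java -Xshareclasses:cacheDir=/opt/scc,listAllCaches

# Explicitly select layer 2 as the writable layer. If layer 2 exists it is
# reused (so concurrently starting JVMs all share it); if only layers 0 and 1
# exist, layer 2 is created, just as createLayer would do.
java -Xshareclasses:name=appcache,cacheDir=/opt/scc,layer=2 -jar /app/app.jar
```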

There are also a few new -Xshareclasses command-line utilities for multi-layer SCCs. You can use -Xshareclasses:name=cacheName,printTopLayerStats[=option[+option]] to see statistics for the top layer cache. For example, -Xshareclasses:name=cacheName,printTopLayerStats=romclass lists all the classes in the top layer. -Xshareclasses:name=cacheName,printStats[=option[+option]] shows statistics for all layers. Check -Xshareclasses:printStats=help and -Xshareclasses:printTopLayerStats=help for the available sub-options.

You can run -Xshareclasses:name=cacheName,destroy to destroy the top layer and -Xshareclasses:name=cacheName,destroyAllLayers to destroy all layers.
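Putting the utilities together (again with placeholder cache and path names):

```shell
# Statistics for the top layer only; =romclass restricts the output to classes.
java -Xshareclasses:name=appcache,cacheDir=/opt/scc,printTopLayerStats=romclass

# Statistics across all layers of the cache.
java -Xshareclasses:name=appcache,cacheDir=/opt/scc,printStats

# Destroy just the top layer...
java -Xshareclasses:name=appcache,cacheDir=/opt/scc,destroy
# ...or the whole layer stack.
java -Xshareclasses:name=appcache,cacheDir=/opt/scc,destroyAllLayers
```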

Implementation Details

When creating a new SCC layer, the JVM stores a unique ID and the status of all the lower layers into the new layer. Every time the top layer is started, the JVM verifies that there have been no modifications to the lower layers. Since any modification to a lower layer cache invalidates all the layers above it, you can set the shared cache files’ permissions to read-only to prevent this from happening.

Multi-layer SCCs also allow cross-layer data references. Because a cache knows about all its lower layers, but not its higher layers, data in a higher layer can reference data in the lower layers, but not vice versa. This is one of the reasons why modifications to lower layers are not allowed.


In conclusion, the new multi-layer SCC feature in OpenJ9 eliminates some of the complexity that comes with distributing SCCs in Docker images and eliminates unnecessary duplication of data and empty space in the SCCs in your Docker images. Your Docker containers will start faster and use less memory immediately, wherever they are deployed.

If you still have questions, please see the following mini-FAQ.

If you’re feeling inspired and want to experiment with multi-layer SCCs in your own Docker images, please see the caveats section at the end of this blog post.

Mini FAQ

Q: Isn’t it easier to include a single-layer SCC in my application image instead of in a base image?

A: Yes, this solution can work if you also author your base images and if your base images are not intended to run standalone. If, however, you depend on base images authored by someone else and/or if those images can run standalone, introducing single-layer SCCs into the picture will inevitably lead to larger than necessary images. Multi-layer SCCs, on the other hand, mirror Docker’s own layered image structure and consequently work better than single-layer SCCs in the Docker architecture.

Q: I want to include an SCC in my Docker images, how exactly do I fill it?

A: If your image contains an executable program like an application server or a complete application, run your program, or at least start it and stop it one or more times at image build-time. If your image doesn’t contain anything executable (for example, if you’re distributing a library) you should consider letting the users of your image deal with SCCs.

Things To Remember

By distributing an SCC in your Docker images you’ll be distributing Ahead-of-time (AOT) compiled machine code. (OpenJ9 refers to compiled code stored in the SCC as AOT compiled code, which is different from Just-in-time (JIT) compiled code that is created each time the JVM runs and thrown away once the application terminates.) Unlike Java bytecode, machine code is not “build once, run anywhere.” This means that you should be aware of several things:

  1. The AOT compiled code created on one machine architecture will not be usable on a machine with a different architecture. If you create your SCC on an amd64 machine, the AOT compiled code saved to the SCC won’t run on an aarch64 or ppc64 machine, or even an i386 machine.
  2. AOT compiled code created on one operating system (OS) will not be usable on another.
  3. The AOT compiled code produced by OpenJ9 is currently very tailored to the processor running the JVM and may not be backward compatible with older processors.
  4. If OpenJ9 determines that the AOT compiled code in the SCC is not compatible with the architecture, OS, or processor executing the JVM the AOT compiled code won’t be used, but most of the rest of the data in the SCC will still be used.

In practice, Docker images are typically specific to architecture and OS anyway, so if you follow this convention all you have to worry about is point #3. A future version of OpenJ9 will address this particular issue, but for the time being you should create your Docker images on the oldest generation of processor you intend to support for best results.

If your Docker images are not specific to processor architecture and OS you can tell OpenJ9 to not include AOT compiled code in the SCC by using the -Xnoaot option. You’ll sacrifice some start-up improvement for this, but should still come out ahead vs. not using an SCC at all.
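For example (placeholder names again), the build-time warm-up run described earlier would simply add -Xnoaot:

```shell
# Populate the SCC with classes only, no AOT-compiled machine code, so the
# cache stays portable across processors and operating systems.
java -Xnoaot -Xshareclasses:name=appcache,cacheDir=/opt/scc -jar /app/app.jar
```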

