There has been a lot of talk about Java not behaving properly when running in a Docker container. Many blogs like this one cover the problems that Java faces in containers. Although those discussions target OpenJDK builds that use the HotSpot JVM, the issues mostly apply to Eclipse OpenJ9 as well.
The root cause of these problems is that the JVM does not understand the resource limits imposed by the container. That is no surprise: JVMs were around long before containers entered the mainstream. Moreover, the standard mechanisms the JVM uses to determine resource (memory and CPU) limits, such as sysconf(), the proc filesystem, and sched_getaffinity(), do not take container-imposed limits into account.
Let’s quickly look at the problems OpenJ9 faces when running in a docker container.
Problem (1): Java heap exceeding container memory limit
OpenJ9 uses the amount of physical memory in the system to determine its default heap limit, even when running in a container. Therefore, the size of the heap can exceed the container memory limit. When the JVM tries to touch a page of memory beyond the container limit, it gets killed by the OOM killer.
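A simple way to observe this is to compare the physical memory the JVM sees with the default maximum heap it chooses. The sketch below assumes the com.sun.management extension is available; run inside a memory-limited container without container support, the physical memory it reports is the host's, and the default heap is derived from that value:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.OperatingSystemMXBean;

public class HeapVsHostMemory {
    public static void main(String[] args) {
        // The com.sun.management extension reports the machine's physical memory
        OperatingSystemMXBean os =
                (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        long physical = os.getTotalPhysicalMemorySize(); // host RAM, not the container limit
        long maxHeap = Runtime.getRuntime().maxMemory(); // default heap derived from host RAM

        System.out.printf("Physical memory seen by the JVM: %d MB%n", physical / (1024 * 1024));
        System.out.printf("Default maximum heap:            %d MB%n", maxHeap / (1024 * 1024));
    }
}
```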
Problem (2): JIT scratch space exceeding container memory limit
A similar issue exists for the JIT scratch space, a transient block of memory the JIT uses during compilations. This problem is different from the heap limit because the JIT does not reserve the scratch space up front. Instead, the JIT calculates what it can use by checking the free physical memory through the proc filesystem, but the proc filesystem reflects the host's memory usage, not the container's. Therefore, the JIT's estimate of free physical memory may exceed the memory actually available to the container. Again, a recipe for getting killed by the OOM killer.
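To see the mismatch, a small sketch can compare what /proc/meminfo reports with the container's own memory limit. The cgroup path below assumes cgroup v1; on cgroup v2 the limit is exposed in /sys/fs/cgroup/memory.max instead:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class FreeMemoryViews {
    public static void main(String[] args) throws Exception {
        // /proc/meminfo describes the host, not the container
        List<String> meminfo = Files.readAllLines(Paths.get("/proc/meminfo"));
        String memAvailable = meminfo.stream()
                .filter(line -> line.startsWith("MemAvailable"))
                .findFirst()
                .orElse("MemAvailable: (not reported)");

        // cgroup v1 location of the container's memory limit
        String containerLimit = Files.readAllLines(
                Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes")).get(0).trim();

        System.out.println("Host view     : " + memAvailable);
        System.out.println("Container cap : " + containerLimit + " bytes");
    }
}
```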
Problem (3): Conservative heap size in container
If you are running on bare metal or a hypervisor, OpenJ9 (Java 9 and later) sets its heap size based on the values in the following table:
| Physical Memory (P) | Heap Size (H) |
|---------------------|---------------|
| <= 1G               | P/2           |
| 1G – 2G             | 512M          |
| >= 2G               | P/4           |
These settings make sense on bare metal or a hypervisor, where the JVM is likely to be sharing memory with a myriad of other processes. Containers, and Docker containers in particular, generally follow the "single concern per container" design, so the JVM is likely to be the main consumer of memory. If the JVM uses just a quarter of the memory for its heap when the container memory limit is, say, 4G, it is under-utilizing the memory at its disposal.
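Purely as an illustration of the table above (a sketch, not OpenJ9's actual implementation), the default sizing amounts to roughly the following; with 4G of memory, only 1G goes to the heap by default:

```java
public class LegacyDefaultHeap {
    static final long M = 1024L * 1024, G = 1024 * M;

    // Sketch of the default policy in the table above; not OpenJ9's actual code.
    static long defaultHeap(long physicalMemory) {
        if (physicalMemory <= G) return physicalMemory / 2; // <= 1G   -> P/2
        if (physicalMemory < 2 * G) return 512 * M;         // 1G - 2G -> 512M
        return physicalMemory / 4;                          // >= 2G   -> P/4
    }

    public static void main(String[] args) {
        // With 4G of memory, only a quarter (1024 MB) goes to the heap by default.
        System.out.println(defaultHeap(4 * G) / M + " MB");
    }
}
```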
Problem (4): Overriding maximum heap size
The previous problem described the under-utilization of container memory. This could be fixed easily if the JVM raised its default heap size to a higher fraction of the container limit. The default can be overridden with the -Xmx option, but that is a static value that may not suit a different container the application is deployed in. The user would have to adjust the -Xmx value whenever the memory limit of the container changes, which may be trivial but is still an annoying task!
One workaround is to use scripts like the one in this pull request, which uses an environment variable to determine the fraction of container memory to use and internally translates it to an appropriate -Xmx value. fabric8io's Java image uses a script with a similar mechanism.
These mechanisms work, but this kind of functionality should ideally be provided by the JVM itself rather than relying on developers to write such scripts.
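The core of such wrapper scripts boils down to reading the container's memory limit and a ratio, and emitting an -Xmx value. Here is a minimal sketch of that logic in Java; the environment variable name JVM_HEAP_RATIO is hypothetical (the pull request and the fabric8io image use their own names), and the cgroup path assumes cgroup v1:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class XmxFromContainerLimit {
    public static void main(String[] args) throws Exception {
        // Hypothetical variable name for the fraction of container memory to use as heap
        int percent = Integer.parseInt(System.getenv().getOrDefault("JVM_HEAP_RATIO", "50"));

        // cgroup v1 location of the container's memory limit
        long limitBytes = Long.parseLong(Files.readAllLines(
                Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes")).get(0).trim());

        long xmxMb = (limitBytes / 100 * percent) / (1024 * 1024);

        // A wrapper script could capture this output and append it to the java command line
        System.out.println("-Xmx" + xmxMb + "m");
    }
}
```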
Problem (5): Behavior of Runtime.availableProcessors() and JVMTI API GetAvailableProcessors()
Containers can limit the amount of CPU available to a process through multiple mechanisms (in the case of the CFS scheduler):
1) Using cpuset to limit CPU cores that a process can use
2) Using quota and period to limit the amount of CPU cycles that a process can use in a given period
3) Using shares to specify the relative share of CPU available to the process
Currently, Runtime.availableProcessors() takes into account cpuset but not the other two mechanisms. GetAvailableProcessors() does not consider any of these mechanisms when computing available processors.
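A quick way to see the gap is to compare Runtime.availableProcessors() with the CFS quota the container actually has. The sketch below assumes cgroup v1 paths (cgroup v2 exposes both values in /sys/fs/cgroup/cpu.max instead):

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class CpuLimitViews {
    public static void main(String[] args) throws Exception {
        System.out.println("availableProcessors(): " + Runtime.getRuntime().availableProcessors());

        // cgroup v1 CFS files; a quota of -1 means "no limit"
        long quota = readLong("/sys/fs/cgroup/cpu/cpu.cfs_quota_us");
        long period = readLong("/sys/fs/cgroup/cpu/cpu.cfs_period_us");

        if (quota > 0) {
            // e.g. docker run --cpus=1.5 gives quota/period = 1.5
            System.out.println("CFS quota-based CPUs : " + (double) quota / period);
        } else {
            System.out.println("No CFS quota set for this container");
        }
    }

    static long readLong(String path) throws Exception {
        return Long.parseLong(Files.readAllLines(Paths.get(path)).get(0).trim());
    }
}
```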
Problem (6): Number of GC and JIT threads is not based on the CPU limits of the container
OpenJ9 spawns GC helper threads based on the number of CPUs in the system. It takes cpuset into account, but not the other two mechanisms for limiting the amount of CPU available.
The same holds true for JIT compilation threads: OpenJ9 spawns 7 JIT compilation threads, but not all of them are active. The JIT activates threads based on the compilation load; however, the number of active threads never exceeds the number of CPUs in the system, which, again, considers cpuset but not the other two mechanisms.
Over to solutions
Our list covers most of the prevalent problems that OpenJ9 faces. Now let’s see what we’re doing to address these.
Solution (1): Making OpenJ9 aware of container memory limits
OpenJ9 is now aware of the memory limit imposed by the container and adjusts its heap size accordingly. When the container limit is sufficiently large, OpenJ9 increases the percentage of memory to use for the heap. These new settings are shown in the following table:
| Container Memory Limit (P) | Heap Size (H) |
|----------------------------|---------------|
| <= 1G                      | P/2           |
| 1G < P < 2G                | P - 512M      |
| >= 2G                      | 3*P/4         |
These changes address problems (1) and (3) in our list.
At the time of writing, these changes are available in the 0.9.0 release of OpenJ9.
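Purely as an illustration of the new table (a sketch, not OpenJ9's actual implementation), the container-aware sizing amounts to roughly the following; a container limited to 4G now gets a 3G default heap rather than 1G:

```java
public class ContainerAwareDefaultHeap {
    static final long M = 1024L * 1024, G = 1024 * M;

    // Sketch of the container-aware policy in the table above; not OpenJ9's actual code.
    static long defaultHeap(long containerLimit) {
        if (containerLimit <= G) return containerLimit / 2;           // <= 1G       -> P/2
        if (containerLimit < 2 * G) return containerLimit - 512 * M;  // 1G < P < 2G -> P - 512M
        return 3 * containerLimit / 4;                                // >= 2G       -> 3*P/4
    }

    public static void main(String[] args) {
        // A 4G container limit now yields a 3072 MB default heap instead of 1024 MB.
        System.out.println(defaultHeap(4 * G) / M + " MB");
    }
}
```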
Solution (2): Allowing OpenJ9 to set percentage memory limits
In addition to changing the maximum heap size, OpenJ9 now supports the option -XX:MaxRAMPercentage. This option lets a user set the maximum heap size as a percentage of the container memory limit, which is clearly a better approach than using a static value with the -Xmx option. So if you want to use 80% of container memory as heap, irrespective of what the actual memory limit is, you can set -XX:MaxRAMPercentage=80.
There is also another new option, -XX:InitialRAMPercentage, to set the initial heap size based on the container memory limit. These two new options take care of problem (4) in our list. They are the same options HotSpot added in Java 10, but with OpenJ9 you can use them in Java 8 as well.
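As a quick sanity check, a tiny program that prints the maximum heap can be run inside a memory-limited container with the new option; for example, running it with -XX:+UseContainerSupport -XX:MaxRAMPercentage=80 in a container limited to 1G should report roughly 800 MB (the exact docker run command and image depend on your setup):

```java
public class PrintMaxHeap {
    public static void main(String[] args) {
        // Reflects whatever -Xmx or -XX:MaxRAMPercentage resolved to at startup
        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.println("Maximum heap: " + maxHeap / (1024 * 1024) + " MB");
    }
}
```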
These changes are available in the 0.9.0 release of OpenJ9.
Solution (3): Making the JIT aware of available memory
The OpenJ9 community is also working on tuning the JIT to estimate the free memory both in the container and on the host, and to use the minimum of the two values to determine the size of the JIT scratch space.
Why do we take the minimum of the two values? Why not consider free memory in the container alone?
Remember that setting a memory limit on a container does not reserve that memory for the container; it only restricts how much the container can use.
Consider a container with a 2G memory limit whose current memory usage is 1G. The container can still use 1G more memory. However, it is possible that the free memory on the host is less than 1G. If the JIT made its decision based solely on free memory in the container, it could cause an out-of-memory situation on the host. Picking the minimum of the two values avoids this.
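Here is a sketch of that calculation, assuming cgroup v1 files for the container's limit and usage and /proc/meminfo for the host's view (an illustration of the idea, not the JIT's actual code):

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class ScratchSpaceEstimate {
    public static void main(String[] args) throws Exception {
        // Free memory as the container sees it: limit minus current usage (cgroup v1 paths)
        long limit = readLong("/sys/fs/cgroup/memory/memory.limit_in_bytes");
        long usage = readLong("/sys/fs/cgroup/memory/memory.usage_in_bytes");
        long containerFree = limit - usage;

        // Free memory as the host sees it, from /proc/meminfo (value is in kB)
        long hostFreeKb = Files.readAllLines(Paths.get("/proc/meminfo")).stream()
                .filter(line -> line.startsWith("MemAvailable"))
                .map(line -> line.replaceAll("\\D+", ""))
                .mapToLong(Long::parseLong)
                .findFirst().orElse(0);
        long hostFree = hostFreeKb * 1024;

        // Taking the minimum avoids over-committing either the container or the host
        System.out.println("Usable memory estimate: " + Math.min(containerFree, hostFree) + " bytes");
    }

    static long readLong(String path) throws Exception {
        return Long.parseLong(Files.readAllLines(Paths.get(path)).get(0).trim());
    }
}
```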
This GitHub issue is being worked on; when complete, it will resolve problem (2) in our list.
Solution (4): Making OpenJ9 aware of available CPUs
Problems (5) and (6) are closely related because the underlying mechanism for detecting the CPUs available in the system is the same. OpenJ9 has now been updated to take the container's CPU quota into account when determining the number of available CPUs. That means the number of GC helper threads and active JIT threads is now calculated based on the CPU quota. Similarly, Runtime.availableProcessors() and GetAvailableProcessors() return values based on the CPU quota of the container.
These changes are available in the 0.9.0 release of OpenJ9.
Note that all of the above changes in OpenJ9 are currently protected by the option -XX:+UseContainerSupport. The options -XX:MaxRAMPercentage and -XX:InitialRAMPercentage can be used independently of -XX:+UseContainerSupport, but adjusting the heap size based on the container memory limit requires -XX:+UseContainerSupport. Going forward, expect -XX:+UseContainerSupport to be enabled by default.
Docker Image
Although container support is not enabled by default, the Dockerfiles used to create OpenJ9 docker images set -XX:+UseContainerSupport. So if you are using an OpenJ9 docker image, you don't need to set -XX:+UseContainerSupport explicitly.
HotSpot made similar improvements for Docker containers in the JDK 10 release. But if you want these features in Java 8, you can try OpenJ9-based builds. At the time of writing, the completed features are available in the Java 8 release build, which is currently based on OpenJ9 version 0.9.0.
As work progresses, this blog may be modified to reflect new changes in the OpenJ9 VM.
If you have any questions or concerns, please post them in our OpenJ9 slack workspace. If you find any problems with OpenJ9, please raise an issue on GitHub.