Containers are increasingly becoming the preferred way of deploying applications. They facilitate consistency between development and operations, allow for an easy way to build layers for reuse, and make it very easy for users to try out an application without having to deal with setting up all the prerequisites.
However, just because applications have moved to containers doesn’t mean that we can forget about performance; it remains as important as ever. OpenJ9 offers better startup and footprint performance than HotSpot. Let’s put that to the test by seeing how it performs in Docker, measuring the performance of Jenkins, an open source automation server that is conveniently available as Docker images.
First, let’s grab the Docker images:
$ docker pull docker.io/dsouzai/jenkins:jenkins_hotspot_initialized
$ docker pull docker.io/dsouzai/jenkins:jenkins_hotspot_initialized_cds
$ docker pull docker.io/dsouzai/jenkins:jenkins_openj9_initialized
$ docker pull docker.io/dsouzai/jenkins:jenkins_openj9_initialized_scc
I didn’t use the docker images from Docker Hub, but instead built my own (for reasons described further below). All of these images have Jenkins initialized; by this I mean the default plugins were installed and a dummy Jenkins admin user was created (username:
- jenkins_hotspot_initialized: Jenkins running on OpenJDK8 with HotSpot.
- jenkins_hotspot_initialized_cds: Jenkins running on OpenJDK8 with HotSpot, using the Class Data Sharing (CDS) archive generated from java -Xshare:dump.
- jenkins_openj9_initialized: Jenkins running on OpenJDK8 with OpenJ9.
- jenkins_openj9_initialized_scc: Jenkins running on OpenJDK8 with OpenJ9, with a Shared Class Cache (SCC) containing Ahead of Time (AOT) compiled code.
To run any of these images:
$ docker run -u root -p 8080:8080 --cpuset-cpus="0-3" --cpuset-mems="0" -d <image>
The --cpuset-cpus and --cpuset-mems flags are used to ensure we only run on 4 CPUs, to normalize the comparison. Startup was then measured as the time between when the container was created and when the “Jenkins is fully up and running” text was output by Jenkins (via docker logs -t). Using the time when the container was created is really the only starting point I could use, because the Docker image is set up to launch Jenkins on creation, i.e.:
$ docker inspect <container>
...
"Entrypoint": [
    "/sbin/tini",
    "--",
    "/usr/local/bin/jenkins.sh"
],
...
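The arithmetic can be sketched as a small helper that takes the two timestamps; note this is illustrative, not my actual measurement script, and it assumes GNU date (for the `%N` format):

```shell
# Milliseconds between two RFC 3339 timestamps (GNU date assumed).
startup_ms() {
  echo $(( $(date -d "$2" +%s%3N) - $(date -d "$1" +%s%3N) ))
}

# In practice the two timestamps would come from Docker, e.g.:
#   created=$(docker inspect --format '{{.Created}}' <container>)
#   ready=$(docker logs -t <container> 2>&1 | grep -m1 'fully up and running' | cut -d' ' -f1)
#   startup_ms "$created" "$ready"
```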
Footprint was measured by reading the MEM USAGE value output by docker stats. Now while this seems to be a reasonable way of measuring container footprint, ideally one would measure footprint as the sum of:
- The resident set size (RSS) of the java process, from ps -orss --no-headers --pid <java pid>.
- The difference in the number of Huge Pages used (from /proc/meminfo) before and after Jenkins startup.
However, because, as mentioned above, Jenkins is launched immediately when the container is created, there isn’t an easy way of reading the Huge Pages count before the java process starts. So, to reduce the risk of reporting a wrong footprint value, as well as to keep things simple, I opted to use the information provided by docker stats. That said, if anyone out there knows of a better way of measuring footprint in this scenario, feel free to contact me.
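For comparing numbers across runs, the MEM USAGE string can be normalized to KiB; the helper below is a sketch of mine, not part of any measurement scripts mentioned above:

```shell
# Convert a docker stats memory figure such as "512MiB" or "1.5GiB" to KiB.
mem_usage_kib() {
  printf '%s\n' "$1" | awk '
    /KiB$/ { sub(/KiB$/, "", $1); printf "%d\n", $1;           exit }
    /MiB$/ { sub(/MiB$/, "", $1); printf "%d\n", $1 * 1024;    exit }
    /GiB$/ { sub(/GiB$/, "", $1); printf "%d\n", $1 * 1048576; exit }'
}

# The raw figure can be pulled from a running container, e.g.:
#   mem_usage_kib "$(docker stats --no-stream --format '{{.MemUsage}}' <container> | cut -d' ' -f1)"
```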
So, without further ado:
(Table: Average Startup Time (ms) and Average Footprint (KiB) for each image.)
These results were collected on a Haswell machine (Intel(R) Xeon(R) CPU E7-8867 v3 @ 2.50GHz).
Even though the Docker image size is bigger (partially because of the embedded SCC):
$ docker images
REPOSITORY        TAG                               IMAGE ID       CREATED              SIZE
dsouzai/jenkins   jenkins_openj9_initialized_scc    116a25cbdcaf   5 minutes ago        860 MB
dsouzai/jenkins   jenkins_openj9_initialized        f45cb1bf841e   2 hours ago          774 MB
dsouzai/jenkins   jenkins_hotspot_initialized       0862e90b58ce   About an hour ago    780 MB
dsouzai/jenkins   jenkins_hotspot_initialized_cds   d3325b072a32   About a minute ago   802 MB
jenkins_openj9_initialized_scc makes up for it by having a lower runtime footprint as well as faster startup; a benefit that comes from the primed SCC that’s embedded in the image. To learn more about the SCC, see this doc; to learn more about AOT, see this blog series.
I should mention an important caveat: in order to ensure that the JVM can use the AOT code in the embedded SCC, the image has to be run on a machine with a compatible processor; otherwise, the JVM will refuse to load the code. I’ve successfully run the image on a Haswell; YMMV with older processors.
With all that said, I welcome others to try out these images to see the benefits of OpenJ9’s AOT and SCC technology.
Now, there may be those of you who are interested in how the images I measured were created in the first place (and why). The rest of this post goes over how and why I built these images.
The Jenkins Docker image declares the Jenkins home directory as a volume. This means that unless the -v flag is specified when starting the container (to bind a directory on the host machine), any plugins you install will not persist when you commit the container; conversely, with the -v flag, the plugins persist on the host machine rather than in the image. However, I wanted the plugins to persist in the image.
To address this behaviour, I had to build the Jenkins Docker image manually. I forked the jenkinsci/docker git repo and changed it to not make the Jenkins home a volume, as well as to grab the latest Jenkins war file. You can find my changes here.
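The core of that change is small: the upstream Dockerfile declares the Jenkins home directory as a volume, so the fix amounts to removing that declaration. Sketched as a diff (illustrative; the actual commit is in the fork linked above):

```diff
-VOLUME /var/jenkins_home
```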
To build the base Jenkins image:
$ git clone git@github.com:dsouzai/jenkins_docker.git
$ cd jenkins_docker
$ git checkout hotspot
$ docker build -t dsouzai/jenkins:jenkins_hotspot --file Dockerfile .
However, this sets up the Jenkins Docker image to use OpenJDK8 with HotSpot. Next, I had to build an image that uses OpenJDK8 with OpenJ9. I also had to build the base AdoptOpenJDK OpenJ9 layer myself, because the latest OpenJ9 release does not contain a fix relevant to this blog post. You can find the necessary changes for OpenJ9 here and here.
To build the base Jenkins image to use OpenJDK8 with OpenJ9:
# Build OpenJ9 Base Layer
$ cd ..
$ git clone git@github.com:dsouzai/openjdk-docker.git
$ cd openjdk-docker
$ git checkout jenkins_openj9
$ docker build -t dsouzai/jenkins:openjdk8-openj9-nightly --file 8/jdk/ubuntu/Dockerfile.openj9.nightly.full .

# Build Jenkins with OpenJ9
$ cd ../jenkins_docker
$ git checkout openj9
$ docker build -t dsouzai/jenkins:jenkins_openj9 --file Dockerfile .
Now that I had the base images, I could create the images wherein Jenkins is initialized by completing the setup and installing the default plugins. First, I ran the container:
$ docker run -u root -p 8080:8080 --cpuset-cpus="0-3" --cpuset-mems="0" -d dsouzai/jenkins:jenkins_hotspot
Next, I headed over to <hostname>:8080 in my browser. Jenkins asks for a password, which can be retrieved via:
$ docker exec <container> cat /var/jenkins_home/secrets/initialAdminPassword
Finally, I installed the default plugins, created a dummy Admin User, and finished the remainder of the setup. With this done, I committed the container to create the next layer:
$ docker commit <container> dsouzai/jenkins:jenkins_hotspot_initialized
I performed the same steps above with the jenkins_openj9 image to create the jenkins_openj9_initialized image. At this point I was done with the plain OpenJ9 and HotSpot images. However, I also wanted to create an OpenJ9 image primed with an SCC, to see how Jenkins could benefit from AOT compiled code.
Therefore, before shutting down jenkins_openj9_initialized, I edited /usr/local/bin/jenkins.sh and added -Xshareclasses:name=jenkins_scc,enableBCI -Xscmx80M to the java command. Then, I exited the container and committed it:
$ docker commit <container> dsouzai/jenkins:jenkins_openj9_initialized_scc_tmp
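For reference, the jenkins.sh edit could be scripted rather than done by hand; the sketch below assumes the launch script starts Jenkins via "exec java ", which is an assumption about jenkins.sh on my part, not something taken from the image:

```shell
# Insert the SCC options after the java command in a launch script
# (prints the edited script; assumes the script contains "exec java ").
add_scc_opts() {
  sed 's|exec java |exec java -Xshareclasses:name=jenkins_scc,enableBCI -Xscmx80M |' "$1"
}

# e.g. inside the container:
#   add_scc_opts /usr/local/bin/jenkins.sh > /tmp/jenkins.sh \
#     && mv /tmp/jenkins.sh /usr/local/bin/jenkins.sh
```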
Next, I stopped the running container and created a new one based off jenkins_openj9_initialized_scc_tmp. Once Jenkins started up, all I had to do was commit this container to create an image with the SCC “baked” into it. However, since these containers don’t persist, I figured I might as well make the cache more effective before committing.
I opened Jenkins in my browser, logged in, and navigated around the various pages. I did this to further populate the cache (simulating what a user might do on a first run, when configuring their Jenkins instance). Finally, I edited /usr/local/bin/jenkins.sh once again, this time adding ,readonly to the -Xshareclasses option. This ensured that we don’t add new AOT code to the SCC, as that wouldn’t provide any benefit to the container. With this done, I exited the container and committed it to create the final jenkins_openj9_initialized_scc image.
Finally, I built the jenkins_hotspot_initialized_cds image:

$ docker run -u root -p 8080:8080 --cpuset-cpus="0-3" --cpuset-mems="0" -d dsouzai/jenkins:jenkins_hotspot_initialized
$ docker exec -it <container> bash
<in container> $ java -Xshare:dump
<in container> $ exit
$ docker commit <container> dsouzai/jenkins:jenkins_hotspot_initialized_cds