Ever tried to debug Java issues in Docker containers? This article describes a mechanism in Eclipse OpenJ9 that makes it much easier to obtain debug information from Java applications running inside Docker containers.
A running OpenJ9 JVM includes mechanisms for producing different types of diagnostic data when events of interest occur. In general, this data is produced by default, but its production can also be controlled with command-line JVM options such as -Xdump at JVM startup, or dynamically through the com.ibm.jvm.Dump API. However, it is not easy to update the Java command line in Docker images. It would be far easier if we could connect to the running Docker container of interest and dynamically update the parameters for collecting the right data, including setting the right -Xdump options. Is that possible in a running container? The answer at this time is a qualified yes, so let us look at the details.
Can an MXBean do it?
An MXBean makes it much easier to connect to and monitor remote applications than the Dump API. MXBeans can also be used in jconsole (or any other monitoring tool or admin console) to dynamically configure diagnostic options while monitoring the application, without having to restart it.
A new MXBean, OpenJ9DiagnosticsMXBean, has been implemented. This MXBean allows a user to dynamically configure the dump options (similar to what is passed at JVM startup using -Xdump) on a remote Java application running inside a container or on a host, and to trigger dump agents, without having to restart the application.
Prior to this feature, to obtain diagnostic information (javacores, snap traces, etc.) we had to restart the application with the required -Xdump options on the command line.
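For example, a restart might have looked like the following (a sketch; the application jar name is illustrative, and the exact agent and event depend on the problem being diagnosed):

```
java -Xdump:java:events=systhrow,filter=java/lang/OutOfMemoryError -jar app.jar
```

Rebuilding the image or editing the deployment just to change such an option is exactly the friction the MXBean removes.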
Using the OpenJ9DiagnosticsMXBean, the user can now dynamically specify these dump options without restarting the application, for all dump events except catch and throw. Refer to the Dump events section for the list of events, and to the apidoc for more details on OpenJ9DiagnosticsMXBean.
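The bean can also be driven programmatically over plain JMX, without OpenJ9 classes on the client classpath. Below is a minimal sketch; the ObjectName string is an assumption based on where the bean appears in jconsole (under openj9.lang.management), so verify it against your JVM before relying on it:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class DumpConfig {
    // Assumed ObjectName for the diagnostics bean; it appears under the
    // openj9.lang.management node in jconsole (verify on your JVM).
    static final String BEAN_NAME = "openj9.lang.management:type=OpenJ9Diagnostics";

    // Equivalent to editing the dumpOptions attribute in jconsole.
    // Works over any MBeanServerConnection, local or remote.
    static void setDumpOptions(MBeanServerConnection mbs, String options) throws Exception {
        mbs.invoke(new ObjectName(BEAN_NAME), "setDumpOptions",
                new Object[] { options }, new String[] { "java.lang.String" });
    }

    public static void main(String[] args) throws Exception {
        // On an OpenJ9 JVM you could target the local platform server, e.g.:
        // setDumpOptions(java.lang.management.ManagementFactory.getPlatformMBeanServer(),
        //                "java:events=thrstart");
        // Here we only print the canonical name the client would look up.
        System.out.println(new ObjectName(BEAN_NAME).getCanonicalName());
    }
}
```

The same MBeanServerConnection can come from a JMXConnector to a remote JVM, which is how a monitoring agent rather than a human at jconsole would use this.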
Example Use Case
- A Java application running in a Docker container in a cloud environment fails with repeated exceptions.
We used IBM Cloud Private (ICP) as our cloud of choice. However, the following steps apply to any Kubernetes-based Docker container orchestration system.
Using jconsole to monitor the remote application running on ICP
To demonstrate the usage of the MXBean, we created a Docker image of a HelloWorld servlet with the Open Liberty server and the AdoptOpenJDK openjdk8-openj9 nightly Docker build, and pushed it to hub.docker.com; you can find it here.
In the servlet, we simulated the following two events, on whose occurrence we would like to trigger the dump agents:
(1) Allocation of a 1 KB object
(2) Throw (and catch) of a java.io.UnsupportedEncodingException
Now, we need to deploy the Liberty application to the cloud. For the steps to configure the Liberty server for JMX communication with jconsole (the JMX client) and to deploy to ICP, refer to the README in the GitHub repo. Once deployed, we can use the jconsole JMX client to connect to the remote application and invoke the methods of OpenJ9DiagnosticsMXBean to configure the dump settings dynamically and to trigger dumps.
Connect to the remote Liberty application on ICP using jconsole as follows:
- To connect to the Liberty application using jconsole, we need restConnector.jar, which is in the clients folder of the Liberty package. You can download the Liberty package from here.
- Copy keystore.jks from the above GitHub URL to the directory from which jconsole will be launched.
- Launch jconsole using the below command (on Windows):
jconsole -J-Djava.class.path=%JAVA_HOME%/lib/jconsole.jar;%JAVA_HOME%/lib/tools.jar;%WLP_HOME%/clients/restConnector.jar -J-Djavax.net.ssl.trustStore=keystore.jks -J-Djavax.net.ssl.trustStorePassword=passw0rd -J-Djavax.net.ssl.trustStoreType=jks -J-Dcom.ibm.ws.jmx.connector.client.disableURLHostnameVerification=true
where WLP_HOME is the directory where the Liberty package containing restConnector.jar is extracted.
- In jconsole, specify the JMX URL as below in the Remote Process field, and enter the user name “admin” and password “admin” as specified in server.xml:
service:jmx:rest://<ICP server IP>:<node port>/IBMJMXConnectorREST
The node port, 32337, is specified in the deployment YAML; it can be modified to use any other port if required.
- Go to the MBeans tab and expand openj9.lang.management to find the OpenJ9DiagnosticsMXBean.
- Use the dumpOptions attribute (under Attributes) to dynamically set the dump options. Specify dump options that trigger a Java dump on allocation of a 1 KB object.
- Check the application logs from the Linux machine by issuing the below command:
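Assuming the standard -Xdump option syntax (minus the -Xdump: prefix), the attribute value might look like the following; check the -Xdump documentation for the exact allocation filter syntax:

```
java:events=allocation,filter=#1k
```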
kubectl logs -f -c <container> <pod name>
kubectl logs -f -c hello-mxbean-vol hello-mxbean-vol-ddcbb8688-2fj2n
- Use a dump option that produces a Java dump on the thrstart (thread start) event.
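Again assuming standard -Xdump syntax without the -Xdump: prefix, that option might be:

```
java:events=thrstart
```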
- Now, check the application logs. Dumps will be created on the shared persistent volume that was mounted at /var/log, as specified in the deployment YAML file.
- Invoke any of the other operations, such as triggerDumpToFile, to trigger any of the supported dump agents, passing the agent type (java, heap, snap, system) as the first parameter and the filename with the mounted drive path as the second parameter.
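For example, invoking triggerDumpToFile in the jconsole Operations view might look like the following (the filename is illustrative; /var/log is the mounted volume from the deployment YAML):

```
triggerDumpToFile("java", "/var/log/custom.javacore.txt")
```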
Note: To configure the dump options dynamically for dump agents triggered on catch and throw events, the application needs to be restarted with -Xdump:dynamic. Then set a dump option that fires when the required exception occurs, for example a Java dump when java.io.UnsupportedEncodingException is caught.
Since we simulated the UnsupportedEncodingException in the HelloWorld servlet's doGet(), access the application at https://<ICP server IP>:<Node Port>/HelloWorld/ for the exception event to occur. This should create the Java dump on the persistent volume.
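Assuming standard -Xdump syntax, where exception filters use slash-separated class names, such an option might look like:

```
java:events=catch,filter=java/io/UnsupportedEncodingException
```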
- A detailed demo with a walkthrough of the above steps is available here.
- Post any questions or comments to the OpenJ9 Slack workspace.