Did you know… that just by using OpenJ9 as your runtime, applications that rely on deserialization could gain a huge performance advantage?
Java serialization provides a way to easily convert a Java object to and from a sequence of bytes.
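To make the round trip concrete, here is a minimal sketch of serializing a value to bytes and deserializing it back; the class and method names are illustrative, not from the original post. The deserialization step is where the class descriptor in the stream must be resolved, which is the part the rest of this article is about.

```java
import java.io.*;

// Illustrative helper class (hypothetical name) for a serialization round trip.
public class SerializationDemo {
    // Serialize an object into a byte array.
    static byte[] toBytes(Object obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Deserialize a byte array back into an object; this is where the
    // class named by the stream's class descriptor gets loaded.
    static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Object copy = fromBytes(toBytes("hello"));
        System.out.println(copy.equals("hello")); // prints "true"
    }
}
```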
One step that occurs during the process of deserialization is loading the class specified by the class descriptor in the stream of bytes. This involves calling two methods that contribute to slow performance:
This is a private method that finds the latest user-defined class loader, or “LUDCL”, to use with Class.forName. It first acquires VM access, which means it is safe to examine VM data structures and walk the stack. It then walks the stack to find the most recent user-defined ClassLoader, which is an expensive operation.
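The idea behind that stack walk can be sketched in plain Java. This is not the JDK's actual implementation (which is a private, VM-level method); it is an approximation using the public StackWalker API, treating "user-defined" as any loader other than the bootstrap (null) and platform loaders.

```java
import java.util.Optional;

// Illustrative sketch (hypothetical class name): walk the stack and return
// the class loader of the most recent frame whose declaring class was
// loaded by a user-defined loader. Each call pays for a stack walk, which
// is why caching the result, as OpenJ9 does, helps.
public class LudclSketch {
    static ClassLoader latestUserDefinedLoader() {
        StackWalker walker = StackWalker.getInstance(StackWalker.Option.RETAIN_CLASS_REFERENCE);
        Optional<ClassLoader> loader = walker.walk(frames ->
            frames.map(f -> f.getDeclaringClass().getClassLoader())
                  // Skip bootstrap (null) and platform loaders.
                  .filter(cl -> cl != null && cl != ClassLoader.getPlatformClassLoader())
                  .findFirst());
        return loader.orElse(null);
    }

    public static void main(String[] args) {
        // Classes on the classpath are loaded by the application loader,
        // which counts as user-defined here.
        System.out.println(latestUserDefinedLoader() == LudclSketch.class.getClassLoader());
    }
}
```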
In OpenJ9 the following optimizations are at work:
- Class caching: create a `java.io.ClassCache` to reduce calls to `java.lang.Class.forName` for repeated lookups
- Cache the “LUDCL”: the loader can be safely cached while inside the `ObjectInputStream` class. If custom `readObject` methods are invoked during this process, the LUDCL will need to be refreshed.
- JIT replacing `ObjectInputStream.readObject`: to eliminate another LUDCL retrieval, the JIT will replace calls to `ObjectInputStream.readObject` with `ObjectInputStream.redirectedReadObject(ObjectInputStream iStream, Class caller)`. `redirectedReadObject` provides the LUDCL information through its `caller` argument, preventing extra LUDCL retrievals.
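The second bullet above mentions custom `readObject` methods forcing a refresh of the cached LUDCL. Here is a small, hypothetical example of such a class: the private `readObject` below is invoked reflectively by `ObjectInputStream` during deserialization, so user code runs in the middle of the process and any class lookups it triggers must see the correct loader.

```java
import java.io.*;

// Hypothetical serializable type with a custom readObject method.
public class Point implements Serializable {
    private static final long serialVersionUID = 1L;
    int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    // Invoked reflectively by ObjectInputStream during deserialization.
    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        // Any class lookups triggered here use the current LUDCL, which
        // is why a cached LUDCL must be refreshed around such calls.
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new Point(3, 4));
        }
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            Point p = (Point) ois.readObject();
            System.out.println(p.x + "," + p.y); // prints "3,4"
        }
    }
}
```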
Performance results for Java 8
How can I take advantage of this?
This deserialization goodness will be enabled by default for all Java versions starting with OpenJ9’s 0.18.0 release in January 2020.
In the meantime, to ensure you are making the most of OpenJ9’s performance advantage, you can enable the option with the `com.ibm.enableClassCaching` system property. See the OpenJ9 documentation for more details.
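Since it is a system property, enabling it is a one-flag change on the launch command; the jar name below is a placeholder for your own application.

```shell
# Enable OpenJ9's class caching for deserialization explicitly
# (myapp.jar is a hypothetical application jar).
java -Dcom.ibm.enableClassCaching=true -jar myapp.jar
```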