Eclipse OpenJ9 release 0.9.0 introduces a new garbage collection policy, gcpolicy:nogc, for Java 8 and later. It implements the behavior described in JDK Enhancement Proposal (JEP) 318: Epsilon: A No-Op Garbage Collector: a collector that handles only memory allocation and heap expansion and never actually reclaims any memory. The result is bounded allocation with minimal runtime overhead. Once the available Java heap is exhausted, a java.lang.OutOfMemoryError is thrown and the JVM shuts down. This mode eliminates GC pauses and most allocation overhead, and is expected to benefit “garbage-free” ultra-low-latency or short-lived applications, specialized testing, and last-drop GC performance work.
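To see the difference in behavior, consider a small program like the one below (the class name and allocation sizes are illustrative). Run it with, for example, java -Xgcpolicy:nogc -Xmx256m NoGcDemo: under a standard policy the short-lived garbage is reclaimed and the loop runs indefinitely, whereas under nogc the heap only grows until the OutOfMemoryError is thrown.

public class NoGcDemo {
    public static void main(String[] args) {
        long allocatedBytes = 0;
        // Each iteration allocates a buffer that becomes garbage immediately.
        // With a normal GC policy this loop runs indefinitely; with
        // -Xgcpolicy:nogc nothing is reclaimed, so the JVM throws
        // java.lang.OutOfMemoryError (and writes dumps by default) once the
        // cumulative allocation exceeds -Xmx, then shuts down.
        while (true) {
            byte[] garbage = new byte[1024 * 1024];
            garbage[0] = (byte) allocatedBytes; // touch the buffer so the allocation is not optimized away
            allocatedBytes += garbage.length;
        }
    }
}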
Usage and Precautions
- The nogc mode is activated only by the OpenJ9 option -Xgcpolicy:nogc or the HotSpot-compatible option -XX:+UseNoGC.
- Set the maximum and minimum heap sizes with -Xmx and -Xms.
- Use -Xminf (minimum percentage of heap to remain free, default: 30%), -Xmaxe (maximum expansion amount, default: 0, meaning unbounded), and -Xmine (minimum expansion amount, default: 1 MB) to tune heap expansion.
- Use -Xgc:tlhMaximumSize (default: 131072), -Xgc:tlhInitialSize (default: 2048), and -Xgc:tlhMinimumSize (default: 512) to tune the Thread Local Heap (TLH) and reduce heap allocation contention among mutator threads.
- All dump files (system, heap, java, snap) are generated on java/lang/OutOfMemoryError by default to provide information for diagnosing the cause. If you do not want dumps when the heap is exhausted, specify -Xdump:none:events=systhrow,filter=java/lang/OutOfMemoryError to turn off dumps on OOM.
- Use -verbose:gc to enable verbose garbage collection logging for details of the collection strategy and memory management.
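Putting these options together, a hypothetical invocation might look like the following (MyApp and the specific sizes are placeholders):

java -Xgcpolicy:nogc -Xms512m -Xmx4g -Xmine16m -Xmaxe1g -verbose:gc MyApp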
Sample verbose GC log for gcpolicy:nogc
<exclusive-start id="218" timestamp="2018-05-09T16:27:33.255" intervalms="10243.465">
<response-info timems="0.033" idlems="0.024" threads="1" lastid="0000000002648B00" lastname="Thread-7" />
</exclusive-start>
<af-start id="219" threadId="00000000027E1B60" totalBytesRequested="168" timestamp="2018-05-09T16:27:33.255" intervalms="10243.447" />
<cycle-start id="220" type="epsilon" contextid="0" timestamp="2018-05-09T16:27:33.255" intervalms="69764.940" />
<gc-start id="221" type="epsilon" contextid="220" timestamp="2018-05-09T16:27:33.255">
<mem-info id="222" free="0" total="5191237632" percent="0">
<mem type="tenure" free="0" total="5191237632" percent="0" />
</mem-info>
</gc-start>
<allocation-stats totalBytes="1556736928" >
<allocated-bytes non-tlh="0" tlh="1556736928" />
<largest-consumer threadName="Thread-7" threadId="0000000002648B00" bytes="779159832" />
</allocation-stats>
<heap-resize id="223" type="expand" space="tenure" amount="2224881664" count="1" timems="0.120" reason="satisfy allocation request" timestamp="2018-05-09T16:27:33.255" />
<gc-end id="224" type="epsilon" contextid="220" durationms="0.172" usertimems="0.000" systemtimems="0.000" timestamp="2018-05-09T16:27:33.256" activeThreads="1">
<mem-info id="225" free="2224750592" total="7416119296" percent="29">
<mem type="tenure" free="2224750592" total="7416119296" percent="29" />
</mem-info>
</gc-end>
<cycle-end id="226" type="epsilon" contextid="220" timestamp="2018-05-09T16:27:33.256" />
<allocation-satisfied id="227" threadId="00000000027E1200" bytesRequested="168" />
<af-end id="228" timestamp="2018-05-09T16:27:33.256" threadId="00000000027E1B60" success="true" />
<exclusive-end id="229" timestamp="2018-05-09T16:27:33.256" durationms="0.375" />
<exclusive-start id="230" timestamp="2018-05-09T16:27:48.403" intervalms="15147.591">
<response-info timems="0.036" idlems="0.027" threads="2" lastid="00000000027E1200" lastname="Thread-9" />
</exclusive-start>
<af-start id="231" threadId="0000000002649460" totalBytesRequested="168" timestamp="2018-05-09T16:27:48.403" intervalms="15147.604" />
<cycle-start id="232" type="epsilon" contextid="0" timestamp="2018-05-09T16:27:48.403" intervalms="84912.544" />
<gc-start id="233" type="epsilon" contextid="232" timestamp="2018-05-09T16:27:48.403">
<mem-info id="234" free="0" total="7416119296" percent="0">
<mem type="tenure" free="0" total="7416119296" percent="0" />
</mem-info>
</gc-start>
<allocation-stats totalBytes="2223953536" >
<allocated-bytes non-tlh="131144" tlh="2223822392" />
<largest-consumer threadName="main" threadId="0000000002558200" bytes="40883480" />
</allocation-stats>
<heap-resize id="235" type="expand" space="tenure" amount="3178364928" count="1" timems="0.279" reason="satisfy allocation request" timestamp="2018-05-09T16:27:48.403" />
<gc-end id="236" type="epsilon" contextid="232" durationms="0.345" usertimems="0.000" systemtimems="0.000" timestamp="2018-05-09T16:27:48.403" activeThreads="1">
<mem-info id="237" free="3178233856" total="10594484224" percent="29">
<mem type="tenure" free="3178233856" total="10594484224" percent="29" />
</mem-info>
</gc-end>
<cycle-end id="238" type="epsilon" contextid="232" timestamp="2018-05-09T16:27:48.403" />
<allocation-satisfied id="239" threadId="0000000002648B00" bytesRequested="168" />
<af-end id="240" timestamp="2018-05-09T16:27:48.403" threadId="0000000002649460" success="true" />
<exclusive-end id="241" timestamp="2018-05-09T16:27:48.403" durationms="0.562" />
- Users can also monitor the nogc collector through the Java Management Extensions (JMX) interfaces, for example with JConsole, as in the screenshots below, or programmatically, as sketched after them.
Sample JConsole screenshots of an application running under nogc mode
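The same heap information that JConsole displays can also be read in-process through the standard java.lang.management API. The following is a minimal sketch (the class name and polling interval are illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Illustrative monitoring loop: polls the heap MXBean that JConsole also reads over JMX.
public class HeapWatcher {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            // Under nogc, "used" only ever grows and "committed" expands until it reaches -Xmx.
            System.out.printf("heap used=%d MB committed=%d MB max=%d MB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
            Thread.sleep(5000); // poll every 5 seconds
        }
    }
}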
- Be cautious when using finalization, direct memory access, and soft, weak, or phantom references in an application running under nogc mode. Because nogc never reclaims memory, finalizers never run, reference objects are never cleared or enqueued, and the associated resources and memory are never released; prefer releasing resources explicitly, as sketched below.
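For example, code that relies on finalize() or a Cleaner to close native resources will leak them under nogc, because those actions are only triggered by collection. A try-with-resources block releases the resource deterministically instead (the class below is illustrative):

import java.io.FileInputStream;
import java.io.IOException;

// Illustrative: under nogc, finalizers and Cleaner actions never run because
// objects are never collected, so resources held that way are never released.
// Explicit release does not depend on the collector at all.
public class ExplicitRelease {
    public static void main(String[] args) throws IOException {
        String path = args.length > 0 ? args[0] : "example.txt"; // placeholder input file
        // try-with-resources closes the stream deterministically, whether or not a GC ever runs.
        try (FileInputStream in = new FileInputStream(path)) {
            System.out.println("first byte: " + in.read());
        }
    }
}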
Potential use cases
This is an unusual GC mode: it is not suitable for regular Java applications, but it is a good fit for some specific scenarios. JEP 318 explicitly lists six possible use cases for epsilon GC in its Motivation section, and they apply equally to gcpolicy:nogc. Those paragraphs are reproduced below for reference, with some parts adapted to match the OpenJ9 implementation.
Performance testing
Having a GC that does almost nothing is a useful tool for differential performance analysis of other, real GCs. A nogc mode helps to filter out GC-induced performance artifacts, such as GC worker scheduling, GC barrier costs, GC cycles triggered at unfortunate times, and locality changes. Moreover, there are latency artifacts that are not GC-induced (e.g. scheduling hiccups, compiler transition hiccups), and removing the GC-induced artifacts makes it easier to isolate them. For example, the nogc mode makes it possible to estimate the natural “background” latency baseline for low-latency GC work.
Memory pressure testing
For Java code testing, a way to establish a threshold on allocated memory is useful for asserting memory-pressure invariants. Today, we have to pick up the allocation data from MXBeans, or even resort to parsing GC logs. Having a GC that accepts only a bounded amount of allocation, and fails on heap exhaustion, simplifies such testing. For example, knowing that a test should allocate no more than 1 GB of memory, we can configure gcpolicy:nogc with -Xmx1g and let it fail with a heap dump if that constraint is violated.
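A minimal sketch of such a test (the class name, workload, and budget are illustrative): run it with java -Xgcpolicy:nogc -Xmx1g MemoryBudgetTest, and any run whose total allocation exceeds the budget fails with java.lang.OutOfMemoryError and the default dumps.

// Illustrative memory-pressure test: run with
//   java -Xgcpolicy:nogc -Xmx1g MemoryBudgetTest
// Because nogc never reclaims anything, total allocation beyond 1 GB makes
// the run fail with java.lang.OutOfMemoryError, turning the allocation
// budget into a hard assertion.
public class MemoryBudgetTest {
    public static void main(String[] args) {
        for (int i = 0; i < 100_000; i++) {
            processRecords(i);
        }
        System.out.println("workload stayed within its allocation budget");
    }

    // Stand-in for the code under test; allocates a small, short-lived buffer.
    private static void processRecords(int i) {
        byte[] buffer = new byte[8 * 1024];
        buffer[0] = (byte) i;
    }
}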
VM interface testing
For VM development purposes, having a simple GC helps to understand the absolute minimum required from the VM-GC interface to have a functional allocator. In nogc mode, almost nothing on the GC side of that interface needs to be implemented.
Extremely short-lived jobs
A short-lived job might rely on exiting quickly to free the resources (e.g. heap memory). In this case, accepting the GC cycle to futilely clean up the heap is a waste of time, because the heap would be freed on exit anyway. Note that the GC cycle might take a while, because it would depend on the amount of live data in the heap, which can be a lot.
Last-drop latency improvements
For ultra-latency-sensitive applications, where developers are conscious about memory allocations and know the application memory footprint exactly, or even have (almost) completely garbage-free applications, accepting a GC cycle might be a design issue. There are also cases where restarting the JVM – letting load balancers handle failover – is a better recovery strategy than accepting a GC cycle. In those applications, a long GC cycle may be considered the wrong thing to do, because it prolongs the detection of the failure and ultimately delays recovery.
Last-drop throughput improvements
Even for non-allocating workloads, the choice of GC can mean choosing the set of GC barriers that the workload has to use, even if no GC cycle actually happens. Avoiding these barriers can bring the last bit of throughput improvement.