Uncontrollable off-heap memory: a JVM source code analysis


I previously wrote an article about off-heap memory, "Full interpretation of off-heap memory in JVM source code analysis", which focuses on how DirectByteBuffer works. But today we ran into a strange problem: even with -XX:MaxDirectMemorySize=1G set, the memory attributed to all DirectByteBuffer objects added up to 7G, far beyond the threshold. This looked very odd, so we dug into the cause. Although it turned out in the end to be a problem with how we did the statistics, some other issues discovered along the way are worth sharing.

The DirectByteBuffer constructors that must be mentioned

Open the DirectByteBuffer class and you will find five constructors:

DirectByteBuffer(int cap);

DirectByteBuffer(long addr, int cap, Object ob);

private DirectByteBuffer(long addr, int cap);

protected DirectByteBuffer(int cap, long addr, FileDescriptor fd, Runnable unmapper);

DirectByteBuffer(DirectBuffer db, int mark, int pos, int lim, int cap, int off);

At the Java level we usually create DirectByteBuffer objects through ByteBuffer's allocateDirect method:

public static ByteBuffer allocateDirect(int capacity) {
        return new DirectByteBuffer(capacity);
}

That is, the first constructor listed above is used:

DirectByteBuffer(int cap) {                   // package-private

        super(-1, 0, cap, cap);
        boolean pa = VM.isDirectMemoryPageAligned();
        int ps = Bits.pageSize();
        long size = Math.max(1L, (long)cap + (pa ? ps : 0));
        Bits.reserveMemory(size, cap);

        long base = 0;
        try {
            base = unsafe.allocateMemory(size);
        } catch (OutOfMemoryError x) {
            Bits.unreserveMemory(size, cap);
            throw x;
        }
        unsafe.setMemory(base, size, (byte) 0);
        if (pa && (base % ps != 0)) {
            // Round up to page boundary
            address = base + ps - (base & (ps - 1));
        } else {
            address = base;
        }
        cleaner = Cleaner.create(this, new Deallocator(base, size, cap));
        att = null;
}
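The page-rounding arithmetic in the constructor can be checked with a small standalone example. The class and method names below are made up for illustration; the formula assumes, as the guard in the constructor ensures, that ps is a power of two and base is not already page aligned:

```java
public class PageAlign {
    // Mirrors `base + ps - (base & (ps - 1))` from the constructor above.
    // Assumes ps is a power of two and base is not already page aligned.
    static long alignUp(long base, long ps) {
        return base + ps - (base & (ps - 1));
    }

    public static void main(String[] args) {
        long ps = 4096;   // a typical page size
        long base = 8193; // one byte past a page boundary
        // 8193 & 4095 == 1, so the result is 8193 + 4096 - 1 == 12288,
        // i.e. the next page boundary (3 * 4096).
        System.out.println(alignUp(base, ps)); // prints 12288
    }
}
```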


The Bits.reserveMemory(size, cap) call in this constructor checks the off-heap memory threshold:

static void reserveMemory(long size, int cap) {
        synchronized (Bits.class) {
            if (!memoryLimitSet && VM.isBooted()) {
                maxMemory = VM.maxDirectMemory();
                memoryLimitSet = true;
            }
            // -XX:MaxDirectMemorySize limits the total capacity rather than the
            // actual memory usage, which will differ when buffers are page
            // aligned.
            if (cap <= maxMemory - totalCapacity) {
                reservedMemory += size;
                totalCapacity += cap;
                return;
            }
        }

        System.gc();
        try {
            Thread.sleep(100);
        } catch (InterruptedException x) {
            // Restore interrupt status
            Thread.currentThread().interrupt();
        }
        synchronized (Bits.class) {
            if (totalCapacity + cap > maxMemory)
                throw new OutOfMemoryError("Direct buffer memory");
            reservedMemory += size;
            totalCapacity += cap;
        }
}


Therefore, when the requested capacity would push the total over the threshold, a System.gc() is triggered and the reservation is retried; if it still does not fit, an OutOfMemoryError is thrown and the allocation fails. From all of this, as long as -XX:MaxDirectMemorySize=1G is set, the threshold should never be exceeded; at worst you would see GC being triggered repeatedly.
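The accounting logic can be sketched as a minimal stand-alone simulation. ReserveSim and tryReserve are illustrative names, not JDK code; the real implementation also synchronizes on Bits.class and sleeps briefly after the GC:

```java
public class ReserveSim {
    final long maxMemory;   // plays the role of -XX:MaxDirectMemorySize
    long totalCapacity;     // plays the role of Bits.totalCapacity

    public ReserveSim(long maxMemory) {
        this.maxMemory = maxMemory;
    }

    // Returns true if the reservation fits. In the JDK, a failed first
    // attempt triggers System.gc() and one retry before throwing OOM.
    public boolean tryReserve(long cap) {
        if (cap <= maxMemory - totalCapacity) {
            totalCapacity += cap;
            return true;
        }
        // (System.gc() and a short sleep happen here in the real code,
        // giving cleaners a chance to release dead buffers)
        if (totalCapacity + cap > maxMemory) {
            return false; // the JDK throws OutOfMemoryError("Direct buffer memory")
        }
        totalCapacity += cap;
        return true;
    }
}
```

With maxMemory of 1 GB, two 512 MB reservations succeed and any further one fails, which is why the sum of capacities reserved this way can never legitimately exceed the threshold.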

More on the other constructors

So in what situations are the other constructors mainly used?

We know that reclaiming a DirectByteBuffer's native memory relies on its cleaner field, but in several of the constructors the cleaner field is left null. How is the memory reclaimed in those cases?

Let's take a look at these two functions in DirectByteBuffer:

 public ByteBuffer slice() {
        int pos = this.position();
        int lim = this.limit();
        assert (pos <= lim);
        int rem = (pos <= lim ? lim - pos : 0);
        int off = (pos << 0);
        assert (off >= 0);
        return new DirectByteBuffer(this, -1, 0, rem, rem, off);
    }

    public ByteBuffer duplicate() {
        return new DirectByteBuffer(this,
                                    this.markValue(),
                                    this.position(),
                                    this.limit(),
                                    this.capacity(),
                                    0);
    }

From the names and implementations we can basically guess what they do: slice carves the remaining part of an existing buffer's memory out and points a new DirectByteBuffer object at it, while duplicate creates a new DirectByteBuffer that is a copy of the existing one, with all the pointers the same.

Therefore, from this implementation's point of view, the off-heap memory behind these objects is actually the same block, so if we simply add up the capacities of all DirectByteBuffer objects when gathering statistics, the result may be much larger than reality. This was exactly my problem: the threshold was set to 1G, yet the statistics showed 7G. The constructor used in these cases deliberately leaves cleaner null; reclamation depends on the original DirectByteBuffer object.
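The double counting is easy to reproduce (the class name DoubleCount is made up for illustration): one real 1 MB native allocation yields three DirectByteBuffer objects whose capacities naively sum to 3 MB:

```java
import java.nio.ByteBuffer;

public class DoubleCount {
    public static void main(String[] args) {
        // One real off-heap allocation of 1 MB...
        ByteBuffer base = ByteBuffer.allocateDirect(1 << 20);
        // ...but duplicate() and slice() create new DirectByteBuffer objects
        // (with cleaner == null) that point at the very same native memory.
        ByteBuffer dup = base.duplicate();
        ByteBuffer slice = base.slice();

        long naiveSum = (long) base.capacity() + dup.capacity() + slice.capacity();
        System.out.println("naive capacity sum : " + naiveSum);        // 3145728 (3 MB)
        System.out.println("real native memory : " + base.capacity()); // 1048576 (1 MB)
    }
}
```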

The forgotten check

But there is another situation, and it is the focus of this article. Inside the JVM, the DirectByteBuffer constructor below can be invoked through a JNI function:

private DirectByteBuffer(long addr, int cap) {
    super(-1, 0, cap, cap);
    address = addr;
    cleaner = null;
    att = null;
}

The JNI function that calls this constructor is jni_NewDirectByteBuffer:

extern "C" jobject JNICALL jni_NewDirectByteBuffer(JNIEnv *env, void* address, jlong capacity)
{
  // thread_from_jni_environment() will block if VM is gone.
  JavaThread* thread = JavaThread::thread_from_jni_environment(env);

#ifndef USDT2
  DTRACE_PROBE3(hotspot_jni, NewDirectByteBuffer__entry, env, address, capacity);
#else /* USDT2 */
  HOTSPOT_JNI_NEWDIRECTBYTEBUFFER_ENTRY(env, address, capacity);
#endif /* USDT2 */

  if (!directBufferSupportInitializeEnded) {
    if (!initializeDirectBufferSupport(env, thread)) {
#ifndef USDT2
      DTRACE_PROBE1(hotspot_jni, NewDirectByteBuffer__return, NULL);
#else /* USDT2 */
      HOTSPOT_JNI_NEWDIRECTBYTEBUFFER_RETURN(NULL);
#endif /* USDT2 */
      return NULL;
    }
  }

  // Being paranoid about accidental sign extension on address
  jlong addr = (jlong) ((uintptr_t) address);
  // NOTE that package-private DirectByteBuffer constructor currently
  // takes int capacity
  jint  cap  = (jint)  capacity;
  jobject ret = env->NewObject(directByteBufferClass, directByteBufferConstructor, addr, cap);
#ifndef USDT2
  DTRACE_PROBE1(hotspot_jni, NewDirectByteBuffer__return, ret);
#else /* USDT2 */
  HOTSPOT_JNI_NEWDIRECTBYTEBUFFER_RETURN(ret);
#endif /* USDT2 */
  return ret;
}

Imagine this situation: we write a native method that allocates a block of memory and wraps it in a DirectByteBuffer through this JNI function. From the Java level, the DirectByteBuffer is a perfectly valid object holding a lot of native memory, yet the memory behind it completely bypasses the MaxDirectMemorySize check. This can also produce the phenomenon we saw: MaxDirectMemorySize is clearly set, but the off-heap memory associated with DirectByteBuffer objects turns out to be larger than the limit.
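One way to cross-check your statistics is the platform's "direct" BufferPoolMXBean, which is backed by the same Bits counters: it reflects only buffers that went through Bits.reserveMemory, so slices, duplicates, and buffers wrapped via JNI's NewDirectByteBuffer are invisible to it. A sketch (exact values depend on the JVM):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

public class DirectPoolStats {
    public static void main(String[] args) {
        // This allocation goes through Bits.reserveMemory and is counted.
        ByteBuffer b = ByteBuffer.allocateDirect(1 << 20);

        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(pool.getName())) {
                // Reflects only memory reserved via Bits; memory wrapped by
                // JNI's NewDirectByteBuffer never shows up here.
                System.out.println("direct pool used = "
                        + pool.getMemoryUsed() + " bytes");
            }
        }
        System.out.println("still holding " + b.capacity() + " bytes");
    }
}
```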

Welcome to the PerfMa community and read:

Root and root -- Thinking about computer avalanche caused by an OOM experiment

Heavyweight: Interpretation of the latest JVM ecological report in 2020


Posted on Wed, 04 Mar 2020 23:26:31 -0800 by gergy008