DirectByteBuffer and File IO Details

The java.nio package provides a newer API that Java uses for IO. It re-implements IO operations around models such as channels and selectors.

DirectByteBuffer is one of the classes in the nio package. It holds byte data, and its distinguishing feature is that the data is stored in off-heap memory, unlike ordinary objects, which live in the heap. The advantage is that IO operations need fewer memory copies, which improves efficiency. This is explained in the file IO sections below.

Let's start with the conclusion here:

a. Copies needed by traditional IO (i.e., the java.io API) to read a disk file:

1. The disk file's data is copied into the kernel page cache.

2. The kernel data is copied into application space (i.e., JVM off-heap memory).

3. The JVM off-heap memory is copied into JVM heap memory.

Why can't steps 2 and 3 be merged so that the kernel data is copied straight into JVM heap memory? Because a GC may occur while the JVM is in the read system call, and the GC can move the heap buffer to a different address, so copying directly into the heap would be unsafe.

b. Copies needed to read a disk file using DirectByteBuffer:

1. The disk file's data is copied into the kernel page cache.

2. The kernel data is copied into application space (the DirectByteBuffer).

So DirectByteBuffer reduces the number of memory copies.

1. Traditional file IO analysis

Example file reading:

FileInputStream input = new FileInputStream("/data");
byte[] b = new byte[SIZE];
input.read(b);

The byte array b is a heap object, so the data ends up copied into JVM heap memory. Let's look at the internal implementation of the read function:

public int read(byte b[]) throws IOException {
    return readBytes(b, 0, b.length);
}

private native int readBytes(byte b[], int off, int len) throws IOException;

We see that the read function ultimately calls the native function readBytes.

jint
readBytes(JNIEnv *env, jobject this, jbyteArray bytes,
          jint off, jint len, jfieldID fid)
{
    jint nread;
    char stackBuf[BUF_SIZE];
    char *buf = NULL;
    FD fd;

    if (IS_NULL(bytes)) {
        JNU_ThrowNullPointerException(env, NULL);
        return -1;
    }

    if (outOfBounds(env, off, len, bytes)) {
        JNU_ThrowByName(env, "java/lang/IndexOutOfBoundsException", NULL);
        return -1;
    }

    if (len == 0) {
        return 0;
    } else if (len > BUF_SIZE) {
        buf = malloc(len);
        if (buf == NULL) {
            JNU_ThrowOutOfMemoryError(env, NULL);
            return 0;
        }
    } else {
        buf = stackBuf;
    }

    fd = GET_FD(this, fid);
    if (fd == -1) {
        JNU_ThrowIOException(env, "Stream Closed");
        nread = -1;
    } else {
        nread = IO_Read(fd, buf, len);
        if (nread > 0) {
            (*env)->SetByteArrayRegion(env, bytes, off, nread, (jbyte *)buf);
        } else if (nread == -1) {
            JNU_ThrowIOExceptionWithLastError(env, "Read error");
        } else { /* EOF */
            nread = -1;
        }
    }

    if (buf != stackBuf) {
        free(buf);
    }
    return nread;
}

We see that IO_Read is actually a macro, and the data is ultimately read into buf through handleRead:

#define IO_Read handleRead

The handleRead function is implemented as follows; you can see that it makes a read system call:

ssize_t
handleRead(FD fd, void *buf, jint len)
{
    ssize_t result;
    RESTARTABLE(read(fd, buf, len), result);
    return result;
}

After the read returns, buf is copied into bytes by the JNI function SetByteArrayRegion. Its implementation is shown below (it is a generic macro that generates the setter for arrays of each element type; read it with the Result macro parameter replaced by Byte):

JNI_ENTRY(void, \
jni_Set##Result##ArrayRegion(JNIEnv *env, ElementType##Array array, jsize start, \
             jsize len, const ElementType *buf)) \
  JNIWrapper("Set" XSTR(Result) "ArrayRegion"); \
  DTRACE_PROBE5(hotspot_jni, Set##Result##ArrayRegion__entry, env, array, start, len, buf); \
  DT_VOID_RETURN_MARK(Set##Result##ArrayRegion); \
  typeArrayOop dst = typeArrayOop(JNIHandles::resolve_non_null(array)); \
  if (start < 0 || len < 0 || ((unsigned int)start + (unsigned int)len > (unsigned int)dst->length())) { \
    THROW(vmSymbols::java_lang_ArrayIndexOutOfBoundsException()); \
  } else { \
    if (len > 0) { \
      int sc = TypeArrayKlass::cast(dst->klass())->log2_element_size(); \
      memcpy((u_char*) dst->Tag##_at_addr(start), \
             (u_char*) buf, \
             len << sc); \
    } \
  } \
JNI_END

(Part of the above content comes from: https://www.zhihu.com/question/65415926)

Thus the native method readBytes transfers data by copying from the C heap into the JVM heap.

readBytes does the actual reading by calling handleRead; handleRead reads data that is already in the kernel page cache, and the kernel fills that cache from the disk file.
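To make the difference between the two paths concrete, here is a minimal, hedged sketch (not from the original article) that exercises both read paths; the file path /tmp/data and the 8 KB buffer size are arbitrary assumptions for illustration, and the timing is only a rough indication:

import java.io.FileInputStream;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class CopyPathDemo {
    public static void main(String[] args) throws Exception {
        // Traditional IO: the native readBytes copies C heap -> JVM heap on every read.
        byte[] heapBuf = new byte[8192];
        long t0 = System.nanoTime();
        try (FileInputStream in = new FileInputStream("/tmp/data")) {
            while (in.read(heapBuf) != -1) { /* data copied into the heap array */ }
        }
        long t1 = System.nanoTime();

        // NIO with a direct buffer: the read lands in off-heap memory, no extra heap copy.
        ByteBuffer directBuf = ByteBuffer.allocateDirect(8192);
        try (FileChannel ch = new RandomAccessFile("/tmp/data", "r").getChannel()) {
            while (ch.read(directBuf) != -1) {
                directBuf.clear(); // reuse the buffer for the next read
            }
        }
        long t2 = System.nanoTime();

        System.out.printf("heap read: %d us, direct read: %d us%n",
                (t1 - t0) / 1_000, (t2 - t1) / 1_000);
    }
}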

2. DirectByteBuffer

A DirectByteBuffer is an object whose backing memory lives outside the heap.

DirectByteBuffer is package-private, so it is constructed through ByteBuffer.allocateDirect(int capacity):

public static ByteBuffer allocateDirect(int capacity) {
    return new DirectByteBuffer(capacity);
}

Let's look at the implementation of the DirectByteBuffer constructor

DirectByteBuffer(int cap) {                   // package-private
    super(-1, 0, cap, cap);
    boolean pa = VM.isDirectMemoryPageAligned();
    int ps = Bits.pageSize();
    long size = Math.max(1L, (long)cap + (pa ? ps : 0));
    Bits.reserveMemory(size, cap);

    long base = 0;
    try {
        base = unsafe.allocateMemory(size);
    } catch (OutOfMemoryError x) {
        Bits.unreserveMemory(size, cap);
        throw x;
    }
    unsafe.setMemory(base, size, (byte) 0);
    if (pa && (base % ps != 0)) {
        // Round up to page boundary
        address = base + ps - (base & (ps - 1));
    } else {
        address = base;
    }
    cleaner = Cleaner.create(this, new Deallocator(base, size, cap));
    att = null;
}

Here we focus on three points:

1. unsafe.allocateMemory(size)

The Unsafe class allocates a block of off-heap memory (C_HEAP); allocateMemory is a native method that drops into C/C++ code to do the allocation (a small usage sketch follows the native snippet below):

inline char* AllocateHeap(size_t size, MEMFLAGS flags, address pc = 0,
                          AllocFailType alloc_failmode = AllocFailStrategy::EXIT_OOM) {
    // ... omitted
    char* p = (char*) os::malloc(size, flags, pc);
    // Allocate on the C heap and return a pointer to the memory area
    // ... omitted
    return p;
}
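As a hedged sketch of what the constructor does with Unsafe (assuming JDK 8's internal sun.misc.Unsafe API; illustrative code, not from the article):

import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeAllocDemo {
    public static void main(String[] args) throws Exception {
        // Grab the Unsafe singleton via reflection (it is not meant for application code).
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        long size = 1024;
        long base = unsafe.allocateMemory(size);   // malloc on the C heap, like the constructor
        unsafe.setMemory(base, size, (byte) 0);    // zero the block, as the constructor does
        unsafe.putByte(base, (byte) 42);
        System.out.println(unsafe.getByte(base));  // prints 42
        unsafe.freeMemory(base);                   // what Deallocator.run() does later
    }
}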

2. cleaner = Cleaner.create(this, new Deallocator(base, size, cap))

The Cleaner object is responsible for releasing the off-heap memory that the DirectByteBuffer uses; DirectByteBuffer.cleaner().clean() can be invoked for manual cleanup. Let's look at the clean() function:

public void clean() {
    // ... omitted
    this.thunk.run();
    // ... omitted
}

Here thunk is the Deallocator we passed in via Cleaner.create(this, new Deallocator(base, size, cap)). Take a look at Deallocator:

private static class Deallocator implements Runnable {
    // ... omitted

    public void run() {
        if (address == 0) {
            // Paranoia
            return;
        }
        unsafe.freeMemory(address);
        address = 0;
        Bits.unreserveMemory(size, capacity);
    }
}

You can see that it is a Runnable task that frees the off-heap memory.

Cleaner is a subclass of PhantomReference.

A PhantomReference is essentially used to track when an object is collected. It does not affect GC decisions, but if during GC an object is found to have no references other than a PhantomReference, the reference is put on the java.lang.ref.Reference pending list, and after the GC the ReferenceHandler daemon thread is notified to do some post-processing. In that post-processing it checks whether the reference is a Cleaner and, if so, calls its clean() function.
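A minimal hedged illustration of that mechanism (illustrative only, not from the article):

import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

public class PhantomDemo {
    public static void main(String[] args) throws Exception {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object referent = new Object();
        PhantomReference<Object> ref = new PhantomReference<>(referent, queue);

        referent = null;  // drop the only strong reference
        System.gc();      // request a GC; not guaranteed, hence the timeout below

        // Once GC finds the referent unreachable, the reference is enqueued;
        // Cleaner (a PhantomReference subclass) is noticed the same way by ReferenceHandler.
        Reference<?> enqueued = queue.remove(1000);
        System.out.println("enqueued: " + (enqueued == ref));
    }
}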

So we normally do not need to clean up DirectByteBuffer memory manually: when the JVM performs a GC (an old-generation GC), DirectByteBuffers that are no longer referenced get cleaned up.

When we keep allocating DirectByteBuffers, the real space is taken from off-heap memory, while the heap only holds small reference objects. If nothing ever triggers a GC, that off-heap memory is never reclaimed and the JVM process's memory footprint grows large. We can cap the off-heap memory that DirectByteBuffer may use with -XX:MaxDirectMemorySize.
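For cases where waiting for a GC is not acceptable, the cleaner can be invoked by hand; a hedged sketch assuming JDK 8's internal sun.nio.ch.DirectBuffer interface (illustrative, not from the article):

import java.nio.ByteBuffer;
import sun.nio.ch.DirectBuffer;

public class ManualCleanDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024);
        // ... use buf ...

        // Run the Deallocator now instead of waiting for a GC to process the Cleaner.
        ((DirectBuffer) buf).cleaner().clean();
        // buf must not be used after this point: its off-heap memory has been freed.
    }
}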

3. Bits.reserveMemory(size, cap)

static void reserveMemory(long size, int cap) {
    synchronized (Bits.class) {
        if (!memoryLimitSet && VM.isBooted()) {
            maxMemory = VM.maxDirectMemory();
            memoryLimitSet = true;
        }
        // -XX:MaxDirectMemorySize limits the total capacity rather than the
        // actual memory usage, which will differ when buffers are page
        // aligned.
        if (cap <= maxMemory - totalCapacity) {
            reservedMemory += size;
            totalCapacity += cap;
            count++;
            return;
        }
    }

    System.gc();
    try {
        Thread.sleep(100);
    } catch (InterruptedException x) {
        // Restore interrupt status
        Thread.currentThread().interrupt();
    }
    synchronized (Bits.class) {
        if (totalCapacity + cap > maxMemory)
            throw new OutOfMemoryError("Direct buffer memory");
        reservedMemory += size;
        totalCapacity += cap;
        count++;
    }
}

This function accounts for the space occupied by DirectByteBuffers. VM.maxDirectMemory() is the maximum amount of direct-buffer memory the JVM allows to be requested (this is what -XX:MaxDirectMemorySize sets).

If the requested space would exceed the limit, a GC is triggered once so that, as mentioned above, DirectByteBuffers that are no longer referenced get reclaimed, and then the reservation is attempted again.
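A hedged sketch of that behavior (run with, say, -XX:MaxDirectMemorySize=64m; the buffer size and limit are arbitrary assumptions for illustration):

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectLimitDemo {
    public static void main(String[] args) {
        List<ByteBuffer> keep = new ArrayList<>();  // keep buffers reachable so GC cannot reclaim them
        try {
            while (true) {
                keep.add(ByteBuffer.allocateDirect(16 * 1024 * 1024)); // 16 MB each
            }
        } catch (OutOfMemoryError e) {
            // Bits.reserveMemory triggered System.gc(), found nothing to free, then threw this.
            System.out.println("Allocated " + keep.size() + " buffers before: " + e);
        }
    }
}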

As for how VM.maxDirectMemory() gets its value, the VM class contains the following code:

public static void saveAndRemoveProperties(Properties var0) {
    // ...
    String var1 = (String) var0.remove("sun.nio.MaxDirectMemorySize");
    if (var1 != null) {
        if (var1.equals("-1")) {
            directMemory = Runtime.getRuntime().maxMemory();
        } else {
            long var2 = Long.parseLong(var1);
            if (var2 > -1L) {
                directMemory = var2;
            }
        }
    }
    // ...
}

The property "sun.nio.MaxDirectMemorySize" is set with the parameter -XX:MaxDirectMemorySize.If we don't specify this jvm parameter, the author tested it in jdk8, defaulting to -1, which limits directBufffer memory to the maximum process memory.Of course, this is also a potential risk.

A risk case:

I once ran an application in production that consumed data from a message queue and stored it in HBase. After running for about two weeks, swap usage on the machine became excessive. Analysis showed the JVM process was using too much memory, even though the JVM-related parameters (heap size, thread stack size) were not set very high. In the end the direct buffers turned out to occupy up to 10 GB. The problem was solved by limiting direct-buffer usage with -XX:MaxDirectMemorySize=2048m: whenever direct buffers reach 2 GB, a full GC is triggered to reclaim the previously unused ones. HBase was the pitfall here; I will write up this case when I have time.

3. DirectByteBuffer File IO

Example file reading:

FileChannel filechannel = new RandomAccessFile("/data/appdatas/cat/mmm", "rw").getChannel();
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(SIZE);
filechannel.read(byteBuffer);

Let's look at the read function

public int read(ByteBuffer var1) throws IOException {
    // ...
    var3 = IOUtil.read(this.fd, var1, -1L, this.nd);
    // ...
}

The primary logic calls IOUtil.read. Let's look at this function:

static int read(FileDescriptor var0, ByteBuffer var1, long var2, NativeDispatcher var4) throws IOException {
    if (var1.isReadOnly()) {
        throw new IllegalArgumentException("Read-only buffer");
    } else if (var1 instanceof DirectBuffer) {
        return readIntoNativeBuffer(var0, var1, var2, var4);
    } else {
        ByteBuffer var5 = Util.getTemporaryDirectBuffer(var1.remaining());
        int var7;
        try {
            int var6 = readIntoNativeBuffer(var0, var5, var2, var4);
            var5.flip();
            if (var6 > 0) {
                var1.put(var5);
            }
            var7 = var6;
        } finally {
            Util.offerFirstTemporaryDirectBuffer(var5);
        }
        return var7;
    }
}

The main work is done by readIntoNativeBuffer, which reads the data into a direct buffer; readIntoNativeBuffer itself calls a native method.

From the code above we can see that filechannel.read(ByteBuffer) may also be passed a HeapByteBuffer, a class whose storage is in the heap. In that case the read internally copies the data into a DirectByteBuffer first and then into the HeapByteBuffer. Util.getTemporaryDirectBuffer(var1.remaining()) obtains such a DirectBuffer; because direct buffers are expensive to create, they are usually managed with a pool. The implementation inside the Util class is worth a look.
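A hedged sketch of the two call paths (the file path and buffer size are arbitrary assumptions, not from the article):

import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class ChannelReadDemo {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("/tmp/data", "r");
             FileChannel ch = raf.getChannel()) {
            // Heap buffer: IOUtil.read copies through a pooled temporary DirectByteBuffer first.
            ByteBuffer heap = ByteBuffer.allocate(4096);
            ch.read(heap, 0);

            // Direct buffer: readIntoNativeBuffer reads straight into the off-heap memory.
            ByteBuffer direct = ByteBuffer.allocateDirect(4096);
            ch.read(direct, 0);
        }
    }
}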
