Understanding Linux Disk Sequential and Random Writes

1. Preface

* Random writes force the disk head to seek constantly, which greatly reduces efficiency; with sequential writes the head rarely needs to seek, and when it does, the seek takes very little time.
This article examines the concrete differences between the two and the corresponding system calls.


2. Environment Preparation

Component   Version
OS          Ubuntu 16.04.4 LTS
fio         2.2.10


3. Introduction to fio

We use fio for testing, which can reflect disk behavior under both reads and writes. In fio's output report, we need to focus on several key indicators:
slat: the time from I/O submission to actual I/O execution (submission latency)
clat: the time from I/O submission to I/O completion (completion latency)
lat: the total time from fio creating the I/O to I/O completion
bw: throughput
iops: the number of I/O operations per second
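For a fixed block size these indicators are related: bw ≈ iops × block size, and for asynchronous I/O lat ≈ slat + clat, as the tests below will show.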

4. Synchronous Writing Test

(1) Synchronous random writing

fio is the main test tool; to see the system calls being made, we also use strace. The command looks like this:

First test a random write

strace -f -tt -o /tmp/randwrite.log -D fio -name=randwrite -rw=randwrite \
-direct=1 -bs=4k -size=1G -numjobs=1  -group_reporting -filename=/tmp/test.db
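
A note on the strace flags used here: -f follows child processes, -tt prefixes every call with a microsecond timestamp, -o writes the trace to the given log file, and -D runs strace as a detached grandchild so it does not sit between fio and its parent process.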

Extract key information

root@wilson-ubuntu:~# strace -f -tt -o /tmp/randwrite.log -D fio -name=randwrite -rw=randwrite \
> -direct=1 -bs=4k -size=1G -numjobs=1  -group_reporting -filename=/tmp/test.db
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.2.10
Starting 1 process
...
randwrite: (groupid=0, jobs=1): err= 0: pid=26882: Wed Aug 14 10:39:02 2019
  write: io=1024.0MB, bw=52526KB/s, iops=13131, runt= 19963msec
    clat (usec): min=42, max=18620, avg=56.15, stdev=164.79
     lat (usec): min=42, max=18620, avg=56.39, stdev=164.79
...
    bw (KB  /s): min=50648, max=55208, per=99.96%, avg=52506.03, stdev=1055.83
...

Run status group 0 (all jobs):
  WRITE: io=1024.0MB, aggrb=52525KB/s, minb=52525KB/s, maxb=52525KB/s, mint=19963msec, maxt=19963msec

Disk stats (read/write):
...
  sda: ios=0/262177, merge=0/25, ticks=0/7500, in_queue=7476, util=36.05%

List the information we need to focus on:
(1) clat: the average latency is about 56 μs
(2) lat: the average latency is about 56 μs
(3) bw: throughput is about 52 MB/s
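
A quick consistency check: with size=1G and bs=4k, fio issues 1 GiB / 4 KiB = 262,144 writes, matching the ~262K write I/Os in the disk stats above, and 52,526 KB/s ÷ 4 KB ≈ 13,131, matching the reported iops.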

Now look at the system-call trace:

root@wilson-ubuntu:~# more /tmp/randwrite.log
...
26882 10:38:41.919904 lseek(3, 665198592, SEEK_SET) = 665198592
26882 10:38:41.919920 write(3, "\220\240@\6\371\341\277>\0\200\36\31\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.919969 lseek(3, 4313088, SEEK_SET) = 4313088
26882 10:38:41.919985 write(3, "\220\240@\6\371\341\277>\0\200\36\31\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920032 lseek(3, 455880704, SEEK_SET) = 455880704
26882 10:38:41.920048 write(3, "\220\240@\6\371\341\277>\0\200\36\31\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920096 lseek(3, 338862080, SEEK_SET) = 338862080
26882 10:38:41.920112 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920161 lseek(3, 739086336, SEEK_SET) = 739086336
26882 10:38:41.920177 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920229 lseek(3, 848175104, SEEK_SET) = 848175104
26882 10:38:41.920245 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920296 lseek(3, 1060147200, SEEK_SET) = 1060147200
26882 10:38:41.920312 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920362 lseek(3, 863690752, SEEK_SET) = 863690752
26882 10:38:41.920377 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920428 lseek(3, 279457792, SEEK_SET) = 279457792
26882 10:38:41.920444 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920492 lseek(3, 271794176, SEEK_SET) = 271794176
26882 10:38:41.920508 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920558 lseek(3, 1067864064, SEEK_SET) = 1067864064
26882 10:38:41.920573 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
...

Because the writes land at random offsets, each write is preceded by an lseek to reposition the file offset.
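
A quick way to confirm this pattern is to count the calls in the trace (a sketch assuming the log path used above); for a 1G file written in 4k blocks, both counts should be roughly 262,144:

grep -c 'lseek(' /tmp/randwrite.log
grep -c 'write(' /tmp/randwrite.log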

(2) Synchronous sequential writing

Use the same method to test a sequential write:

root@wilson-ubuntu:~# strace -f -tt -o /tmp/write.log -D fio -name=write -rw=write \
-direct=1 -bs=4k -size=1G -numjobs=1  -group_reporting -filename=/tmp/test.db
write: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.2.10
Starting 1 process
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/70432KB/0KB /s] [0/17.7K/0 iops] [eta 00m:00s]
write: (groupid=0, jobs=1): err= 0: pid=27005: Wed Aug 14 10:53:02 2019
  write: io=1024.0MB, bw=70238KB/s, iops=17559, runt= 14929msec
    clat (usec): min=43, max=7464, avg=55.95, stdev=56.24
     lat (usec): min=43, max=7465, avg=56.15, stdev=56.25
...
    bw (KB  /s): min=67304, max=72008, per=99.98%, avg=70225.38, stdev=1266.88
...

Run status group 0 (all jobs):
  WRITE: io=1024.0MB, aggrb=70237KB/s, minb=70237KB/s, maxb=70237KB/s, mint=14929msec, maxt=14929msec

Disk stats (read/write):
...
  sda: ios=0/262162, merge=0/10, ticks=0/6948, in_queue=6932, util=46.49%

You can see:
Throughput increased to about 70 MB/s, with iops rising from ~13.1K to ~17.5K

Take another look at the system calls:

root@wilson-ubuntu:~# more /tmp/write.log
...
27046 10:54:28.194508 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\360\t\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194568 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194627 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194687 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194747 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194807 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194868 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194928 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194988 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195049 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195110 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195197 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195262 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195330 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195426 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195497 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195567 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195637 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195704 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195757 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195807 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195859 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195910 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195961 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196012 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196062 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196112 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0 \26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196162 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0 \26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196213 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0 \26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196265 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0 \26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196314 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0 \26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196363 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0 \26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196414 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0 \26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196472 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0 \26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196524 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0 \26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196573 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0 \26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
...

Since sequential writing never needs to reposition the file offset, there are no lseek calls in the trace; the process simply issues one write after another.
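
Counting the calls in the sequential trace (same sketch as before, assuming the log path used above) makes the contrast clear: the write count should again be roughly 262,144, while lseek should barely appear at all:

grep -c 'lseek(' /tmp/write.log
grep -c 'write(' /tmp/write.log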

5. The slat Indicator

In the tests above, slat did not appear in fio's report, because those were all synchronous operations. For synchronous I/O, submission and completion are a single action, so slat is effectively the completion latency and fio does not report it separately.

Asynchronous sequential write: add -ioengine=libaio to the synchronous sequential-write command:

root@wilson-ubuntu:~# fio -name=write -rw=write -ioengine=libaio -direct=1 -bs=4k -size=1G -numjobs=1  -group_reporting -filename=/tmp/test.db
write: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
fio-2.2.10
Starting 1 process
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/119.3MB/0KB /s] [0/30.6K/0 iops] [eta 00m:00s]
write: (groupid=0, jobs=1): err= 0: pid=27258: Wed Aug 14 11:14:36 2019
  write: io=1024.0MB, bw=120443KB/s, iops=30110, runt=  8706msec
    slat (usec): min=3, max=70, avg= 4.31, stdev= 1.56
    clat (usec): min=0, max=8967, avg=28.13, stdev=55.68
     lat (usec): min=22, max=8976, avg=32.53, stdev=55.72
...
    bw (KB  /s): min=118480, max=122880, per=100.00%, avg=120467.29, stdev=1525.68
...

Run status group 0 (all jobs):
  WRITE: io=1024.0MB, aggrb=120442KB/s, minb=120442KB/s, maxb=120442KB/s, mint=8706msec, maxt=8706msec

Disk stats (read/write):
...
  sda: ios=0/262147, merge=0/1, ticks=0/6576, in_queue=6568, util=74.32%

As you can see, the slat metric now appears, and lat is approximately equal to slat + clat (comparing averages). After switching to asynchronous I/O, throughput improved greatly, to about 120 MB/s.
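
Checking the averages: slat + clat = 4.31 + 28.13 = 32.44 μs, which lines up with the reported average lat of 32.53 μs.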

6. Summary

fio should be used as a disk baseline tool. Whenever you get a new machine (physical or cloud), the first thing to do is run a baseline test on its disks, so you know what to expect from them.
All the tests in this article bypass the cache (direct=1); in practice you need to take the effect of caching into account.
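
For reference, a minimal baseline run could repeat the four basic patterns with the parameters used in this article (the job names and test file path are only examples):

fio -name=seqwrite  -rw=write     -direct=1 -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/tmp/test.db
fio -name=randwrite -rw=randwrite -direct=1 -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/tmp/test.db
fio -name=seqread   -rw=read      -direct=1 -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/tmp/test.db
fio -name=randread  -rw=randread  -direct=1 -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/tmp/test.db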


This concludes the article.
If there are any errors or omissions, corrections and advice are most welcome.
