On 2018-05-23 02:35, L A Walsh wrote:
Carlos E. R. wrote:
On 2018-05-21 22:27, Linda Walsh wrote:
Carlos E. R. wrote:
Try playing with options such as "oflag=direct",
I'll second this part, but seriously, 4k at a time? Do you have to write such small amounts?
dd if=/dev/zero of=foo bs=4k count=1K oflag=direct
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0689802 s, 60.8 MB/s   << 4k blocksize
dd if=/dev/zero of=foo bs=4M count=1K oflag=direct
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 5.24457 s, 819 MB/s   << 4M blocksize
dd if=/dev/zero of=foo bs=8M count=512 oflag=direct
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 5.04259 s, 852 MB/s   << 8M blocksize
dd if=/dev/zero of=foo bs=16M count=256 oflag=direct
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 4.90653 s, 875 MB/s   << 16M blocksize
16M is the sweet spot on my system. Yours may vary.
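A sweep like the one above is easy to script. Here is a minimal sketch under a few assumptions: the target path /tmp/dd-sweep.bin and the 64 MiB total are arbitrary choices, and the oflag=direct run is guarded because O_DIRECT needs filesystem support (tmpfs, for instance, refuses it):

```shell
#!/bin/sh
# Sketch: measure dd throughput at several block sizes while keeping
# the total amount written constant (bs * count = 64 MiB per round).
TARGET=/tmp/dd-sweep.bin
for round in "4k 16384" "1M 64" "16M 4"; do
    set -- $round   # $1 = block size, $2 = count
    # dd prints its statistics on stderr; capture everything and keep
    # only the final throughput line.
    if out=$(dd if=/dev/zero of="$TARGET" bs="$1" count="$2" oflag=direct 2>&1); then
        echo "$out" | tail -n1
    else
        echo "bs=$1: O_DIRECT unsupported here, using buffered write + fsync"
        dd if=/dev/zero of="$TARGET" bs="$1" count="$2" conv=fsync 2>&1 | tail -n1
    fi
    rm -f "$TARGET"
done
```

Raising the total well above RAM size gives more honest numbers, since otherwise the buffered fallback mostly measures the page cache.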
Well, with a small block and direct writing to disk, the kernel cache is disabled and speed suffers. Increasing the size of the write block acts like having a cache, but in the application instead of in the kernel.
Not exactly. Increasing the write size reduces *overhead*, just like sending larger packets through a network. If you send one 1.5 kB packet and wait for it to be transmitted and acknowledged by the other end before sending the next, you get very slow performance because of the per-packet overhead. If instead you issue one large write and only need a single acknowledgment for the whole thing, you wait for just one reply. Whether you are writing to disk or to a network, the overhead of handling each packet reduces throughput.
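The per-call overhead shows up even with the device taken out of the picture entirely. A quick sketch (the 64 MiB total is an arbitrary choice) writing the same amount to /dev/null with small and large blocks:

```shell
# Same 64 MiB total in both runs; /dev/null removes the disk entirely,
# so any speed difference is pure per-write-call overhead.
dd if=/dev/zero of=/dev/null bs=1k count=65536 2>&1 | tail -n1
dd if=/dev/zero of=/dev/null bs=16M count=4 2>&1 | tail -n1
```

On a typical Linux box the second run reports a much higher MB/s figure, despite no disk being involved in either case.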
While this is true, you are forgetting the impact of "oflag=direct". Without that flag, you see a much smaller difference between writing 1KB or 1MB chunks. Yes, I know that writing small chunks has an impact on performance.
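That difference is easy to demonstrate side by side. A sketch, with an arbitrary file name, and with the direct run guarded since O_DIRECT needs filesystem support:

```shell
F=/tmp/dd-direct-vs-buffered.bin
# Buffered: the kernel page cache absorbs the small 4k writes.
dd if=/dev/zero of="$F" bs=4k count=4096 2>&1 | tail -n1
# Direct: each 4k write must reach the device, so the same workload
# runs far slower.
if out=$(dd if=/dev/zero of="$F" bs=4k count=4096 oflag=direct 2>&1); then
    echo "$out" | tail -n1
else
    echo "O_DIRECT not supported on this filesystem"
fi
rm -f "$F"
```

Note the buffered number mostly measures memory speed; add conv=fsync if you want it to include flushing to disk.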
It depends on how fast the user's application generates data. It generates video in real time and can't be paused. If it only needs 2.8 MB/s, any of these methods would work; but if it needed 100 times that, then writing 4k blocks makes no sense and wouldn't work even with oflag=nocache. Nocache tells the OS that it *may* throw away the data; it doesn't force the data to be thrown away. While writing a 428 GB file (at which point my disk filled), all of memory was filled long before the disk was, and the write only averaged 145 MB/s overall.
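For the streaming case, oflag=nocache is the relevant hint. A minimal sketch (file name and sizes are arbitrary); keep in mind it is advisory only:

```shell
# Sketch: write a stream while hinting that the kernel may drop the
# cached pages behind us (GNU dd's nocache maps to posix_fadvise
# DONTNEED). Advisory only: on a long-running write, memory can still
# fill with dirty pages, as the 428 GB example above showed.
dd if=/dev/zero of=/tmp/stream.bin bs=4M count=8 oflag=nocache conv=fsync 2>&1 | tail -n1
rm -f /tmp/stream.bin
```

The conv=fsync keeps dirty pages from piling up at the end; it forces a flush before dd reports its final throughput figure.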
Correct. Anyway, the original problem turned out to be an error in the code; it was a different issue and has been solved. -- Cheers / Saludos, Carlos E. R. (from 42.3 x86_64 "Malachite" at Telcontar)