On 19.05.21 11:36, Carlos E. R. wrote:
On 19/05/2021 08.55, Josef Moellers wrote:
On 18.05.21 19:54, Carlos E. R. wrote:
On 18/05/2021 17.55, L A Walsh wrote:
Um... less of a negative effect? Well, since magnetic media isn't known to be very degraded by multiple r/w ops, it would probably see less wear. But if you use 'dd' to copy a boot-disk image which you want to use on a flash drive, 'dd' could be less harmful than formatting the flash drive and then copying all the files from some mounted source to the target media.
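For illustration, assuming the whole-device case described here, the dd approach might look roughly like this (the image name and /dev/sdX are only placeholders, not taken from the thread):

  # write the boot-disk image straight to the (unmounted) flash drive
  dd if=boot-disk.img of=/dev/sdX bs=4M conv=fsync status=progress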
Provided the transfer size is bigger than the actual chunk size of the flash media. I never remember what it is. Suppose it is 16K. If you use the default block size of dd, which is 512 bytes, it would write to the same chunk 16K/512 = 32 times.
Why should it? There is *one* write() request of 16K which will be turned into one WRITE command of 16K/512 blocks. So each block is written exactly once.
If you do:
dd if=/dev/zero of=/dev/sdXY count=128
it seems to write in 512-byte chunks and is quite a bit slower than:
dd if=/dev/zero of=/dev/sdXY bs=16K count=4
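For instance, assuming a throw-away target file under /tmp instead of the real device (paths here are only placeholders), the two variants can be timed and traced roughly like this; since a file target mostly hits the page cache, the difference largely reflects the per-call overhead:

  # wall-clock comparison of the two block sizes
  time dd if=/dev/zero of=/tmp/dd-test.img count=128
  time dd if=/dev/zero of=/tmp/dd-test.img bs=16K count=4

  # per-syscall counts and timings for each run
  strace -c dd if=/dev/zero of=/tmp/dd-test.img count=128
  strace -T dd if=/dev/zero of=/tmp/dd-test.img bs=16K count=4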
It reads/writes in 512-byte chunks because that's the default:

  bs=BYTES
        read and write up to BYTES bytes at a time (default: 512);
        overrides ibs and obs

The performance gain may be due to various factors. The most important to me is that a bs of 512 takes 32 (16K/512) system calls and 32 walks through the I/O stack, where a bs of 16K takes only one. strace has a "-T" option with which you can see where the time is actually spent.

What is also to consider:
1) My very first dd took quite long; after that all went quite quickly.
2) Most of the write()s really go to the buffer cache and not directly to the device. Only when the device is close()d will the system wait for the data to actually be written.

YMMV, obviously

Josef
--
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5
90409 Nürnberg
Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer