On May 26, 2015 2:41:43 PM EDT, David T-G wrote:
Carlos & jdd, et al --
...and then Carlos E. R. said...
%
% On 2015-05-26 15:42, jdd wrote:
% > On 26/05/2015 15:28, Carlos E. R. wrote:
% >
% > I mean some block size option; some time ago (I don't know about now), the
% > default was 512 bytes, and copying 1 TB would take several weeks :-).
%
% Ah, of course. Use a block size of anything from 1 to 100 MB. "bs=100M
% oflag=nocache" or something of the sort.
Agreed. I routinely make complete copies of my 250G 7200rpm SATA drives via a dumb USB2 controller, and with
dd if=/dev/sdX bs=64M of=local.big.sdX.file
it takes me maybe three hours to write it to a RAIDed scratch vol. I haven't tried significantly different block sizes, and this is on a RAM-skinny machine (more than enough for Linux, of course) so I couldn't go to something like 512M anyway. Then I swap in the restore test drive and reverse the process and write it out in about five hours, after which I go and boot from it to make sure that it actually works :-)
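Booting from the restored drive is the real test, but a quicker sanity check is to compare checksums of the source and the image before swapping drives. A minimal sketch, using small scratch files in place of /dev/sdX (the filenames and sizes here are made up for illustration):

```shell
# Stand-ins for the real device and image; on a real run SRC would be /dev/sdX.
SRC=./fake-device.bin
IMG=./local.big.sdX.file

# Make a small pseudo-device to copy from (4 MiB of random data).
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null

# The copy itself, same shape as the dd invocation above.
dd if="$SRC" of="$IMG" bs=64M 2>/dev/null

# Compare checksums instead of (or before) a test boot.
a=$(sha256sum "$SRC" | cut -d' ' -f1)
b=$(sha256sum "$IMG" | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "image matches source"
```

On a real disk this adds two full reads, but it catches a bad copy without the swap-and-boot cycle.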
Actually, before I do any of this I mount the filesystem and then write zeroes to the free space:
N=$(( (FREESPACE + 31) / 32 ))   # ceiling of $FREESPACE (in GB) / 32G
for F in $(seq 1 $N)
do
  gzip -dc prepared.dev-zero.32G-bigfile.gz >/vol/tmp/BIGFILE.$F
done
rm /vol/tmp/BIGFILE.*
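For anyone wondering how such a pre-compressed zero file might be made in the first place, one plausible recipe (scaled down to 32 MiB here so it runs quickly; the real one would presumably use count=32768 for 32G) is:

```shell
# Hypothetical recipe for the pre-compressed zero file; zeros compress
# extremely well, so even a 32G input yields a tiny .gz.
dd if=/dev/zero bs=1M count=32 2>/dev/null \
  | gzip -1 > prepared.dev-zero.demo.gz

# Decompressing reproduces the zeros at full size (32 MiB = 33554432 bytes here):
gzip -dc prepared.dev-zero.demo.gz | wc -c
```

The point of preparing it once is that decompressing highly redundant data is cheap, so each BIGFILE is produced without re-reading /dev/zero through dd every time.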
Nifty trick, but I suspect this is a better way to actually create the big zero-filled files:

dd if=/dev/zero of=/vol/tmp/BIGFILE.$F bs=10MB count=3200

Why are you preparing a dev-zero file in advance?

Greg
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
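One detail worth knowing about the `bs=10MB` spelling: GNU dd treats the suffix `M` as 1024*1024 bytes but `MB` as 1000*1000 bytes, so `bs=10MB count=3200` writes 32 GB (decimal), slightly less than 32 GiB. A quick way to see the difference (the `demo.*` filenames are just placeholders):

```shell
# GNU dd size suffixes: "M" means 1024*1024 bytes, "MB" means 1000*1000 bytes.
dd if=/dev/zero of=demo.M  bs=1M  count=1 2>/dev/null
dd if=/dev/zero of=demo.MB bs=1MB count=1 2>/dev/null
wc -c demo.M demo.MB
```

The first file comes out 1048576 bytes, the second 1000000, which is why the two approaches in this thread don't produce byte-identical totals.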