11.04.2016 20:25, Greg Freemyer writes:
All,
Is there a way with rsync to control the blocksize of data read/written at a time?
For local directory copies, is there a higher-performing tool?
FYI: I know dd can do this faster, but in this case I'm copying 261 files. rsync is in theory the perfect tool, but it is only running at 50% of the theoretical max speed. I did a test in openSUSE a couple of days ago and hit 100% with dd by using a blocksize of 100 MB.
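The dd equivalent of what I tested looks roughly like this (the paths are placeholders):

dd if=/path/to/source.img of=/path/to/dest.img bs=100M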
I'd like to figure out how to optimize rsync, or find a similar recursive folder copy tool that can achieve close to 100% of theoretical max speed.
== details ==
I'm using rsync more and more to move large files from one USB 3 drive to another. (All the files in my work today are 1.5 GB.)
Often in openSUSE, but today I'm using Cygwin on Windows 8.1.
I think this is a more generic rsync question. (I can test rsync in openSUSE later today.)
My individual source and destination drives can hit 140 MB/sec (via USB 3).
But I'm only getting a throughput of 70 MB/sec.
Try "rsync -W" to disable delta computation; otherwise rsync may read each file twice (first to compute checksum, second to actually copy it).
I'm thinking something similar to this is happening:

while (files) {
    read 1.5 GB file to ram
    write 1.5 GB file from ram
    fsync()  /* ensure 1.5 GB file is on disk */
}
I have no complaint about the fsync and I'm actually happy it is there. It would be great if I could control the read/write blocksize.
I suspect 100 MB or less gives the best performance (1 MB is probably too small, but the current whole-file size is too big).
My ideal would be something like:

while (files) {
    while (data_in_file) {
        read user_defined_blocksize to ram from file
        write user_defined_blocksize from ram to file
    }
    fsync()  /* ensure 1.5 GB file is on disk */
}
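If I were coding it myself, a minimal C sketch of that loop might look like the following (the 100 MB blocksize and the command-line interface are just illustrative, and error handling is kept minimal):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Copy src to dst in chunks of blocksize bytes, then fsync once
   at the end -- the loop sketched above. Error handling is minimal. */
static int copy_blockwise(const char *src, const char *dst, size_t blocksize)
{
    int rc = -1;
    char *buf = malloc(blocksize);
    int in = open(src, O_RDONLY);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (!buf || in < 0 || out < 0)
        goto done;

    for (;;) {
        ssize_t n = read(in, buf, blocksize);
        if (n < 0)
            goto done;          /* read error */
        if (n == 0)
            break;              /* end of file */
        for (ssize_t off = 0; off < n; ) {
            ssize_t w = write(out, buf + off, (size_t)(n - off));
            if (w < 0)
                goto done;      /* write error */
            off += w;
        }
    }

    if (fsync(out) == 0)        /* ensure the file is on disk */
        rc = 0;

done:
    free(buf);
    if (in >= 0) close(in);
    if (out >= 0) close(out);
    return rc;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s SRC DST\n", argv[0]);
        return 1;
    }
    /* 100 MB blocks, matching the size that hit 100% in the dd test */
    return copy_blockwise(argv[1], argv[2], 100u * 1024 * 1024) != 0;
}

That keeps the single fsync() I like while making the read/write size tunable.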
Greg

--
Greg Freemyer
www.IntelligentAvatar.net