[opensuse] why is rsync speed (kB/s) 2x that of dolphin copy?
Hi,

I am just wondering if I have some weird settings or if that's normal behaviour: when I copy a file on my server, using drag and drop between two Dolphin windows, I get a transfer speed of approx. 35-40 kB/s. When I use rsync to copy the same files I get a speed of approx. 85-90 kB/s.

Where does this difference come from?

Daniel

--
Daniel Bauer photographer Basel Barcelona
professional photography: http://www.daniel-bauer.com
google+: https://plus.google.com/109534388657020287386
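(For reference, an rsync-over-ssh invocation of the kind described above typically looks like the following sketch; the user, host and paths are placeholders, not the actual setup:)

  # copy one large file to the server over ssh and print the transfer rate
  rsync -av --progress -e ssh /home/daniel/photos/bigfile.jpg daniel@server.example.com:/srv/photos/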
On Wednesday 23 of July 2014 10:22:06 Daniel Bauer wrote:
Hi,
Hi,
When I copy a file on my server, using drag and drop with two dolphin windows I get a transfer speed of approx. 35-40 kB/s.
When I use rsync to copy the same files I get a speed of approx. 85-90 kB/s.
Where does this difference come from?
Try running top while dolphin transfers data and tell us the top 4 processes with their percentages.
--
Regards,
Peter
On Wednesday 23 July 2014 14:16:40, auxsvr@gmail.com wrote:
On Wednesday 23 of July 2014 10:22:06 Daniel Bauer wrote:
Hi,
Hi,
When I copy a file on my server, using drag and drop with two dolphin windows I get a transfer speed of approx. 35-40 kB/s.
When I use rsync to copy the same files I get a speed of approx. 85-90 kB/s.
Where does this difference come from?
Maybe rsync uses compression of the data?

--
fr.gr.
Freek de Kruijf
Freek de Kruijf wrote:
On Wednesday 23 July 2014 14:16:40, auxsvr@gmail.com wrote:
On Wednesday 23 of July 2014 10:22:06 Daniel Bauer wrote:
Hi,
Hi,
When I copy a file on my server, using drag and drop with two dolphin windows I get a transfer speed of approx. 35-40 kB/s.
When I use rsync to copy the same files I get a speed of approx. 85-90 kB/s.
Where does this difference come from?
Maybe rsync uses compression of the data?
On a local copy or local network, that usually slows down transfers.

One might ask why rsync is so slow -- copying 800G from one partition to another via xfsdump/restore takes a bit under 2 hours, or about 170MB/s, but rsync, on the same partition, transferring less than 1/1000th as much (700MB), took ~70-80 minutes... or about 163kB/s. That's on the same system (local drive -> another local drive).

Transfer speeds depend on many factors. One of the largest is transfer size (how much is transferred with one write/read). Transferring 1GB, 1 meg at a time, took 2.08s to read and 1.56s to write (using direct io). Transferring it at 4K: 37.28s to read and 43.02s to write. So a factor of 20-40x can be accounted for just by R/W size (1k buffers were 4x slower again). Many desktop apps still think 4k is a good "read size". Over a network, that causes drops from 500MB/s down to less than 200KB/s (as seen in FF and TB). Optimal I/O size on my system is between 16M and 256M.

So -- to answer your question, MANY things can affect speed, but I'd look at the R/W size first.
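(A rough sketch of how such a read/write-size comparison can be reproduced with dd; the file and sizes are only examples, direct I/O needs block-aligned sizes, and the absolute numbers will differ per system:)

  # create a 1 GiB test file once
  dd if=/dev/zero of=/var/tmp/testfile bs=1M count=1024

  # read it back with a large vs. a small request size, bypassing the page cache;
  # dd prints the elapsed time and throughput when it finishes
  dd if=/var/tmp/testfile of=/dev/null bs=1M iflag=direct
  dd if=/var/tmp/testfile of=/dev/null bs=4k iflag=direct

  # the same comparison for writes
  dd if=/dev/zero of=/var/tmp/out bs=1M count=1024 oflag=direct
  dd if=/dev/zero of=/var/tmp/out bs=4k count=262144 oflag=direct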
On 24.07.2014 06:47, Linda Walsh wrote:
Freek de Kruijf wrote:
On Wednesday 23 July 2014 14:16:40, auxsvr@gmail.com wrote:
On Wednesday 23 of July 2014 10:22:06 Daniel Bauer wrote:
When I copy a file on my server, using drag and drop with two dolphin windows I get a transfer speed of approx. 35-40 kB/s.
When I use rsync to copy the same files I get a speed of approx. 85-90 kB/s.
Where does this difference come from?

Maybe rsync uses compression of the data?
On a local copy or local network, that usually slows down transfers.
One might ask why rsync is so slow -- copying 800G from one partition to another via xfsdump/restore takes a bit under 2 hours, or about 170MB/s, but rsync, on the same partition, transferring less than 1/1000th as much (700MB), took ~70-80 minutes... or about 163kB/s.
That's on the same system (local drive -> another local drive)
Transfer speeds depend on many factors. One of the largest is transfer size (how much is transferred with one write/read). Transferring 1GB, 1 meg at a time, took 2.08s to read and 1.56s to write (using direct io).
Transfer it at 4K: 37.28s, to read, and 43.02s to write.
So 20-40x can be accounted for just on R/W size (1k buffers were 4x slower).
Many desktop apps still think 4k is a good "read size"
Over a network, that causes drops from 500MB/s down to less than 200KB/s (as seen in FF and TB).
Optimal i/o size on my sys is between 16M-256M.
So -- to answer your question, MANY things can affect speed, but I'd look at the R/W size first.
Hi,

thanks for the answers.

The mentioned transfers are over the internet with my slow but expensive Telefonica upload... So that it's slow in general is "normal", but the copy with Dolphin is even slower...

In the test case I transferred a very large file (1 GB), with Dolphin via fish: and with rsync via ssh.

I don't know where to look at the R/W size of Dolphin, and even less how to set an optimal size (no idea about the pros and cons of changing settings)...

I only use Dolphin to copy single or just a few files; for mass transfers I always use rsync, which seems more reliable to me. But especially with large files and the slow connection I have, it makes quite a difference when Dolphin only uses half of the available bandwidth...

Someone asked for the first top lines (sorry, I accidentally deleted the post...), so here they are:
  PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
 6135 daniel 20  0 1404m 237m  72m S 11.6  1.5  18:58.36 plasma-desktop
 6124 daniel 20  0  929m 169m  75m S  9.0  1.1 303:12.27 kwin
 5742 root   20  0  222m 136m  57m S  4.3  0.8 945:07.48 Xorg
21609 daniel 20  0  351m  27m 9448 S  1.3  0.2   0:02.15 kio_fish
21595 daniel 20  0  618m  58m  29m S  0.7  0.4   0:03.19 dolphin
21610 daniel 20  0 33672 2748 2140 S  0.7  0.0   0:00.48 ssh
   11 root   20  0     0    0    0 S  0.3  0.0   5:13.36 rcu_preempt
 5997 daniel 20  0 23844 2364  836 S  0.3  0.0   0:08.38 dbus-daemon
21425 root   20  0     0    0    0 S  0.3  0.0   0:00.28 kworker/1:1
21701 root   20  0  9924 1560 1016 R  0.3  0.0   0:00.05 top
    1 root   20  0 45976 4844 2172 S  0.0  0.0   0:02.03 systemd
regards
Daniel
On Thursday 24 of July 2014 09:29:13 Daniel Bauer wrote:
On Wednesday 23 of July 2014 10:22:06 Daniel Bauer wrote:
When I copy a file on my server, using drag and drop with two dolphin windows I get a transfer speed of approx. 35-40 kB/s.
When I use rsync to copy the same files I get a speed of approx. 85-90 kB/s.
Where does this difference come from?
This is a known bug: https://bugs.kde.org/show_bug.cgi?id=291835. kio_smb and kio_fish used to be very inefficient; they have gotten better recently. It would probably be faster if you mount the share with sshfs.

--
Regards,
Peter
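(A minimal sketch of the suggested sshfs approach; the user, host and paths are placeholders:)

  mkdir -p ~/server
  sshfs daniel@server.example.com:/srv/photos ~/server -o reconnect
  # Dolphin can now copy to ~/server like a local directory, over one
  # persistent ssh connection instead of the fish KIO slave
  fusermount -u ~/server   # unmount when done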
On 07/24/2014 02:50 AM, auxsvr@gmail.com wrote:
On Thursday 24 of July 2014 09:29:13 Daniel Bauer wrote:
On Wednesday 23 of July 2014 10:22:06 Daniel Bauer wrote:
When I copy a file on my server, using drag and drop with two dolphin windows I get a transfer speed of approx. 35-40 kB/s.
When I use rsync to copy the same files I get a speed of approx. 85-90 kB/s.
Where does this difference come from?
This is a known bug, https://bugs.kde.org/show_bug.cgi?id=291835. kio_smb and kio_fish used to be very inefficient, they have gotten better recently. Probably it would be faster if you mount the share with sshfs.
-> 21609 daniel 20  0  351m  27m 9448 S  1.3  0.2   0:02.15 kio_fish

Also, as indicated in my private post, the sftp KIO gave equivalent transfer speed (at least the k3/sftp KIO). It would be worth testing sftp in Dolphin to see if you get performance on par with rsync.

--
David C. Rankin, J.D., P.E.
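(One way to make that comparison, with user, host and path as placeholders: open the same share through both KIO slaves in Dolphin's location bar, copy the same large file from each, and compare the reported rates against the rsync figure:)

  fish://daniel@server.example.com/srv/photos
  sftp://daniel@server.example.com/srv/photos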
On 24.07.2014 11:19, David C. Rankin wrote:
On 07/24/2014 02:50 AM, auxsvr@gmail.com wrote:
On Thursday 24 of July 2014 09:29:13 Daniel Bauer wrote:
On Wednesday 23 of July 2014 10:22:06 Daniel Bauer wrote:
> When I copy a file on my server, using drag and drop with two dolphin
> windows I get a transfer speed of approx. 35-40 kB/s.
>
> When I use rsync to copy the same files I get a speed of approx. 85-90
> kB/s.
>
> Where does this difference come from?
This is a known bug, https://bugs.kde.org/show_bug.cgi?id=291835. kio_smb and kio_fish used to be very inefficient, they have gotten better recently. Probably it would be faster if you mount the share with sshfs.
-> 21609 daniel 20 0 351m 27m 9448 S 1.3 0.2 0:02.15 kio_fish
Also, as indicated in my private post, the sftp kio gave equivalent transfer speed (at least the k3/sftp kio). It would be worth testing sftp in dolphin to see if you get performance on par with rsync.
Earlier I used sftp with Konqueror, but then it stopped working in Dolphin one day and I had to use fish. But maybe it works again now... I'll give it a try!

Daniel
On 24.07.2014 13:09, Daniel Bauer wrote:
On 24.07.2014 11:19, David C. Rankin wrote:
On 07/24/2014 02:50 AM, auxsvr@gmail.com wrote:
On Thursday 24 of July 2014 09:29:13 Daniel Bauer wrote:
> On Wednesday 23 of July 2014 10:22:06 Daniel Bauer wrote:
>> When I copy a file on my server, using drag and drop with two dolphin
>> windows I get a transfer speed of approx. 35-40 kB/s.
>>
>> When I use rsync to copy the same files I get a speed of approx. 85-90
>> kB/s.
>>
>> Where does this difference come from?
This is a known bug, https://bugs.kde.org/show_bug.cgi?id=291835. kio_smb and kio_fish used to be very inefficient, they have gotten better recently. Probably it would be faster if you mount the share with sshfs.
-> 21609 daniel 20 0 351m 27m 9448 S 1.3 0.2 0:02.15 kio_fish
Also, as indicated in my private post, the sftp kio gave equivalent transfer speed (at least the k3/sftp kio). It would be worth testing sftp in dolphin to see if you get performance on par with rsync.
Earlier I used sftp with konqueror, but then it stopped working in dolphin one day and I had to use fish.
But maybe it works again, now... I'll give it a try!
You were right, it's the fish :-)

I tried with sftp and the transfer speed is equal to rsync.

One more question: how can I save the password for dolphin-sftp in kwallet? With fish I could check a checkbox to save it, but with sftp there is no such checkbox in the password dialog...

regards
Daniel
On 2014-07-24 at 09:29 +0200, Daniel Bauer wrote:
On 24.07.2014 06:47, Linda Walsh wrote:
The mentioned transfers are over the internet with my slow but expensive Telefonica upload... So that it's slow in general is "normal", but the copy with Dolphin is even slower...
rsync, with the other side using the rsync daemon instead of ssh, might be even faster, especially when partially transmitting a file (like a file that has changed just a bit).

--
Cheers
Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
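(A minimal rsync-daemon sketch of the kind described above; the module name, path and host are placeholders, and a real setup should add "auth users"/"secrets file" and host restrictions:)

  # on the server: /etc/rsyncd.conf
  [photos]
      path = /srv/photos
      read only = false

  # start the standalone daemon (it listens on TCP port 873 by default)
  rsync --daemon

  # on the client: the double-colon / rsync:// syntax talks to the daemon, no ssh involved
  rsync -av --progress bigfile.jpg server.example.com::photos/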
On 2014-07-23 at 21:47 -0700, Linda Walsh wrote:
One might ask why rsync is so slow -- copying 800G from one partition to another via xfsdump/restore takes a bit under 2 hours, or about 170MB/s, but rsync, on the same partition, transferring less than 1/1000th as much (700MB), took ~70-80 minutes... or about 163kB/s.
That's because xfsdump/restore takes shortcuts that rsync cannot take. The former doesn't really do a file-by-file copy. Instead it works with the filesystem metadata directly. It does not need to find the file name, find the location of the files (sectors), and do the actual file read, repeated for every file and directory. And it can only work with both source and destination being XFS.

--
Cheers
Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
Carlos E. R. wrote:
On 2014-07-23 at 21:47 -0700, Linda Walsh wrote:
One might ask why rsync is so slow -- copying 800G from one partition to another via xfsdump/restore takes a bit under 2 hours, or about 170MB/s, but rsync, on the same partition, transferring less than 1/1000th as much (700MB), took ~70-80 minutes... or about 163kB/s.

That's because xfsdump/restore takes shortcuts that rsync cannot take. The former doesn't really do a file-by-file copy. Instead it works with the filesystem metadata directly. It does not need to find the file name, find the location of the files (sectors), and do the actual file read, repeated for every file and directory.
Do you have a reference for this? Because having looked at earlier versions of xfsdump/restore, there IS no metadata section it can dump that is separate from each file.
In XFS the metadata for a file is stored with EACH file. If you are lucky, and it is short enough, it may fit in the inode (the inodes are spread out all over the disk, next to or near their data, to minimize seek times). If it cannot fit in the inode, it's in a separate data fork -- which must be read separately from the inode and from the file data. The file names are spread out all over the disk in "directories"... Perhaps you are thinking of NTFS, where the metadata is all kept in a single area?

xfsdump isn't able to do anything special, and I'm pretty sure it requires no root privileges unless the files being dumped are owned by root. Also, any system attributes need root access to read, but if I rsync a disk I have to run as root anyway, or I get access errors.

In my scenario, I set up xfsdump & restore with large buffers and put "mbuffer" in between them. If you run xfsdump with tiny buffers of 1K or so, you can probably get similar performance out of it. The key difference is buffer size and read/write size... no shortcuts are needed.
On July 24, 2014 11:04:15 PM EDT, Linda Walsh <suse@tlinx.org> wrote:
In my scenario, I setup xfs dump & restore with large buffers and put "mbuffer" in between them.
mbuffer! Very cool. I love that app, but rarely find good use cases for it.

For those who don't know it, mbuffer is often used between apps like tar or cpio and high-speed tape drives. Tape drives in sequential access mode are often far faster than a disk with data scattered all around the place, so a large buffer greatly optimizes the overall process.

Greg
On 07/25/2014 01:52 PM, Greg Freemyer wrote:
mbuffer!
Very cool. I love that app, but rarely find good use cases for it. For those who don't know it, mbuffer is often used between apps like tar or cpio and high-speed tape drives. Tape drives in sequential access mode are often far faster than a disk with data scattered all around the place so a large buffer highly optimizes the overall process.
Nice, specialized tool - even network support and other useful features. ;-)

Out of curiosity: for disks and tapes, is it much different from using

  dd iflag=fullblock bs=..M

or

  dd iflag=fullblock ibs=..M obs=..M

in between the pipes?

Thanks & have a nice day,
Berny
On Fri, Jul 25, 2014 at 12:29 PM, Bernhard Voelker <mail@bernhard-voelker.de> wrote:
On 07/25/2014 01:52 PM, Greg Freemyer wrote:
mbuffer!
Very cool. I love that app, but rarely find good use cases for it.
For those who don't know it, mbuffer is often used between apps like tar or cpio and high-speed tape drives. Tape drives in sequential access mode are often far faster than a disk with data scattered all around the place so a large buffer highly optimizes the overall process.
Nice, specialized tool - even network support and other useful features. ;-)
Out of curiosity: for disks and tapes, is it much different from using dd iflag=fullblock bs=..M or dd iflag=fullblock ibs=..M obs=..M in between the pipes?
Thanks & have a nice day, Berny
For tapes yes, but it has been a decade since I did comparative testing.

Let's say you have a tape that works really well if you send it 1 MB at a time, but you want to send it a continuous stream for at least 30 seconds of continuous tape operation at a time. A fast tape drive may write to media at 60 MB/sec, so that 1 MB block will barely feed the tape for any time at all. You want to send it chunks of data 1.8 GB or bigger.

You can use mbuffer to output the 1 MB blocks, but not to start until it has 1800 blocks in the buffer (1.8 GB of data). Now you get to feed the tape drive the 1 MB blocks it likes to get, and you know that you can send it a continuous stream of those blocks for at least 30 seconds. Also, any data that tar or cpio feeds into mbuffer during that 30-second window will also get sent to the tape drive as part of the continuous stream of 1 MB blocks. When the buffer gets to the low water mark, mbuffer will quit sending 1 MB blocks until the buffer fills back up.

I guess you could attempt the same with dd bs=1.8GB, but it really would not be the same. As an example, assume your data source (tar) can produce data at 1.5 GB per 30 seconds. With mbuffer you would have:

0 - 35 seconds (or so): mbuffer collecting data, no output
35 - 65 seconds: mbuffer spitting out the first 1.8 GB of data at full tape speed
65 - 120 seconds (or so): mbuffer continuing to run at full tape speed, pushing out the data that came from tar during the 35-65 second timeframe
120 - 175 seconds (or so): mbuffer still continuing to send out data, but this is the data it received from 65-120 seconds

That process will continue, so as you can see mbuffer will keep the drive busy for minutes at a time in that scenario, then have a 35-second pause every 5 or 6 minutes.

If you did the same with dd and 1.8 GB blocks, you would have a pause at the end of every 1.8 GB block, because the next block would be only 90% full and not ready to send to the tape. The end result with dd would be the tape running for 30 seconds at a time with a pause after each 30 seconds. That means more wear and tear on the tape drive and the media at a minimum, and I think lower overall tape throughput.

Greg
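(A sketch of that tape scenario as an mbuffer pipeline; the source directory, tape device and sizes are placeholders, and the exact option names should be checked against your mbuffer version:)

  # keep a 2 GiB buffer, write 1 MiB blocks, and only start the tape
  # once the buffer is about 90% full, so the drive can stream for a while
  tar -cf - /data | mbuffer -m 2G -s 1M -P 90 -o /dev/nst0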
On 07/25/2014 09:05 PM, Greg Freemyer wrote:
For tapes yes, but [...]
Thanks for the insight. So the difference is that mbuffer is reading the input and writing the output at the same time (probably in threads), while dd(1) reads and writes alternately up to bs=.., right?

Have a nice day,
Berny
On July 26, 2014 9:36:26 AM EDT, Bernhard Voelker <mail@bernhard-voelker.de> wrote:
On 07/25/2014 09:05 PM, Greg Freemyer wrote:
For tapes yes, but [...]
Thanks for the insight. So the difference is that mbuffer is reading the input and writing the output at the same time (probably in threads), while dd(1) reads and writes alternately up to bs=.., right?
Have a nice day, Berny
It is more that dd does not have the concept of a high and low water mark. If you set the block size to 1.8 GB, then dd treats it as all or nothing, so it is not good at smoothing out a data flow. mbuffer lets you have a 1 MB block size, but a 1.8 GB high watermark before it starts sending.

While sending, both dd and mbuffer will continue reading from the source stream. It is what happens after the 1.8 GB is sent that is different:

dd will say: if the internal buffer > 1.8 GB then send it, else wait for 1.8 GB to accumulate.

mbuffer will say: if the internal buffer > the low water mark, keep sending until it drops below the low water mark.

At least with a tape drive, mbuffer's behaviour will result in fewer starts and stops, which means less wear and tear on the drive and media.

Greg
Greg Freemyer wrote:
Linda Walsh wrote:
In my scenario, I setup xfs dump & restore with large buffers and put "mbuffer" in between them.
mbuffer!
Very cool. I love that app, but rarely find good use cases for it. For those who don't know it, mbuffer is often used between apps like tar or cpio and high-speed tape drives. Tape drives in sequential access mode are often far faster than a disk with data scattered all around the place so a large buffer highly optimizes the overall process.
----
Collecting "files" from wherever on an "old disk" and then copying them to a new, pristine disk has similar performance characteristics.

To *try* to make sure the writes empty faster than the reads, so I don't get any slowdown from "mbuffer" being "full", I adjust priorities to make the writer have higher priority than the reader. The script I use to dump an xfs drive to a target is included as a sample:
cat xfscopy

#!/bin/bash -ue
# $1=source
# $2=target
#
# xfsdump ops:
#  -b = blocksize
#  -l = level (0=all)
#  -J = inhibit inventory update
#  -p = progress report every # seconds
# next to last arg is '-' for stdout/in & out
# last arg is for source or destination mount points

if (($UID)) ; then echo "Must be run as root"; exit 1; fi

mbuffer_size=1024M
xfs_bs=128k
xfs_report_interval=300

# setting restore proc's cpu+disk io "higher" than "dump"'s helps
# prevent filling memory and thrashing
# prios c: 1=real (don't use), 2=best-effort (timeshare), 3=idle
# in best effort, -n=0-7 where 0=highest, 7=lowest, but not strict!
dump_cprio=-19
restore_cprio=-5
dump_dprio="-c3"
restore_dprio="-c 2 -n3"

alias nice=$(type -P /usr/bin/nice)
alias ionice=$(type -P /usr/bin/ionice)

nice $dump_cprio ionice $dump_dprio \
  xfsdump -b $xfs_bs -l 0 -p $xfs_report_interval -J - "$1" | \
  sudo nice -1 mbuffer -m $mbuffer_size -L | \
  nice $restore_cprio ionice $restore_dprio \
  xfsrestore -b $xfs_bs -B -F -J - "$2"
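(Usage sketch, assuming both arguments are mount points of mounted XFS filesystems, as xfsdump and xfsrestore expect; the paths are placeholders:)

  # must be run as root (the script checks $UID)
  ./xfscopy /mnt/olddisk /mnt/newdisk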
participants (8)
- auxsvr@gmail.com
- Bernhard Voelker
- Carlos E. R.
- Daniel Bauer
- David C. Rankin
- Freek de Kruijf
- Greg Freemyer
- Linda Walsh