On Thu, Mar 28, 2013 at 10:41 AM, Carlos E. R. <robin.listas@telefonica.net> wrote:
On Thursday, 2013-03-28 at 16:05 +1100, Basil Chupin wrote:
On 28/03/13 15:50, Carlos E. R. wrote:
OK, OK, OK already! I'll do it! :-)
I'll wait impatiently :-)
I just tried it on my laptop (USB3, NTFS): I got 25-30 MB/sec, and mount.ntfs had about a 75% CPU load on one core. (Total CPU load went up by only about 15%, but on a quad core with hyper-threading that is 8 logical CPUs, so 0.15 * 8 = about 120% of one core as the total increase.) I used dd if=/dev/zero of=test bs=1M count=20000 to create a 20 GB test file.

The same test on my laptop's internal drive gave about 75 MB/sec with roughly a 20% total CPU load (again 8 logical CPUs, so about 160% of one core's load).

So the interesting thing is that introducing mount.ntfs actually lowered the total CPU load of running dd. I assume that is because it introduced delays that slowed the data movement down so much.

To repeat:

  dd -> ext4: 75 MB/sec data transfer and 160% CPU load (spread over multiple cores, no single core overloaded)
  dd -> ntfs: 25-30 MB/sec data transfer and only 120% CPU load (one core running mount.ntfs at 75% load)

I actually find that kind of strange, but it was repeatable. I think it is pretty clear that mount.ntfs is the bottleneck. The open (and unimportant) question is whether it is just poorly written, or whether being based on FUSE makes it inefficient no matter what.

FYI: this is a quad core laptop, but I was running a processor-intensive app with 4 workers at the same time, so I had about 50% utilization of my CPUs as a base load. The 15% and 20% overall increases I mention brought that base load up to 65% and 70% respectively. Since the extra work was mostly landing on the hyper-threading siblings, the results might have been different without the base load running in the background. I don't actually know how that works.

Greg
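
P.S. For anyone who wants to repeat the test, this is roughly the sequence I used; the device name and mount point below are just examples, adjust them for your own setup:

  # mount the USB drive via the FUSE ntfs-3g driver (example device/mount point)
  sudo mount -t ntfs-3g /dev/sdb1 /mnt/usb

  # write a ~20 GB test file; dd prints the transfer rate when it finishes
  cd /mnt/usb
  dd if=/dev/zero of=test bs=1M count=20000

  # (I did not use conv=fdatasync, but adding it would make dd include the
  # final cache flush in its timing, which is more honest for removable media)

  # in a second terminal, watch the per-process CPU use of mount.ntfs
  top -b -d 5 | grep -E 'mount\.ntfs|Cpu'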
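
P.P.S. To separate the USB bus speed from the filesystem overhead, one could also time a raw sequential read straight off the block device (a read rather than a write, since writing to the raw device would destroy the filesystem; again, /dev/sdb is just an example):

  # sequential read from the raw device, bypassing any filesystem
  sudo dd if=/dev/sdb of=/dev/null bs=1M count=2000

If the raw read is much faster than the 25-30 MB/sec through mount.ntfs, that points at the FUSE layer rather than the hardware.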