On Tue, Mar 26, 2013 at 11:18 AM, Per Jessen <per@computer.org> wrote:
Greg Freemyer wrote:
On Tue, Mar 26, 2013 at 9:53 AM, Martin Helm <martin@null-a.de> wrote:
I doubt a little bit that the user space part really affects the performance; at least I cannot see any valid reason from a programmer's point of view.
I don't know if you do any kernel-level programming, but the kernel/userspace boundary is a common bottleneck concern for kernel devs.
There was a paper published in the Communications of the ACM:
http://dl.acm.org/citation.cfm?id=1774130 http://www.csl.sri.com/users/gehani/papers/SAC-2010.FUSE.pdf
Per, I really appreciate the link to that PDF.

Figure 3B of the pdf makes it pretty clear that fuse "can" be a bottleneck. In their test setup, a native filesystem with 50-100MB files writes at about 850 MB/sec; a fuse filesystem writes at about 350 MB/sec.

But even 350 MB/sec is well above what Basil is seeing, so the bottleneck should not be in fuse directly. That leaves Martin's theory that it is ntfs-3g itself which is not optimally written.

fyi: per the paper, they used a standard single disk as their storage media, so any speeds above 115 MB/sec are due to disk caches. That's why the relatively small 50-100MB files achieve such apparently incredible write speeds. With bigger files, or with the caches cleared between runs, they see much less difference between native and fuse.

Basically, for real-world situations the disk is the bottleneck, not fuse. On the other hand, for a high-performance disk setup that can achieve 500 MB/sec or more, fuse would become a major bottleneck.

Greg
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
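To make the cache effect discussed above concrete, here is a back-of-envelope model: bytes that fit in the page cache complete at roughly RAM speed, while the rest are limited by the disk. The 115 MB/sec disk figure comes from the paper as quoted in the thread; the 2000 MB/sec cache bandwidth and 1 GB cache size are illustrative assumptions, not measurements.

```python
def apparent_write_speed(file_mb, cache_mb, cache_bw=2000.0, disk_bw=115.0):
    """Rough model of apparent write throughput in MB/sec.

    Bytes absorbed by the page cache complete at cache_bw; bytes
    beyond the cache are limited by disk_bw. cache_bw and cache_mb
    are assumed values for illustration only.
    """
    cached = min(file_mb, cache_mb)       # portion absorbed by the cache
    spilled = file_mb - cached            # portion that must hit the disk
    elapsed = cached / cache_bw + spilled / disk_bw
    return file_mb / elapsed

# A 100 MB file fits entirely in a 1 GB cache: apparent speed is
# the cache bandwidth, far above what the disk can sustain.
print(round(apparent_write_speed(100, 1024)))    # -> 2000

# A 10 GB file overwhelms the cache: apparent speed falls back
# toward the disk's ~115 MB/sec sustained rate.
print(round(apparent_write_speed(10240, 1024)))  # -> 127
```

Once the file is much larger than the cache, native and fuse both converge on the disk's sustained rate, which is why the paper's big-file numbers show so little difference.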