[opensuse] USB3 throughput
All,

I happened to run a USB3 throughput test recently: it wrote 5 TB in 7 hours. I used a USB3 hub to hook up five 1 TB drives and wrote to them simultaneously. That's about 200 MB/sec.

It's the first time I've apparently saturated USB3. Does anyone know if 200 MB/sec is a good maximum USB3 throughput?

fyi: Per the spec it should be possible to hit 500 MB/sec, but the real world seldom hits the spec limits, so 200 MB/sec may be as good as it gets. If not, I may need to look for a better USB3 hub. (I do a lot of high-bandwidth transfers in my job.)

Greg
--
Greg Freemyer
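(For reference, a quick sanity check of that 200 MB/sec figure, assuming decimal units throughout, i.e. 5 TB = 5,000,000 MB and 7 hours = 25,200 seconds:)

  echo '5 * 10^6 / (7 * 3600)' | bc
  # prints 198, i.e. roughly 200 MB/sec averaged over the whole run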
On 7/2/2014 8:59 AM, Greg Freemyer wrote:
All,
I happened to run a USB3 throughput test recently and it wrote 5 TB in 7 hours. I used a USB3 hub to let me hook up 5 1 TB drives and wrote to them simultaneously.
That's about 200 MB / sec.
It's the first time I've ever apparently saturated USB3. Does anyone know if 200MB / sec is a good max USB3 throughput?
fyi: Per the spec, it should be possible to hit 500MB/sec., but real world seldom hits the spec limits so 200MB/sec may be as good as it gets. If not, I may need to look for a better USB3 hub. (I do a lot of high bandwidth transfers in my job.)
Greg -- Greg Freemyer
Using what method did you write to 5 drives simultaneously?

Best I know, you can only send data to ONE USB device at a time because of the addressing scheme, so sending to 5 drives requires 5 send operations, each with a different address. Multicasting exists for control operations ("everybody wake up") but not for transfers.

Also, there are many different data transfer modes, and if your software supports synchronous mode you get much better speed than with polled ("quick removal") mode.
On Wed, Jul 2, 2014 at 12:42 PM, John Andersen <jsamyth@gmail.com> wrote:
On 7/2/2014 8:59 AM, Greg Freemyer wrote:
All,
I happened to run a USB3 throughput test recently and it wrote 5 TB in 7 hours. I used a USB3 hub to let me hook up 5 1 TB drives and wrote to them simultaneously.
That's about 200 MB / sec.
It's the first time I've ever apparently saturated USB3. Does anyone know if 200MB / sec is a good max USB3 throughput?
fyi: Per the spec, it should be possible to hit 500MB/sec., but real world seldom hits the spec limits so 200MB/sec may be as good as it gets. If not, I may need to look for a better USB3 hub. (I do a lot of high bandwidth transfers in my job.)
Greg -- Greg Freemyer
Using what method did you write to 5 drives simultaneously?
Something similar to "dd if=/dev/zero of=/dev/sdb" for sdb, sdc, sdd, sde, and sdf individually. I actually used "dc3dd wipe=/dev/sdb" etc. It is more efficient than reading from /dev/zero. It was not a broadcast mechanism.
Best I know, is that you can send data to ONE usb device at a time based on the addressing scheme, and sending to 5 required 5 send operations each with a different address.
I did not mean the same level of simultaneity that you do. I meant the data packets were interleaved. Think:

  dc3dd wipe=/dev/sdb &
  dc3dd wipe=/dev/sdc &
  dc3dd wipe=/dev/sdd &
  dc3dd wipe=/dev/sde &
  dc3dd wipe=/dev/sdf

I actually used 5 consoles and ran one of the above in each console.
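(A single-script equivalent of those five consoles, as a sketch -- same device names and the same dc3dd wipe= syntax as above:)

  for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
      dc3dd wipe=$dev &   # start one wipe per drive, all running in the background
  done
  wait                    # return only once all five wipes have finished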
Multicasting exists for control operations (everybody wake up) but not transfers.
Also, there are many different data transfer modes, and if your software supported synchronous mode you get much better speed than with polled (quick removal) mode.
Does the Linux block device stack have a way to support that? I typically use only a handful of tools to bang on these drives, so I could conceivably submit patches to the user-space apps to support that if it's a logical thing to do.

fyi: I'm hashing (similar to md5sum) 500 GB of data on one drive currently. I'm getting 108 MB/sec across USB3 from a single USB3 drive. I'm very happy with that speed. But when I first copied the data between 2 drives I was only getting about 60 MB/sec throughput (120 MB/sec combined reads and writes). It would be great if I could get a faster transfer than 60 MB/sec.

Greg
On 07/02/2014 12:58 PM, Greg Freemyer wrote:
On Wed, Jul 2, 2014 at 12:42 PM, John Andersen <jsamyth@gmail.com> wrote:
On 7/2/2014 8:59 AM, Greg Freemyer wrote:
All,
I happened to run a USB3 throughput test recently and it wrote 5 TB in 7 hours. I used a USB3 hub to let me hook up 5 1 TB drives and wrote to them simultaneously.
That's about 200 MB / sec.
It's the first time I've ever apparently saturated USB3. Does anyone know if 200MB / sec is a good max USB3 throughput?
fyi: Per the spec, it should be possible to hit 500MB/sec., but real world seldom hits the spec limits so 200MB/sec may be as good as it gets. If not, I may need to look for a better USB3 hub. (I do a lot of high bandwidth transfers in my job.)
Greg -- Greg Freemyer
Using what method did you write to 5 drives simultaneously?
Something similar to "dd if=/dev/zero of=/dev/sdb" for sdb, sdc, sdd, sde, and sdf individually.
I actually used "dc3dd wipe=/dev/sdb" etc. It is more efficient than reading from /dev/zero.
It was not a broadcast mechanism.
Best I know, is that you can send data to ONE usb device at a time based on the addressing scheme, and sending to 5 required 5 send operations each with a different address.
I did not mean the same level of simultaneous that you are. I meant the data packets were interleaved.
Which is about the worst case you can think of.
Think:
dc3dd wipe=/dev/sdb & dc3dd wipe=/dev/sdc & dc3dd wipe=/dev/sdd & dc3dd wipe=/dev/sde & dc3dd wipe=/dev/sdf
I actually used 5 consoles and ran one each of the above in each console.
So you get a small burst of transfer, then switch to the next. See my other posting about this.
fyi: I'm hashing (similar to md5sum) 500 GB of data on one drive currently. I'm getting 108 MB across USB3 from a single USB3 drive. I'm very happy with that speed.
That depends on how you do it. There's a difference between dd bs=512 and dd bs=10K.
But when I first copied the data between 2 drives I was only getting about 60 MB /sec throughput (120 MB combined reads and writes).
Suppose you use a 'copy' program that shovels bytes using fread()-style buffering: you have a 512-byte read buffer and a 512-byte write buffer, and the core of your code does putchar(getchar()), that is, one byte at a time. And don't forget those 512-byte buffers.

Then there's using 'dd' with big buffers between files. Oh, right, files, which means file system overhead, and quite possibly the allocation of new file segments and putting those references in the file map.

Some file systems are faster than others. See
http://www.phoronix.com/scan.php?page=article&item=linux_311_filesystems
as an example of different file systems under different loads and conditions. You may want to look specifically at
http://www.phoronix.com/scan.php?page=article&item=usb20_usb30_flash&num=1
You might also look at
http://www.phoronix.com/scan.php?page=article&item=linux_iosched_2012&num=1

There are a lot of things you can tune under /proc/
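(A sketch of a few of those knobs -- some of them now live under /sys rather than /proc -- assuming /dev/sdb is one of the USB drives; which values are worth changing depends entirely on the workload:)

  cat /sys/block/sdb/queue/scheduler        # which I/O scheduler the drive is using
  blockdev --getra /dev/sdb                 # current readahead, in 512-byte sectors
  blockdev --setra 8192 /dev/sdb            # raise readahead to 4 MB for big sequential reads
  cat /proc/sys/vm/dirty_ratio              # how much dirty data the kernel will buffer
  cat /proc/sys/vm/dirty_background_ratio   # when background writeback kicks in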
It would be great if I could get a faster transfer than 60 MB/sec.
Compared to the Phoronix flash-drive numbers you seem to be doing well.
On July 2, 2014 2:01:08 PM EDT, Anton Aylward <opensuse@antonaylward.com> wrote:
On 07/02/2014 12:58 PM, Greg Freemyer wrote:
On Wed, Jul 2, 2014 at 12:42 PM, John Andersen <jsamyth@gmail.com> wrote:
On 7/2/2014 8:59 AM, Greg Freemyer wrote:
All,
I happened to run a USB3 throughput test recently and it wrote 5 TB in 7 hours. I used a USB3 hub to let me hook up 5 1 TB drives and wrote to them simultaneously.
That's about 200 MB / sec.
It's the first time I've ever apparently saturated USB3. Does anyone know if 200MB / sec is a good max USB3 throughput?
fyi: Per the spec, it should be possible to hit 500MB/sec., but real world seldom hits the spec limits so 200MB/sec may be as good as it gets. If not, I may need to look for a better USB3 hub. (I do a lot of high bandwidth transfers in my job.)
Greg -- Greg Freemyer
Using what method did you write to 5 drives simultaneously?
Something similar to "dd if=/dev/zero of=/dev/sdb" for sdb, sdc, sdd, sde, and sdf individually.
I actually used "dc3dd wipe=/dev/sdb" etc. It is more efficient than reading from /dev/zero.
It was not a broadcast mechanism.
Best I know, is that you can send data to ONE usb device at a time based on the addressing scheme, and sending to 5 required 5 send operations each with a different address.
I did not mean the same level of simultaneous that you are. I meant the data packets were interleaved.
Which is about the worst case you can think of.
Rotating rust can typically only do 100 to 120 MB/sec with a single spindle. USB3 has a theoretical maximum of 500 MB/sec. Adding concurrent wipes increased the overall throughput from what I could tell.

My questions are:

- Is 200 MB/sec a reasonable absolute max? I.e., with 5 drives each drive was only getting 40 MB/sec, which is not all that impressive, but with only 2 concurrent wipes the throughput was lower.

- For data transfer from one drive to another, is 60 MB/sec the best I can expect to do? It would be great to get that up to 100 MB/sec.

Greg
On 07/02/2014 11:14 PM, Greg Freemyer wrote:
On July 2, 2014 2:01:08 PM EDT, Anton Aylward
Who subscribes to the list so there is no point in cc'ing him on your replies....
<opensuse@antonaylward.com> wrote:
On 07/02/2014 12:58 PM, Greg Freemyer wrote:
Best I know, is that you can send data to ONE usb device at a time based on the addressing scheme, and sending to 5 required 5 send operations each with a different address.
I did not mean the same level of simultaneous that you are. I meant the data packets were interleaved.
Which is about the worst case you can think of.
Rotating rust can typically only do 100 to 120 MB/sec. with a single spindle.
That's data transfer. Don't forget 'seek'. On a file system you are also moving the head from where the structural data is to where the actual data is. That was a killer on the old V7 FS and one reason the Berkeley Fast File System of the early 1980s was such an improvement! We've come a long way since then, but we still have separate structural and actual data. The B-tree based file systems such as reiser, xfs, btrfs and to some extent ext4 try to get around this by putting the structural part and the data part into the same btree structures. But that just adds another layer ...
USB3 has a theoretical maximum of 500 MB/sec.
The top limit is beside the point; it's the interleaving of operations that is killing it. That, and buffer size. The theoretical maximum vendors quote assumes a dedicated, streaming transfer:

* no switching back and forth between source and destination devices
* no file system overhead in the kernel mapping
* no reading and writing of structural data
* no writing of journal data
* no writing of single file system blocks at a time
* no copy down by the OS from the FS read buffer to the application input buffer
* no copying from the application input buffer to the application output buffer
* no copy up by the OS from the application output buffer to the FS write buffer
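(One way to approximate that dedicated, streaming case on the drive side is a large-block raw read that bypasses the page cache and the file system entirely -- a sketch, assuming /dev/sdb is one of the USB drives and GNU dd:)

  dd if=/dev/sdb of=/dev/null bs=4M count=1024 iflag=direct
  # 4 GB of sequential raw reads; dd prints the achieved MB/sec when it finishes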
On July 2, 2014 2:01:08 PM EDT, Anton Aylward <opensuse@antonaylward.com> wrote:
On 07/02/2014 12:58 PM, Greg Freemyer wrote: <snip>
But when I first copied the data between 2 drives I was only getting about 60 MB /sec throughput (120 MB combined reads and writes).
Suppose you use a 'copy' program that shovels bytes using fread()-style buffering. You have a 512-byte read buffer and a 512-byte write buffer. The core of your code does putchar(getchar()), that is, one at a time. And don't forget those 512-byte buffers.
Then there's using 'dd' with big buffers between files. Oh, right, files, which means file system overhead, and quite possibly the allocation of new file segments and putting those references in the file map.
For my main use case I'm using ewfacquire as the user-space app. It can compress the data stream, but I find the optimum clock time for the whole copy is with no compression. Ewfacquire works more like dd with tunable block sizes. I was testing with 32 KB blocks as I recall (I use the default transfer size). I will try even bigger blocks, but my testing with SATA drives and dd showed 4 KB blocks were only slightly slower than 1 MB blocks.
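(A quick, non-destructive way to see how much block size matters on one of these drives: read the same 1 GiB at several block sizes with direct I/O and compare the rates dd reports. A sketch, assuming /dev/sdb and GNU dd; ewfacquire's own chunk size is a separate knob:)

  dd if=/dev/sdb of=/dev/null bs=4K  count=262144 iflag=direct
  dd if=/dev/sdb of=/dev/null bs=64K count=16384  iflag=direct
  dd if=/dev/sdb of=/dev/null bs=1M  count=1024   iflag=direct
  dd if=/dev/sdb of=/dev/null bs=4M  count=256    iflag=direct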
Some file systems are faster than others.
I hate to admit that for the data transfer scenario I'm writing to NTFS-formatted drives, so ntfs-3g is getting a major workout. I will test with ext4, btrfs and xfs just to see if the filesystem is a major bottleneck.
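(A sketch of one way to run that comparison. It is destructive to the test partition, so the device and mount point here -- /dev/sdc1 and /mnt/fstest -- are placeholders, not anything from this thread:)

  for fs in ext4 xfs btrfs; do
      wipefs -a /dev/sdc1                 # clear the previous filesystem signature
      mkfs.$fs /dev/sdc1
      mount /dev/sdc1 /mnt/fstest
      dd if=/dev/zero of=/mnt/fstest/big.img bs=1M count=8192 conv=fsync   # 8 GB write, flushed
      umount /mnt/fstest
  done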
See http://www.phoronix.com/scan.php?page=article&item=linux_311_filesystems
as an example of different file systems under different loads and conditions.
You may want to look specifically at
http://www.phoronix.com/scan.php?page=article&item=usb20_usb30_flash&num=1
You might also look at
http://www.phoronix.com/scan.php?page=article&item=linux_iosched_2012&num=1
There are a lot of things you can tune under /proc/
It would be great if I could get a faster transfer than 60 MB/sec.
Compared to the phoronix to flash drive you seem to be doing well.
No time to review tonight, but I will. I had to ewfacquire about 50 drives a couple of weeks ago, and the throughput really was a bottleneck in getting through the project. Trouble is, I had to ship the data (forensic images) to my client, and NTFS or FAT32 are the only 2 realistic filesystems for the delivery.

Thanks
Greg
On 07/02/2014 11:45 PM, Greg Freemyer wrote:
On July 2, 2014 2:01:08 PM EDT, Anton Aylward <opensuse@antonaylward.com> wrote:
On 07/02/2014 12:58 PM, Greg Freemyer wrote: <snip>
But when I first copied the data between 2 drives I was only getting about 60 MB /sec throughput (120 MB combined reads and writes).
Suppose you use a 'copy' program that shovels bytes using fread()-style buffering. You have a 512-byte read buffer and a 512-byte write buffer. The core of your code does putchar(getchar()), that is, one at a time. And don't forget those 512-byte buffers.
Then there's using 'dd' with big buffers between files. Oh, right, files, which means file system overhead, and quite possibly the allocation of new file segments and putting those references in the file map.
For my main use case I'm using ewfacquire as the user space app. It can compress the data stream, but I find the optimum clock time for the whole copy is with no compression.
So, compression is slow. That may be the algorithm or your machine. Metricating that is another issue!
Ewfacquire works more like dd with tunable block sizes. I was testing with 32 KB blocks as I recall (I use the default transfer size).
You should metricate with that as well when you get the FS issue sorted out.
I will try even bigger blocks, but my testing with sata drives and dd showed 4KB blocks were only slightly slower than 1MB blocks.
If your file system is 4K based and you are not writing to some preallocated file then I'd expect to see something like that.
Some file systems are faster than others.
I hate to admit that for the data transfer scenario I'm writing to ntfs formatted drives, so ntfs-3g is getting a major workout.
ROTFLMAO!
I will test with ext4, btrfs and xfs just to see if the filesystem is a major bottleneck.
All those are tunable as to the underlying block size. They also have journals and a journal block size, which can affect performance. Some people have got good results putting the journal for a slower rotating drive on a small SSD, back in the days when SSDs were small :-) Still, it says something about journalling. Oh, and there are also hashing algorithms to consider.

Sometimes my btrfs freezes. Well, actually the table algorithm does a rebuild and the load average goes up to around ten, though as high as 15-18 isn't uncommon and I saw 27 once. At least that's what I think is happening, judging from the unresponsive 'ps' and 'top' and 'iotop'.
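(For what it's worth, ext4 can still put its journal on a separate device. A sketch with placeholder partitions -- /dev/ssd1 for a small SSD partition, /dev/sdc1 for the rotating drive:)

  mke2fs -O journal_dev /dev/ssd1            # format the SSD partition as an external journal
  mkfs.ext4 -J device=/dev/ssd1 /dev/sdc1    # create the filesystem with its journal on the SSD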
I had to ewfacquire about 50 drives a couple weeks ago, and the throughput really was a bottleneck in me getting through the project.
Trouble is I had to ship the data (forensic images) to my client and ntfs or fat32 are the only 2 realistic filesystems for the delivery.
Looks like you have a set of handcuffs there.

Sideline: I bought a 64G microSD for my tablet and a full-size SD carrier so I could plug it into my desktop to copy books and music. Only my desktop refused to see it. It turns out large cards use exFAT. Linux doesn't have an in-kernel driver for that, but I found a FUSE one. You might consider whether your clients can use something like that.
On 07/02/2014 12:42 PM, John Andersen wrote:
Multicasting exists for control operations (everybody wake up) but not transfers.
HA! So, little has changed since the PDP-11 days. The disk controllers then could manage 8 'spindles', that is, do seeks on all 8 simultaneously ready for a transfer, but they had only one transfer channel. You could, and I did, write a striping controller that put a 4K logical file block onto 8 512-byte sectors, but the reality was that it was slower than streaming a 4K block onto a single platter.
Greg Freemyer wrote:
All,
I happened to run a USB3 throughput test recently and it wrote 5 TB in 7 hours. I used a USB3 hub to let me hook up 5 1 TB drives and wrote to them simultaneously.
That's about 200 MB / sec.
It's the first time I've ever apparently saturated USB3. Does anyone know if 200MB / sec is a good max USB3 throughput?
fyi: Per the spec, it should be possible to hit 500MB/sec., but real world seldom hits the spec limits so 200MB/sec may be as good as it gets. If not, I may need to look for a better USB3 hub. (I do a lot of high bandwidth transfers in my job.)
Max rate is basically how fast the serial lines can push/receive data between the wire and the host or device. You might be able to sustain maximum speeds when using something like a static memory device, but probably not even with flash memory (in either a stick or an SSD device), because flashing new data is comparatively slow. And spinning platters with read/write heads on seeker arms have no chance of keeping up that kind of speed.
Greg -- Greg Freemyer
On Thu, Jul 3, 2014 at 12:51 AM, Dirk Gently <dirk.gently00@gmail.com> wrote:
Greg Freemyer wrote:
All,
I happened to run a USB3 throughput test recently and it wrote 5 TB in 7 hours. I used a USB3 hub to let me hook up 5 1 TB drives and wrote to them simultaneously.
That's about 200 MB / sec.
It's the first time I've ever apparently saturated USB3. Does anyone know if 200MB / sec is a good max USB3 throughput?
fyi: Per the spec, it should be possible to hit 500MB/sec., but real world seldom hits the spec limits so 200MB/sec may be as good as it gets. If not, I may need to look for a better USB3 hub. (I do a lot of high bandwidth transfers in my job.)
Max rate is basically how fast the serial lines can push/receive data between the wire and the host or device.
You might be able to sustain maximum speeds when using something like a static memory device, but probably not even with flash memory (in either a stick or an SSD device), because flashing new data is comparatively slow.
And spinning platters with read/write heads on seeker arms have no chance of keeping up that kind of speed.
The above reply assumes the bottleneck is the disk drive. I agree that is likely the case when a single disk is in use via USB-3.

My point is that when I go from one disk (108 MB/sec) to 2 disks (120 MB/sec) I am not seeing a linear increase in bandwidth, or even close to it. Further, when I go to 5 disks, I seem to max out the USB-3 connection at 200 MB/sec. That is disappointingly low to me, but it may be as good as it gets.

If I could get faster throughput by getting a better USB-3 hub, I'd like to know, as I'm working with a laptop.

Greg
On 7/3/2014 1:57 PM, Greg Freemyer wrote:
On Thu, Jul 3, 2014 at 12:51 AM, Dirk Gently <dirk.gently00@gmail.com> wrote:
Greg Freemyer wrote:
All,
I happened to run a USB3 throughput test recently and it wrote 5 TB in 7 hours. I used a USB3 hub to let me hook up 5 1 TB drives and wrote to them simultaneously.
That's about 200 MB / sec.
It's the first time I've ever apparently saturated USB3. Does anyone know if 200MB / sec is a good max USB3 throughput?
fyi: Per the spec, it should be possible to hit 500MB/sec., but real world seldom hits the spec limits so 200MB/sec may be as good as it gets. If not, I may need to look for a better USB3 hub. (I do a lot of high bandwidth transfers in my job.)
Max rate is basically how fast the serial lines can push/receive data between the wire and the host or device.
You might be able to sustain maximum speeds when using something like a static memory device, but probably not even with flash memory (in either a stick or an SSD device), because flashing new data is comparatively slow.
And spinning platters with read/write heads on seeker arms have no chance of keeping up that kind of speed.
The above reply assumes the bottleneck is the disk drive. I agree that is likely the case when a single disk is in use via USB-3. My statement is that when I go from one disk (108 MB / sec) to 2 disks (120 MB/sec) I am not seeing a linear increase in bandwidth or even close to it.
Further, when I go to 5 disks, I seem to max out the USB-3 connection at 200MB/sec. That is disappointingly low to me, but it may be as good as it gets.
If I could get faster throughput by getting a better USB-3 hub, I'd like to know. As I'm working with a laptop Greg
But as explained previously, unless you are doing some magical "send once + receive on all 5" (which you said you were not doing), you are really only dealing with one disk at a time on either end of the wire.

One receiving disk maxes out at 108. With two receiving disks, the second drive takes advantage of the first being maxed out (not ready to receive) and it (#2) receives while #1 is busy. They take turns. By the time you have 5 targets, you've probably maxed out your sending disk and the wire is waiting for data.
On Thu, Jul 3, 2014 at 5:25 PM, John Andersen <jsamyth@gmail.com> wrote:
On 7/3/2014 1:57 PM, Greg Freemyer wrote:
On Thu, Jul 3, 2014 at 12:51 AM, Dirk Gently <dirk.gently00@gmail.com> wrote:
Greg Freemyer wrote:
All,
I happened to run a USB3 throughput test recently and it wrote 5 TB in 7 hours. I used a USB3 hub to let me hook up 5 1 TB drives and wrote to them simultaneously.
That's about 200 MB / sec.
It's the first time I've ever apparently saturated USB3. Does anyone know if 200MB / sec is a good max USB3 throughput?
fyi: Per the spec, it should be possible to hit 500MB/sec., but real world seldom hits the spec limits so 200MB/sec may be as good as it gets. If not, I may need to look for a better USB3 hub. (I do a lot of high bandwidth transfers in my job.)
Max rate is basically how fast the serial lines can push/receive data between the wire and the host or device.
You might be able to sustain maximum speeds when using something like a static memory device, but probably not even with flash memory (in either a stick or an SSD device), because flashing new data is comparatively slow.
And spinning platters with read/write heads on seeker arms have no chance of keeping up that kind of speed.
The above reply assumes the bottleneck is the disk drive. I agree that is likely the case when a single disk is in use via USB-3. My statement is that when I go from one disk (108 MB / sec) to 2 disks (120 MB/sec) I am not seeing a linear increase in bandwidth or even close to it.
Further, when I go to 5 disks, I seem to max out the USB-3 connection at 200MB/sec. That is disappointingly low to me, but it may be as good as it gets.
If I could get faster throughput by getting a better USB-3 hub, I'd like to know. As I'm working with a laptop Greg
But as explained previously, unless you are doing some magical "send once + receive on all 5" (which you said you were not doing), you are really only dealing with one disk at a time on either end of the wire.
The disks have write caches that can accept data at the full USB spec speed (or so I assume). That means if I am sending data to a USB-3 drive at 100 MB/sec, then the USB-3 bus is only at 20% of saturation.

A typical disk cache holds multiple tracks of data, so for a fully utilized disk I/O pattern the disk spends most of its time telling the host that its cache is full and it can't accept any more data. Further, disks' internal caches tend to be structured to align with tracks, so when wiping a disk as I described with "dc3dd wipe=/dev/sdb", the disk will initially accept data at full speed, but then the cache will fill up. When the relatively slow spinning platter has made a full rotation (about 11.5 ms per rotation for a 5200 rpm drive, i.e. 60 s / 5200) and written out a track of cache, it frees up that cache line and lets the host send another chunk of data. But then it immediately says: my cache is full, hold off on sending any more. After another full rotation of the platter, it frees up another track of cache and lets the host send another chunk.

I don't know the details of how read-ahead caching works with respect to the low-level disk reads, but I see 110 MB/sec of read speed using "dd if=/dev/sdb of=/dev/null". I suspect the drive typically reads a full track of data into the cache, then just returns the requested data out of that to the host. That would make a lot of sense and be very easy to implement.
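(One way to check the write-cache assumption directly, with the caveat that hdparm speaks ATA and many USB-SATA bridges do not pass those commands through, so this may simply fail on an external enclosure:)

  hdparm -W /dev/sdb     # with no value given, -W just reports: write-caching = 1 (on) or 0 (off)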
One receiving disk maxes out at 108 Two receiving disks: Second drive takes advantage of first being maxed (not ready to receive) and it (#2) receives while #1 is busy. They take turns
Agreed, but why don't I get better USB-3 bus utilization? A single drive gets about 20% bus utilization; why don't I get 40% with 2 drives?
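(One way to watch how the bandwidth actually splits across the drives while several wipes are running -- a sketch assuming the sysstat package's iostat is installed:)

  iostat -dm sdb sdc sdd sde sdf 2
  # -d: per-device report only, -m: MB/sec, updated every 2 seconds for the five drives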
By the time you have 5 targets, you've probably maxed out your sending disk and the wire is waiting for data.
I'm not sure what you are saying. With "dc3dd wipe=/dev/sdb" all of the zeros come straight from the CPU, which can generate way more than 200 MB/sec of zeros. The bottleneck is not the data source; it has to be the USB-3 bus.

Greg
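(A crude check that the zero-generating side really isn't the limit -- this just measures how fast the kernel can shovel zero pages from /dev/zero to /dev/null, which is typically several GB/sec:)

  dd if=/dev/zero of=/dev/null bs=1M count=16384
  # pushes 16 GB of zeros through memory; the rate dd reports is the ceiling on the data source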
On 07/03/2014 05:46 PM, Greg Freemyer wrote:
I don't know the details of how read-ahead caching works with respect to the low level disk reads, but I see 110 MB/sec of read speed using "dd if=/dev/sdb of=/dev/null"
Are you serious? Only 512 bytes at a time? Why not suck in a cache full, or a half cache, or a quarter cache, or whatever size matches the way the disk is writing (probably 4K for a file system)?
On 07/03/2014 05:46 PM, Greg Freemyer wrote:
One receiving disk maxes out at 108 Two receiving disks: Second drive takes advantage of first being maxed (not ready to receive) and it (#2) receives while #1 is busy. They take turns Agreed. but why don't I get better USB-3 bus utilization. A single drive does gets about 20% bus utilization. Why don't I get 40% with 2 drives?
By the time you have 5 targets, you've probably maxed out your sending disk and the wire is waiting for data. I'm not sure what you are saying. With "dc3dd wipe=/dev/sdb" all of the zeros come straight from the CPU. It can generate way more than 200MB/sec of zeros. The bottleneck is not the data source. It has to be the USB-3 bus.
Then the logical test is to attach some kind of null sink to the hub, something that acts like /dev/null, and see how fast you can write to it without worrying about cache and device bandwidth.
On 07/03/2014 05:46 PM, Greg Freemyer wrote:
One receiving disk maxes out at 108 Two receiving disks: Second drive takes advantage of first being maxed (not ready to receive) and it (#2) receives while #1 is busy. They take turns Agreed. but why don't I get better USB-3 bus utilization. A single drive does gets about 20% bus utilization. Why don't I get 40% with 2 drives?
By the time you have 5 targets, you've probably maxed out your sending disk and the wire is waiting for data. I'm not sure what you are saying. With "dc3dd wipe=/dev/sdb" all of the zeros come straight from the CPU. It can generate way more than 200MB/sec of zeros. The bottleneck is not the data source. It has to be the USB-3 bus.
There's a 'maybe' there. I think I mentioned that the old PDP-11 DMA disk controller -- I forget the model # -- could support 8 drives but had only one data channel, though it could do overlapped seeks. There was a parallelism there that could be used by an astute programmer.

I don't hear you telling us that you have low-level control over the hub. I suspect that after you tell it to write to #1 and it returns busy, so you now direct to #2, there is a negotiation overhead to switch to #2 and find that #2 is not busy. I'm not sure how much of this goes on in the Linux lower levels of the /dev/sdb driver and how much goes on in the hub; possibly both, with some handshaking. But the issue is that this is too low level, unlike the PDP device where it is all exposed. I don't have any idea how you could verify or metricate this.
participants (4): Anton Aylward, Dirk Gently, Greg Freemyer, John Andersen