Greg Freemyer wrote:
On Tue, Dec 6, 2016 at 2:46 AM, Per Jessen wrote:
Greg Freemyer wrote:
Got my Drobo today. So far I'm not impressed.
Pros:
I plugged in 8 1TB drives and it saw them and let me thin provision a 16TB volume.
That sounds like a major pro (or con) - 16TB with only 8TB of space :-) I guess it automagically dealt with RAID levels and such?
Thin provisioning works like that, and yes as I think about it, it is a major pro.
Okay, I thought it was a typo - with LVM, thin provisioning just means the space isn't all allocated up front, but I doubt you can allocate a volume bigger than what is available in the PV. Well, it has never occurred to me to try it :-) I usually allocate what I need, then extend later when needed.
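For comparison, LVM can in fact over-provision a thin volume beyond the pool backing it - a minimal sketch, assuming root, the lvm2 tools, and an illustrative 8TB PV on /dev/sdb (device and names are made up):

```shell
# Over-provisioning with LVM thin volumes (illustrative device names).
pvcreate /dev/sdb
vgcreate vg_data /dev/sdb                        # say, an 8TB PV
# Turn all the free space into a thin pool:
lvcreate --type thin-pool -l 100%FREE -n pool0 vg_data
# A thin LV may be declared larger than the pool behind it:
lvcreate --thin -V 16T -n vol0 vg_data/pool0
mkfs.xfs /dev/vg_data/vol0
```

Blocks are only allocated from the pool as they are written, so the 16T volume works until real usage approaches the pool's 8TB.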
The user manual does recommend that this particular Drobo provide iSCSI volumes to only one host. It's not a mandate, but that's how the best performance is achieved.
Normally the main advantage of SCSI is the concurrency, but your Drobo is hardly a full-blooded SAN box. Maybe it's only got one processor or all the drives are on a single SATA bus. Mind you, with the usage pattern you have in mind, performance isn't critical, right?
Newer / higher-performing Drobos can support multiple hosts, and the FS versions include a true file-serving capability.
As in NFS? Yes, these days on GigE and with NFSv4 there's little gained with iSCSI over NFS. You can boot from iSCSI, which is cool, but NFS is quite straight forward. Setting up iSCSI with authentication, redundancy and multi-pathing can be a little daunting.
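To illustrate the "a little daunting" part, here is what just the CHAP-authentication piece of an iSCSI client login looks like with open-iscsi - a sketch only, with a made-up target IQN, portal address, and credentials:

```shell
# Client-side iSCSI login with CHAP (open-iscsi; all names illustrative).
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2005-06.com.example:drobo \
    -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2005-06.com.example:drobo \
    -o update -n node.session.auth.username -v initiatoruser
iscsiadm -m node -T iqn.2005-06.com.example:drobo \
    -o update -n node.session.auth.password -v secret
iscsiadm -m node -T iqn.2005-06.com.example:drobo -p 192.168.1.50 -l
```

Redundancy and multipathing add dm-multipath configuration on top of this, whereas an NFSv4 mount is a single line in /etc/fstab.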
I pulled out one of the 1TB drives and popped in my 10TB drive. It recognized it and rebuilt my volume. The rebuild only took a minute or two, but I only have 6GB of data on the volume and it is thin provisioned, so not much data to move around.
Resync'ing 16TB on plain ol' spinning SATA drives would take a while longer. If the time-to-resync on the Drobo varies with the amount of data, the logic is at a different level. Interesting, I think.
I think that is part of the thin provisioning. As a volume fills, more data blocks are provisioned from the free pool. When a drive fails, only the provisioned blocks have to be rearranged.
I can see a major advantage in reduced recovery times when only data-in-use has to be resynched. Of course, when a drive is 80% full, that is still only 20% saved, but even so.
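A quick back-of-envelope of that saving, using assumed round figures (1TB member drive, 80% provisioned, 100 MB/s sequential rebuild rate - none of these come from the Drobo itself):

```shell
# Illustrative only: full-disk resync vs data-aware rebuild.
# Assumed: 1TB drive = 1,000,000 MB, 80% provisioned, 100 MB/s rebuild.
full_min=$(( 1000000 / 100 / 60 ))   # resync every block on the drive
used_min=$(( 800000 / 100 / 60 ))    # resync only provisioned blocks
echo "full resync: ${full_min}min, data-aware rebuild: ${used_min}min"
```

At 80% full the window only shrinks by a fifth, but on a nearly empty thin volume the rebuild is close to instant, which matches the minute-or-two rebuild seen with 6GB of data.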
I put 1.5TB of data on the unit overnight. This morning I pulled one of the 1TB drives to see what it would do. Since I have plenty of spare space on the spindles, a new RAID arrangement is being laid down; when it finishes I will once again be able to handle a drive failure. An hour later, it is still rebuilding and the estimate is 12 more hours to complete.
Wow. Surely that's not good - that's too long to be running degraded. I must have misunderstood - you said only allocated data needs to be resynched/re-arranged?
This Drobo only has a single 1-Gbit port. I'm seeing about 30 MB/sec, but I'm sharing a single NIC on the server, so that's 30MB/sec incoming to the server and 30MB/sec outgoing to the Drobo - 60 MB/sec total. Not too bad for a 1-Gbit port. I'm assuming that single port on the server is the bottleneck.
You ought to get more out of it, I would say - twice as much. Just downloading an ISO from an openSUSE mirror site, I can drive our uplink to 50Mb/s; doing an scp copy between two non-optimized local systems I get 80MB/s. Doing it full-duplex (both ways concurrently), I can easily run it up to 65+65MB/s. With iSCSI it ought to exceed that (no encryption, for instance). There are other limiting factors, depending on what's in the Drobo - SATA bus speed, and PCIe ditto. Maybe also your switch?
As I said, for peak performance I need to establish a dedicated storage LAN which doesn't carry any front-office traffic. That will take a week or so to set up, since I need to get a few miscellaneous parts.
I don't think your LAN is the limiting factor. -- Per Jessen, Zürich (-0.6°C) http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.