On Wed, Dec 7, 2016 at 6:31 PM, Greg Freemyer wrote:
On Wed, Dec 7, 2016 at 2:56 AM, Per Jessen wrote:
Greg Freemyer wrote:
On Tue, Dec 6, 2016 at 2:46 AM, Per Jessen wrote:
Greg Freemyer wrote:
Got my Drobo today. So far I'm not impressed.
Pros:
I plugged in 8 1TB drives and it saw them and let me thin provision a 16TB volume.
That sounds like a major pro (or con) - 16TB with only 8TB of space :-) I guess it automagically dealt with RAID levels and such?
Thin provisioning works like that, and yes, as I think about it, it is a major pro.
Okay, I thought it was a typo - with LVM, thin provisioning just means the space isn't all allocated, but I doubt if you can allocate a volume that is bigger than what is available in a PV. Well, it has never occurred to me to try it :-) I usually allocate what I need, then extend later on when needed.
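[For what it's worth, LVM's thin pools do allow a thin volume larger than the pool backing it; a minimal sketch, assuming a volume group named "vg0" with roughly 8TB free (the VG and LV names here are hypothetical):]

```shell
# Create a thin pool from most of the VG's free space:
lvcreate --type thin-pool -L 7.5T -n thinpool vg0

# Thin volumes are allocated on demand, so a 16TB virtual size on an
# ~8TB pool is accepted; LVM only warns that the pool is over-provisioned:
lvcreate --type thin -V 16T -n bigvol --thinpool vg0/thinpool
```

[These commands need root and a real volume group, so treat them as a sketch of the idea rather than something to paste in blind.]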
A cool thin provisioning feature I just noticed.
This is with a Windows 10 client over iSCSI.
I populated about 3TB of data onto the volume yesterday, composed of about 1 million files. Today I deleted them and my overall array usage dropped.
That means the SCSI command to deallocate unused sectors (UNMAP, or its TRIM equivalent) was issued to the Drobo, and the Drobo actually removed those sectors from the allocated data blocks.
I'm extremely surprised that happened. I wasn't even trying to test it. I just noticed that my overall data usage dropped on the Drobo when I did a large folder delete.
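[Windows 10 issues the unmaps automatically on NTFS deletes; on a Linux initiator the same reclamation can be triggered by hand or via a mount option. A sketch, with the mount point /mnt/drobo being hypothetical:]

```shell
# One-shot: send UNMAP/TRIM for all currently free blocks on the filesystem:
fstrim -v /mnt/drobo

# Or continuously, by mounting the iSCSI-backed filesystem with discard,
# so every delete passes the unmap down to the array:
mount -o discard /dev/sdX1 /mnt/drobo
```

[Whether the array actually reclaims the space still depends on the target honoring UNMAP, which the Drobo evidently does.]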
I need to set up an iSCSI volume for openSUSE to access, but at least for now, as soon as I create a new iSCSI volume via the user interface, it gets accessed by that Windows PC. I'll try to do some more experimenting.
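[On the openSUSE side, the usual open-iscsi dance looks roughly like this; the target IP and IQN below are placeholders, and the package is open-iscsi (zypper in open-iscsi):]

```shell
# Discover targets exported by the Drobo (IP address is a placeholder):
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to the discovered target (the IQN shown is made up; use the one
# the discovery step printed):
iscsiadm -m node -T iqn.2005-06.com.example:target0 -p 192.168.1.50 --login

# The LUN then shows up as a new /dev/sdX block device:
lsblk
```

[YaST's iSCSI initiator module wraps the same steps if the command line misbehaves.]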
I tried to create an iSCSI volume, then mount it from openSUSE 42.2. Creating it was easy. No success on the openSUSE end, but that might be operator error since I don't know what I'm doing.

I ordered a second 10TB drive last week. I also pulled out one of the 1TB drives to leave an empty drive bay for the 10TB. It took about 12 hours to rebuild after that. A few minutes ago I popped the new (second) 10TB drive in. It took a couple of minutes, then the overall status of the Drobo showed the extra capacity and all the drives were green. I went ahead and pulled out another 1TB drive to make room for the next 10TB drive install. The Drobo is reporting 24 hours to do the rebuild this time. I'm very surprised; I still only have about 3TB of data on the array.

With such long rebuild times, it would be nice if the unit had a "migrate data" feature that allowed data to be migrated off a drive without creating a day-long window of vulnerability. If it has that, I missed it.

This is definitely not the fastest unit in the world, but for the $160 (or so) I paid for it, I'm very happy.

I've got it set to 1-disk redundancy. I gather that means a collection of RAID-1 and RAID-5 sub-volumes is being created and aggregated together. The data layout is hidden from me, but it is fully leveraging both of the 10TB drives now that I have two installed. With only one, most of the first 10TB went unused.

Greg
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org