[opensuse] LVM / software raid / dynamic raid? For a backup server of stable data.
This email might be off-topic, not sure.

I have a minimum of 10 TB of data I want to consolidate off of multiple USB drives to free them up. The data is almost exclusively static and rarely accessed, but I need to maintain it.

I have 2 copies of this data in most cases currently, but it is spread around multiple USB3 drives. (I have dozens of them.) The plan for now is just to consolidate one copy of the data.

I bought a 10 TB SATA drive to hold a first big chunk. I expect it will get filled quickly as I start to consolidate my backup copies, so I want to be able to grow the volume holding the data by adding disks to the pool and extending the volume.

I suspect I will also want more resiliency at some point. (i.e. raid 0 => raid 5 => raid 6)

If I truly had confidence in this storage pool I might eliminate both copies of the data that are on USB drives currently. But even with raid 6, I think I would worry about a total LVM or volume crash, or even a user error!

Even though I know LVM and MD raid somewhat, I don't know if it is "dynamic" and "reliable" enough for what I want. Any advice out there?

Goals:
- Create a fileserver that I can add drives to from time to time and grow its capacity. Probably 10TB drives so I don't have too many spindles in the mix. When bigger drives become available, I'd prefer to use them, so being stuck with all the same size drives is a negative.
- Performance is non-critical. I've used LTO-4 tapes to do this in the past, but I hope online is a better choice now. With LTO-4, once I had a new data set (typically 100GB - 2TB) I would make a backup with tape and put it away for the time I needed to ensure I still had it. (Often years.)
- Share the exported volumes with Windows PCs. (Not critical, but preferred.)
- Have the ability to start the drive pool with a single drive and add to it over time.
- Allow the added drives to be either SATA or USB 3.1.
- Allow the RAID "protection level" to be adjusted for a given volume from time to time.

===

I was actually planning to do this with Windows and its "Storage Spaces" solution. Just this afternoon I put a new 10TB drive in a Windows PC and added it to the "Storage Space" (like LVM). But my reading says the "resilience level" of a volume has to be set at the time the volume is created. I can grow it later, but I can't change it from a raid 0 to a raid 1 to a raid 5, etc.

Thanks
Greg
--
Greg Freemyer
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
On 11/18/2016 03:05 PM, Greg Freemyer wrote:
Hi Greg,

Why don't you try a Drobo? http://www.drobo.com/storage-products/5n/

But I don't think it will change RAID configuration on-the-fly.

BTW, I'd recommend against using RAID-5 for anything if reliability is very important. The most stressful time for a RAID is when it's rebuilding after losing a drive, and if you lose a second disk during a rebuild you're toast. RAID-6 gives you significant cushion during rebuilds.

BTW, aren't all drives SATA? If a drive is USB it probably has a converter to SATA, which just adds a layer of complexity to break.

Regards,
Lew
On Fri, Nov 18, 2016 at 6:19 PM, Lew Wolfgang <wolfgang@sweet-haven.com> wrote:
Hi Greg,
Why don't you try a Drobo?
I really like what I've read about the Drobo, so it is an option. Fairly expensive for the chassis as I recall, but I don't see pricing on their site right now. It is definitely an option I'm going to compare against. My plan as of 3 hours ago was blown out of the water when I started digging into "Storage Spaces" and realized how inflexible it was.

FYI: This fileserver is for my company, not a customer, so I'm fairly cost conscious. On the other hand, having a bunch of partially full USB drives isn't very cost effective at this point.
But I don't think it will change RAID configuration on-the-fly.
Actually, I think it can. They say one of the cons of traditional RAID is the lack of flexibility: http://www.drobo.com/drobo/beyondraid/
BTW, I'd recommend against using RAID-5 for anything if reliability is very important. The most stressful time for a RAID is when it's rebuilding after losing a drive, and if you lose a second disk during a rebuild you're toast. RAID-6 gives you significant cushion during rebuilds.
Yeah, RAID 5 would mandate keeping another copy of the data elsewhere. But I'm not sure RAID 6 would make me comfortable enough either to eliminate that second physical copy.
BTW, aren't all drives SATA? If a drive is USB it probably has a converter to SATA which just adds a layer of complexity to break.
All the USB-3 rotating drives I've disassembled did indeed have a SATA drive internally. But I already own numerous USB-3 enclosed SATA drives (1TB - 5TB). No desire to break them out of their cases. I'll probably buy more 10TB+ size drives to add internally to the chassis as funds are available.

FYI: don't forget NVMe / SCSI / etc.
Regards, Lew
BTW: Did I miss the part where you told me how to do this with Linux?

Greg
On 11/19/2016 01:35 AM, Greg Freemyer wrote:
Why don't you try a Drobo?
I really like what I've read about the drobo, so it is an option. Fairly expensive for the chassis as I recall, but I don't see pricing on their site right now.
- maybe under "Shop"

regards
On Sat, Nov 19, 2016 at 4:17 AM, ellanios82 <ellanios82@gmail.com> wrote:
On 11/19/2016 01:35 AM, Greg Freemyer wrote:
Why don't you try a Drobo?
I really like what I've read about the drobo, so it is an option. Fairly expensive for the chassis as I recall, but I don't see pricing on their site right now.
- maybe under "Shop"
So I should save my money and buy some glasses! I'm color blind. I wonder if those new color-blind-correcting glasses would make things like that pop more?
Greg Freemyer wrote:
fyi: This fileserver is for my company, not a customer, so I'm fairly cost conscious. On the other hand, having a bunch of partially full USB drives isn't very cost effective at this point.
So you would like a file-server with high redundancy levels, although not necessarily with high availability features (redundant power supplies, multipathing). RAID6 is almost certainly what you want, but you want to build it up gradually and without major investment up front.

NetApp or Nexenta come to mind. Maybe 2nd hand. Hooked up to a plain PC.

I don't quite understand your desire to alter RAID levels, but if it's about staging the cost, it can be done provided you have enough 3.5" slots:

- Get a drive array with 14 x 3.5" slots, and a suitable controller (e.g. LSI, 3ware). Add a plain PC with openSUSE.
- Install one 10TB disk, create PV1, add to VG, create LV, copy your data to it.
- When your wallet is up to it, add 2 more drives, use mdraid to create one RAID1, create PV2, add to VG, stop allocation on PV1, migrate to PV2.
- When your wallet is up to it again, add 2 more drives, use mdraid to create one RAID5 with the drive you freed up, create PV3, add to VG, stop allocation on PV2, migrate to PV3.
- When your wallet is up to it again, add 2 more drives, use mdraid to create one RAID6 with the 2 drives you freed up, create PV4, add to VG, stop allocation on PV3, migrate to PV4.

You now have 7 drives, 3 not in use, and a total of 20TB storage in RAID6. You could add one more drive, create another RAID6 and then combine them into a RAID1, and you would have 4-drive redundancy. That takes up 8 slots; to move to bigger drives in the future, you could buy 6 x 12TB drives, rinse, repeat.

--
Per Jessen, Zürich (8.1°C)
http://www.hostsuisse.com/ - dedicated server rental in Switzerland.
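Per's staged build-out maps onto a handful of standard mdadm/LVM commands. A minimal sketch of the first two stages (device names like /dev/sdb and the vg/lv names are placeholders, everything needs root, and you would only run this against real, empty disks):

```shell
# Stage 1: one bare 10TB disk as a plain LVM volume.
pvcreate /dev/sdb
vgcreate vg_backup /dev/sdb
lvcreate -n lv_data -l 100%FREE vg_backup
mkfs.xfs /dev/vg_backup/lv_data

# Stage 2: two new disks arrive -- build a RAID1 and migrate onto it.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
pvcreate /dev/md1
vgextend vg_backup /dev/md1
pvmove /dev/sdb /dev/md1      # online migration off the bare disk
vgreduce vg_backup /dev/sdb   # /dev/sdb is now free for the next stage
```

The later RAID5 and RAID6 stages repeat the same vgextend / pvmove / vgreduce pattern, reusing the drives each migration frees up.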
On Fri, Nov 18, 2016 at 6:19 PM, Lew Wolfgang <wolfgang@sweet-haven.com> wrote:
Hi Greg,
Why don't you try a Drobo?
Lew,

I'm thinking seriously of getting a Drobo 5-bay. More flexible than what I can do with openSUSE from what I see, and a used chassis isn't bad at all. And I don't have an existing openSUSE box I could dedicate to this, so I'd have to buy/build a PC to do this. I'd likely spend as much as a Drobo chassis would cost, even a new chassis.

Do you have any experience with a Drobo? The Drobo FS is a 2010-model retired chassis, but it looks like it would be fine for my need. (i.e. relatively low performance I/O interfaced via a 1 Gbit NIC)

My biggest question is whether it would work with the current generation's largest drives. And if so, is there a point at which it quits working? i.e. I just got a 10TB yesterday. I imagine 12+TB will be available next year, etc. I've seen a lot of PATA/SATA controllers that quit working at 512GB, then more at 2TB, etc.

Also, I wonder how big a volume the older Drobo FS can support. The replacement unit is the Drobo 5N and it says it can go up to 64TB on a volume.

Thanks
Greg
--
Greg Freemyer
On 11/19/2016 06:37 PM, Greg Freemyer wrote:
On Fri, Nov 18, 2016 at 6:19 PM, Lew Wolfgang <wolfgang@sweet-haven.com> wrote:
Hi Greg,
Why don't you try a Drobo?
I'm thinking seriously of getting a Drobo 5-bay. More flexible than what I can do with openSUSE from what I see, and a used chassis isn't bad at all. And I don't have an existing openSUSE box I could dedicate to this, so I'd have to buy/build a PC to do this. I'd likely spend as much as a Drobo chassis would cost, even a new chassis. Do you have any experience with a Drobo? The Drobo FS is a 2010-model retired chassis, but it looks like it would be fine for my need. (i.e. relatively low performance I/O interfaced via a 1 Gbit NIC)
Actually I don't have Drobo experience, but I've heard good things about them. I've got lots of experience with server-class RAID controllers, but I've been thinking about getting the 5n for home use. But as I look at the documentation I'm getting a bit queasy about controlling it from Linux. There are apps for Windows and Apple, but Linux isn't mentioned. I'd like to see evidence of a general web interface for configuration and control. I'd also like to see NFS and rsync server support, but it's unclear if the Drobo does all that. I guess that CIFS would be okay.
My biggest question is whether it would work with the current generation's largest drives. And if so, is there a point at which it quits working? i.e. I just got a 10TB yesterday. I imagine 12+TB will be available next year, etc. I've seen a lot of PATA/SATA controllers that quit working at 512GB, then more at 2TB, etc.
I've seen server class hotswap chassis have problems with SATA transfer speeds. Drive capacity hasn't been a problem in my experience. I've even seen newer chassis that fail to negotiate with slower-speed SATA disks. Indeed, I'm having to retire four 24-drive chassis now because of connection compatibility issues.
Also, I wonder how big a volume the older Drobo FS can support. The replacement unit is the Drobo 5N and it says it can go up to 64TB on a volume.
I don't know about that. I think I'll continue to look around for now.

Regards,
Lew
On Sun, Nov 20, 2016 at 10:52 PM, Lew Wolfgang <wolfgang@sweet-haven.com> wrote:
On 11/19/2016 06:37 PM, Greg Freemyer wrote:
On Fri, Nov 18, 2016 at 6:19 PM, Lew Wolfgang <wolfgang@sweet-haven.com> wrote:
Hi Greg,
Why don't you try a Drobo?
Lew,
I'm thinking seriously of getting a Drobo 5-bay. More flexible than what I can do with openSUSE from what I see, and a used chassis isn't bad at all. And I don't have an existing openSUSE box I could dedicate to this, so I'd have to buy/build a PC to do this. I'd likely spend as much as a Drobo chassis would cost, even a new chassis. Do you have any experience with a Drobo? The Drobo FS is a 2010-model retired chassis, but it looks like it would be fine for my need. (i.e. relatively low performance I/O interfaced via a 1 Gbit NIC)
Actually I don't have Drobo experience, but I've heard good things about them. I've got lots of experience with server-class RAID controllers, but I've been thinking about getting the 5n for home use.
But as I look at the documentation I'm getting a bit queasy about controlling it from Linux. There are apps for Windows and Apple, but Linux isn't mentioned. I'd like to see evidence of a general web interface for configuration and control.
I have plenty of Windows boxes, so that isn't an issue. (I can think of 4 in my lab.)
I'd also like to see NFS and rsync server support, but it's unclear if the Drobo does all that. I guess that CIFS would be okay.
Different models have different support. Looking on eBay, the 5N is the most expensive for a used unit. There's one with bidding ending tomorrow: $360 right now. I bid up to $350, but I'm done. Above $350, I'd just buy a new unit, I think.

Anyway, there's a model with iSCSI support and another one with USB3 support. If I can get a really good price on an 8-bay unit, that would be my preference. I just don't want to pay $1500 for it. The 5N was my first thought, but I could get by with iSCSI or USB3 I think. I'm going to watch eBay and see if I can snag a deal.
My biggest question is whether it would work with the current generation's largest drives. And if so, is there a point at which it quits working? i.e. I just got a 10TB yesterday. I imagine 12+TB will be available next year, etc. I've seen a lot of PATA/SATA controllers that quit working at 512GB, then more at 2TB, etc.
I've seen server class hotswap chassis have problems with SATA transfer speeds. Drive capacity hasn't been a problem in my experience. I've even seen newer chassis that fail to negotiate with slower-speed SATA disks. Indeed, I'm having to retire four 24-drive chassis now because of connection compatibility issues.
I used to do a lot of high-end RAID enclosure work myself (certified SAN architect, if you can believe it). But it was Compaq enclosures and you had to buy the SCSI drives from them. I don't recall ever working with SATA drives in an enclosure like that.
Also, I wonder how big a volume the older Drobo FS can support. The replacement unit is the Drobo 5N and it says it can go up to 64TB on a volume.
I don't know about that. I think I'll continue to look around for now.
I suspect I'll buy one in the next month. I'll let you know if I do.
Regards, Lew
Greg
On Fri, Nov 18, 2016 at 6:19 PM, Lew Wolfgang <wolfgang@sweet-haven.com> wrote:
Hi Greg,
Why don't you try a Drobo?
All,

I went with a Drobo. I snagged a used 8-bay on eBay for under $200 including shipping. In theory it can meet all my goals, and that price is crazy low.

Greg
On Wed, Nov 23, 2016 at 12:16 PM, Greg Freemyer <greg.freemyer@gmail.com> wrote:
All,
I went with a Drobo. I snagged a used 8-bay on eBay for under $200 including shipping.
In theory it can meet all my goals and that price is crazy low.
Greg
Got my Drobo today. So far I'm not impressed.

Pros:
- I plugged in 8 1TB drives and it saw them and let me thin provision a 16TB volume.
- I pulled out one of the 1TB drives and popped in my 10TB drive. It recognized it and rebuilt my volume. The rebuild only took a minute or two, but I only have 6GB of data on the volume and it is thin provisioned, so not much data to move around.
- In the user's forum they say you can use large drives, but no single logical volume can be over 16TB. I can live with that.
- The main dashboard tool is available only for Mac / Win, but they also have a Web Admin interface you can install. I haven't tried that yet. See https://myproducts.drobo.com/system/resources/85252/original/DroboApps_Insta... Regardless, you have to do the initial setup from a Windows / Mac PC.
- It has an iSCSI target feature.

Cons:
- While I can ping the manually configured IP, I can't yet connect the management dashboard to that IP. (In theory the firewall holes are open. I tried with both Mac and PC clients.)
- I haven't gotten the iSCSI interface to work yet.
- As a test, after putting in the 10TB drive I tried to create a second 16TB thin provisioned volume. It seemed to work, but it has been sitting at "restarting drobo" for the last hour.

==

An eBay negative: this was advertised as a DroboPro NAS. There isn't such a product. I assumed the NAS nomenclature meant it was one of Drobo's fileserver products. It's not. This one is iSCSI / Firewire / USB-2. I hope I can get the iSCSI to work. Then I can connect it up to a server to share it out.

Greg
Greg Freemyer wrote:
Got my Drobo today. So far I'm not impressed.
Pros:
I plugged in 8 1TB drives and it saw them and let me thin provision a 16TB volume.
That sounds like a major pro (or con) - 16TB with only 8TB of space :-) I guess it automagically dealt with RAID levels and such?
I pulled out one of the 1TB drives and popped in my 10TB drive. It recognized it and rebuilt my volume. The rebuild only took a minute or two, but I only have 6GB of data on the volume and it is thin provisioned, so not much data to move around.
Resync'ing 16TB on plain ol' spinning SATA drives will take a while longer. If the time-to-resync on the Drobo varies with the amount of data, the logic is at a different level. Interesting, I think.
Regardless, you have to do the initial setup from a Windows / Mac PC.
I probably would not count that as a 'pro'. :-)

--
Per Jessen, Zürich (0.2°C)
http://www.cloudsuisse.com/ - your owncloud, hosted in Switzerland.
On Tue, Dec 6, 2016 at 2:46 AM, Per Jessen <per@computer.org> wrote:
Greg Freemyer wrote:
Got my Drobo today. So far I'm not impressed.
Pros:
I plugged in 8 1TB drives and it saw them and let me thin provision a 16TB volume.
That sounds like a major pro (or con) - 16TB with only 8TB of space :-) I guess it automagically dealt with RAID levels and such?
Thin provisioning works like that, and yes, as I think about it, it is a major pro. I now have 2 16TB thin provisioned volumes on the Drobo. (The second one eventually provisioned last night.) I can set up several thin provisioned volumes now, then just monitor the overall cumulative disk usage and add (replace) disks as time goes by and needs demand.

The user manual does recommend that this particular Drobo only provide iSCSI volumes to one host. It's not a mandate, but it is how the best performance is achieved. Newer / higher performing Drobos can support multiple hosts. And the FS versions include a true fileserving capability.
I pulled out one of the 1TB drives and popped in my 10TB drive. It recognized it and rebuilt my volume. The rebuild only took a minute or two, but I only have 6GB of data on the volume and it is thin provisioned, so not much data to move around.
Resync'ing 16TB on plain ol' spinning SATA drives will take a while longer. If the time-to-resync on the Drobo varies with the amount of data, the logic is at a different level. Interesting, I think.
I think that is part of the thin provisioning. As a volume gets filled, more free data blocks are provisioned. When a drive fails, only the provisioned data blocks have to be rearranged.

I put 1.5TB of data on the unit overnight. This morning I pulled one of the 1TB drives to see what it would do. Since I have plenty of spare space on the spindles, a new RAID arrangement is being laid down. At the end I will once again be able to handle a drive failure. An hour later, it is still in the rebuild process and the estimate is 12 more hours to complete.

Basically it seems I have a choice of configuring the entire Drobo to provide either RAID5-level protection or RAID6-level. The details of how that is handled internally are up to the Drobo.
Regardless, you have to do the initial setup from a Windows / Mac PC.
I probably would not count that as a 'pro'. :-)
Agreed.

FYI: I now have the iSCSI feature working with Windows 10 as a client. 2 16TB volumes provisioned.

This Drobo only has 1 1-Gbit port. I'm seeing about 30 MB/sec speeds, but I'm sharing a single NIC on the server, so that's 30 MB/sec incoming to the server and 30 MB/sec outgoing to the Drobo: 60 MB/sec total. Not too bad for a 1-Gbit port. I'm assuming that single port on the server is the bottleneck. The next step is clearly to move the Drobo traffic onto a dedicated storage LAN.

I was a bit frustrated with the unit yesterday, but right now I'm very pleased with it. I will try to provision a volume for an openSUSE box to use and for a Mac to use. That will be for functionality testing, not performance testing. As I said, for peak performance I need to establish a dedicated storage LAN which doesn't carry any front office traffic. That will take a week or so to set up since I need to get a few miscellaneous parts.

Greg
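For the openSUSE functionality test, attaching the Drobo's iSCSI target would typically go through open-iscsi. A rough sketch, assuming the Drobo already exports the volume (the target IP is a placeholder, and root is required):

```shell
# Install and start the open-iscsi initiator on openSUSE.
zypper install open-iscsi
systemctl enable --now iscsid

# Discover the target the Drobo advertises, then log in (IP is hypothetical).
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -p 192.168.1.50 --login

# The LUN should now appear as a new /dev/sdX block device.
lsblk
```

From there the LUN is a plain block device, so it can be partitioned, formatted, and mounted like any local disk.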
I recommend using mdraid and LVM2. You can add disks to mdraids and you can change raid levels. See https://serverhorror.wordpress.com/2011/01/27/migrating-raid-levels-in-linux...

Using md devices as physical devices for an LVM volume adds even more flexibility. If you swap old disks for new ones, you can add the new devices (as a new md device) to expand the volume group, use pvmove to move data online to the new devices and then drop the old devices from the volume group. Or better: you add new md devices and extend your logical volume.

If you use large disks, use mirrored devices whenever possible. Raid5 is not an option (as Lew already noted) and raid6 is really slow! Growing a raid6 with another 10TB drive may take days! So it's best to always add 2 devices, create a mirror and add it to the logical volume.

See the manpages of mdadm, pvcreate, pvmove, vgextend, vgreduce, lvextend.

On 19.11.2016 00:05, Greg Freemyer wrote:
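The raid-level change mentioned above is done with mdadm's grow mode. A hedged sketch of taking a 3-disk RAID5 to a 4-disk RAID6 (array and disk names are hypothetical, root is required, a reshape like this runs for many hours, and a current backup is strongly advised):

```shell
# Add the new disk as a spare, then reshape the array to RAID6 across 4 devices.
mdadm /dev/md0 --add /dev/sde
mdadm --grow /dev/md0 --level=6 --raid-devices=4 \
      --backup-file=/root/md0-reshape.bak

# Watch the (long) reshape progress.
cat /proc/mdstat

# Once finished, let LVM and the filesystem use any new space
# (vg/lv names are placeholders; -r also grows the filesystem).
pvresize /dev/md0
lvextend -r -l +100%FREE /dev/vg_backup/lv_data
```

Note the capacity doesn't change in this particular conversion (2 data disks either way); the extra disk buys the second parity stripe.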
This email might be offtopic, not sure.
I have a minimum of 10 TB of data I want to consolidate off of multiple USB drives to free them up. The data is almost exclusively static and rarely accessed, but I need to maintain it.
I have 2 copies of this data in most cases currently, but it is spread around multiple USB3 drives. (I have dozens of them). The plan for now is just to consolidate one copy of the data.
I bought a 10 TB SATA drive to hold a first big chunk. I expect it will get filled quickly as I start to consolidate my backup copies, so I want to be able to grow the volume holding the data by adding disks to the pool and extending the volume.
I suspect I will also want to have more resiliency at some point. (ie. raid 0 => raid 5 => raid 6)
If I truly had confidence in this storage pool I might eliminate both copies of the data that is on USB drives currently. But even with raid 6, I think I would worry about a total LVM or volume crash, or even a user error!
Even though I know LVM and MDraid somewhat, I don't know if it is "dynamic" and "reliable" enough for what I want.
Any advice out there?
Goals:
- Create a fileserver that I can add drives to from time to time and grow its capacity. Probably 10TB drives so I don't have too many spindles in the mix. When bigger drives become available, I'd prefer to use them, so being stuck with all the same size drives is a negative.
- Performance is non-critical. I've used LTO-4 tapes to do this in the past, but I hope online is a better choice now. With LTO-4, once I had a new data set (typically 100GB - 2TB) I would make a backup with tape and put it away for the time I needed to ensure I still had it. (Often years).
- Share the exported volumes with Windows PCs. (Not critical, but preferred)
- have the ability to start the drive pool with a single drive and add to it over time
- Allow the added drives to be either SATA or USB 3.1
- Allow the Raid "protection level" to be adjusted for a given volume from time to time.
=== I was actually planning to do this with Windows and its "Storage Spaces" solution. Just this afternoon I put a new 10TB drive in a Windows PC and added it to the "Storage Space" (like LVM).
But my reading says the "resilience level" of a volume has to be set at the time the volume is created. I can grow it later, but I can't change it from a raid 0, to a raid 1, to a raid 5, etc.
Thanks Greg -- Greg Freemyer
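[Editor's sketch of Florian's grow-and-retire workflow as commands. The volume group, logical volume and device names (vg_backup, lv_data, /dev/sd*, /dev/md*) are hypothetical, and these commands are destructive; adapt and double-check before running as root.]

```shell
# Create a new RAID1 pair from two fresh disks:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Make it an LVM physical volume and add it to the volume group:
pvcreate /dev/md1
vgextend vg_backup /dev/md1

# Grow the logical volume and the filesystem on it:
lvextend -L +9T vg_backup/lv_data
resize2fs /dev/vg_backup/lv_data    # ext4; use xfs_growfs for XFS

# Later, retire an old pair (/dev/md0) by migrating its extents online:
pvmove /dev/md0
vgreduce vg_backup /dev/md0
pvremove /dev/md0
```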
On Fri, Nov 18, 2016 at 7:44 PM, Florian Gleixner <flo@redflo.de> wrote:
I recommend using mdraid and LVM2.
You can add disks to mdraids and you can change raid levels. See
https://serverhorror.wordpress.com/2011/01/27/migrating-raid-levels-in-linux...
Using md devices as physical devices for a lvm volume adds even more flexibility. If you swap old disks for new ones, you can add the new devices (as new md device) to expand the volume group, use pvmove to move data online to the new devices and then drop old devices from the volume group. Or better: you add new md devices and extend your logical volume.
If you use large disks, use mirrored devices whenever possible. Raid5 is not an option (as Lew already noted) and raid 6 is really slow! Growing a raid6 with another 10TB drive may take days!
So best is to add always 2 devices, create a mirror and add it to the logical volume.
See the manpages of mdadm, pvcreate, pvmove, vgextend, vgreduce, lvextend.
Thanks Florian,

I was hoping MDraid / LVM2 was a flexible enough option. To address my desire to use the largest drives I can going forward, to keep the spindle count down, I assume I could buy them in pairs: 2x10TB now, maybe 2x12TB in 6 months, etc. Then use LVM to aggregate the RAID1 pairs.

Thanks
Greg
--
Greg Freemyer
On Fri, 18 Nov 2016 23:47:46 -0500 Greg Freemyer <greg.freemyer@gmail.com> wrote:
On Fri, Nov 18, 2016 at 7:44 PM, Florian Gleixner <flo@redflo.de> wrote:
I recommend using mdraid and LVM2.
You can add disks to mdraids and you can change raid levels. See
https://serverhorror.wordpress.com/2011/01/27/migrating-raid-levels-in-linux...
Using md devices as physical devices for a lvm volume adds even more flexibility. If you swap old disks for new ones, you can add the new devices (as new md device) to expand the volume group, use pvmove to move data online to the new devices and then drop old devices from the volume group. Or better: you add new md devices and extend your logical volume.
If you use large disks, use mirrored devices whenever possible. Raid5 is not an option (as Lew already noted) and raid 6 is really slow! Growing a raid6 with another 10TB drive may take days!
So best is to add always 2 devices, create a mirror and add it to the logical volume.
See the manpages of mdadm, pvcreate, pvmove, vgextend, vgreduce, lvextend.
Thanks Florian,
I was hoping MDraid / LVM2 was a flexible enough option.
I'd second Florian's recommendation of md and lvm. Have used this for years without problems.

But I'd also second what Per says. RAID is not backup! You need a separate backup as well, definitely offsite and preferably a different technology. But at least a completely separate system with separate disks.

HTH, Dave
On Sat, Nov 19, 2016 at 6:22 AM, Dave Howorth <dave@howorth.org.uk> wrote:
On Fri, 18 Nov 2016 23:47:46 -0500 Greg Freemyer <greg.freemyer@gmail.com> wrote:
On Fri, Nov 18, 2016 at 7:44 PM, Florian Gleixner <flo@redflo.de> wrote:
I recommend using mdraid and LVM2.
You can add disks to mdraids and you can change raid levels. See
https://serverhorror.wordpress.com/2011/01/27/migrating-raid-levels-in-linux...
Using md devices as physical devices for a lvm volume adds even more flexibility. If you swap old disks for new ones, you can add the new devices (as new md device) to expand the volume group, use pvmove to move data online to the new devices and then drop old devices from the volume group. Or better: you add new md devices and extend your logical volume.
If you use large disks, use mirrored devices whenever possible. Raid5 is not an option (as Lew already noted) and raid 6 is really slow! Growing a raid6 with another 10TB drive may take days!
So best is to add always 2 devices, create a mirror and add it to the logical volume.
See the manpages of mdadm, pvcreate, pvmove, vgextend, vgreduce, lvextend.
Thanks Florian,
I was hoping MDraid / LVM2 was a flexible enough option.
I'd second Florian's recommendation of md and lvm. Have used this for years without problems.
But I'd also second what Per says. RAID is not backup! You need a separate backup as well, definitely offsite and preferably a different technology. But at least a completely separate system with separate disks.
HTH, Dave
Dave, this is my backup. No work/analysis is done with this copy. But I still want more than raid 0 for this backup set in the long run. Currently I have at least 1 instance of all this data on 2 USB drives (dozens of drives total, but less than 100, I think).

Going forward I want to buy 10TB drives (or larger) and consolidate the backups, freeing up the existing USBs for primary storage.

I hadn't thought about offsite. Since this data is so static, maybe I should eventually build 2 and use DRBD to replicate to an offsite redundant copy. That actually sounds like something to add to the plan.

Greg
Greg Freemyer wrote:
But, I still want more than raid 0 for this backup set in the long run. Currently I have at least 1 instance of all this data on 2 USB drives (dozens of drives total, but less than 100 (I think)).
Going forward I want to buy 10TB drives (or larger) and consolidate the backups, freeing up the existing USBs for primary storage.
I hadn't thought about offsite. Since this data is so static, maybe I should eventually build 2 and use DRBD to replicate to an offsite redundant copy. That actually sounds like something to add to the plan.
There is little point in a DRBD setup for data that doesn't change (or hardly ever changes). A daily rsync is probably easier.

--
Per Jessen, Zürich (8.1°C)
http://www.hostsuisse.com/ - virtual servers, made in Switzerland.
On Sat, Nov 19, 2016 at 8:36 AM, Per Jessen <per@computer.org> wrote:
Greg Freemyer wrote:
But, I still want more than raid 0 for this backup set in the long run. Currently I have at least 1 instance of all this data on 2 USB drives (dozens of drives total, but less than 100 (I think)).
Going forward I want to buy 10TB drives (or larger) and consolidate the backups, freeing up the existing USBs for primary storage.
I hadn't thought about offsite. Since this data is so static, maybe I should eventually build 2 and use DRBD to replicate to an offsite redundant copy. That actually sounds like something to add to the plan.
There is little point in a DRBD setup for data that doesn't change (or hardly ever changes). A daily rsync is probably easier.
rsync is probably reasonable for this data. I have an unmetered pipe at my office, but not my house. Just need another similar site, hopefully within a driveable distance.

But that is phase 2.

Greg
On Sat, 19 Nov 2016 08:44:31 -0500 Greg Freemyer <greg.freemyer@gmail.com> wrote:
On Sat, Nov 19, 2016 at 8:36 AM, Per Jessen <per@computer.org> wrote:
Greg Freemyer wrote:
But, I still want more than raid 0 for this backup set in the long run. Currently I have at least 1 instance of all this data on 2 USB drives (dozens of drives total, but less than 100 (I think)).
Going forward I want to buy 10TB drives (or larger) and consolidate the backups, freeing up the existing USBs for primary storage.
I hadn't thought about offsite. Since this data is so static, maybe I should eventually build 2 and use DRBD to replicate to an offsite redundant copy. That actually sounds like something to add to the plan.
There is little point in a DRBD setup for data that doesn't change (or hardly ever changes). A daily rsync is probably easier.
rsync is probably reasonable for this data. I have an unmetered pipe at my office, but not my house. Just need another similar site, hopefully within a driveable distance.
But that is phase 2.
No, offsite backup is phase 1!

Copy it to a disk and hand carry it offsite to your home or wherever. If you want to get really sophisticated, have two backup disks and alternate them so all copies are never together at the same place and time. :)

It used to be said that the highest bandwidth link between London and Manchester was a Ford van on the motorway full of magnetic tapes.
On Sat, Nov 19, 2016 at 7:18 PM, Dave Howorth <dave@howorth.org.uk> wrote:
On Sat, 19 Nov 2016 08:44:31 -0500 Greg Freemyer <greg.freemyer@gmail.com> wrote:
On Sat, Nov 19, 2016 at 8:36 AM, Per Jessen <per@computer.org> wrote:
Greg Freemyer wrote:
But, I still want more than raid 0 for this backup set in the long run. Currently I have at least 1 instance of all this data on 2 USB drives (dozens of drives total, but less than 100 (I think)).
Going forward I want to buy 10TB drives (or larger) and consolidate the backups, freeing up the existing USBs for primary storage.
I hadn't thought about offsite. Since this data is so static, maybe I should eventually build 2 and use DRBD to replicate to an offsite redundant copy. That actually sounds like something to add to the plan.
There is little point in a DRBD setup for data that doesn't change (or hardly ever changes). A daily rsync is probably easier.
rsync is probably reasonable for this data. I have an unmetered pipe at my office, but not my house. Just need another similar site, hopefully within a driveable distance.
But that is phase 2.
No, offsite backup is phase 1!
Copy it to a disk and hand carry it offsite to your home or wherever. If you want to get really sophisticated, have two backup disks and alternate them so all copies are never together at the same place and time. :)
It used to be said that the highest bandwidth link between London and Manchester was a Ford van on the motorway full of magnetic tapes.
For now my primary (active) and secondary (backup) copy are both kept onsite. I keep the secondary offline in a fireproof filing cabinet (interior concrete walls, weighs hundreds of pounds even empty).

Once I build this I will have more susceptibility to fire; I might indeed start maintaining the current secondary copy offsite.

Greg
On 11/19/2016 08:31 AM, Greg Freemyer wrote:
I hadn't thought about offsite. Since this data is so static, maybe I should eventually build 2 and use DRBD to replicate to an offsite redundant copy. That actually sounds like something to add to the plan.
I recall you mentioning that while you had unlimited bandwidth/volume at work, you are limited @home. I have that problem too. I solved it this way.

I got some cloud storage. As it turned out, for the volume and traffic involved it amounted to 'free'. YMMV. It's not as if you're making a high-volume, high-transaction, high-bandwidth demand!

I rsync'd the lot. All at once, all in one day. In my case it amounted to about 1T. OK, so it didn't have to be all in one go, and my current provider offers a better deal if I do the transfer between 2am and 6am, but I wasn't willing to do the piecemeal management. Since you have this all on USB drives you can do them one at a time and spread the load/impact.

I paid the overrun price. Actually it wasn't much. It was less than some of the cloud storage charges I saw if I wanted more bandwidth etc. Today, with this ISP, doing it overnight, there wouldn't be any overrun cost.

Like you, my data is mostly static. It's mostly photo archives. The older, the more static! Every month I 'upload' another filmroll-equivalent and some edits. Perhaps 500-800M. Perhaps not; it depends how active I am. If I were a professional photographer or a very active one like Patrick, not only would the load in the cloud be greater but the traffic and demands would be greater. But as a professional I could charge that to the business. Oh right.

Now, from your POV, you'll need to research cloud offerings. Betcha AWS won't be the most cost effective! I found that my ISP, Dreamhost, offers me a very basic domain (@antonaylward.com) plus lotsa-lotsa basic mailbox plus basic web site plus an effectively-unlimited storage option for around $100/year. (That's an increasing billions and billions of Kanukistani pesos, but hey, that's life north of the border!)

So in my case, the 'cloud' was simply a directory in my storage accessed via SSH. Again YMMV. I can see the logic of the cloud for business, but for many home and SMB users, when you crank the numbers, it's not economical.
In your case, the cost of the consolidation, and the cost of all this argument about RAID for reliability and expansion and backups, might, just might, make cloud storage a viable option, not just from the POV of cost but when factoring in the stress and worrying about the decision tree.

--
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?
Greg Freemyer wrote:
This email might be offtopic, not sure.
I have a minimum of 10 TB of data I want to consolidate off of multiple USB drives to free them up. The data is almost exclusively static and rarely accessed, but I need to maintain it.
First thought - tape. As you also mention further down.
I suspect I will also want to have more resiliency at some point. (ie. raid 0 => raid 5 => raid 6)
As we are talking very large disks, when one breaks (as they will), you will be running degraded for hours and hours, during which time your data is highly exposed to another disk breaking. RAID6 will take care of that as will e.g. RAID15.
If I truly had confidence in this storage pool I might eliminate both copies of the data that is on USB drives currently. But even with raid 6, I think I would worry about a total LVM or Volume crash Or even a user error!
Tape backup is the only way to solve that.
Even though I know LVM and MDraid somewhat, I don't know if it is "dynamic" and "reliable" enough for what I want.
It doesn't sound like you want "dynamic" at all :-) Both lvm and mdraid are very reliable technologies. I use both extensively.
Goals:
- Create a fileserver that I can add drives to from time to time and grow its capacity. Probably 10TB drives so I don't have too many spindles in the mix. When bigger drives become available, I'd prefer to use them, so being stuck with all the same size drives is a negative.
LVM is the answer to that.
- Performance is non-critical. I've used LTO-4 tapes to do this in the past, but I hope online is a better choice now. With LTO-4, once I had a new data set (typically 100GB - 2TB) I would make a backup with tape and put it away for the time I needed to ensure I still had it. (Often years).
For critical email archiving, we keep three copies - two on tape in secure, geographically separate locations, one on-line. Unless you need quick (even if rare) access to the data, tape is the answer, especially as you're comfortable handling it. For uncritical daily backups and such, I use a plain fileserver, just some xTb drives in RAID1.
- Share the exported volumes with Windows PCs. (Not critical, but preferred)
So samba.
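[Editor's note: a minimal smb.conf stanza for a read-mostly archive share might look like this; the share name, path and user are made up.]

```
[archive]
    path = /srv/archive
    read only = yes
    valid users = greg
```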
- have the ability to start the drive pool with a single drive and add to it over time
Uh, not sure you can go from zero redundancy to e.g. RAID5 or RAID6 just like that. Some hardware controllers do support it though.
- Allow the added drives to be either SATA or USB 3.1
I guess you can do that.
- Allow the Raid "protection level" to be adjusted for a given volume from time to time.
Maybe LVM2 can do that, I'm not sure. I wonder why you would want that.
I was actually planning to do this with Windows and its "Storage Spaces" solution. Just this afternoon I put a new 10TB drive in a Windows PC and added it to the "Storage Space" (like LVM).
But my reading says the "resilience level" of a volume has to be set at the time the volume is created. I can grow it later, but I can't change it from a raid 0, to a raid 1, to a raid 5, etc.
That is almost certainly correct. I doubt if you can do that with LVM either, but this would work with LVM:

Monday - volume 'A': single drive with all data. (zero redundancy)
Tuesday - volume 'B': add two drives in RAID1, migrate from 'A'. (redundancy increased by 1)
Wednesday - volume 'C': add 2+1 (from A) drives in RAID5, migrate from 'B'. (no increased redundancy)
Thursday - volume 'D': add 2+2 (from B) drives in RAID6, migrate from 'C'. (redundancy increased by 1)

--
Per Jessen, Zürich (8.1°C)
http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.
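[Editor's note: for what it's worth, LVM2's own raid-type LVs can change level in place with lvconvert (see the lvmraid(7) manpage); which conversion paths are supported depends on the LVM version. A sketch with hypothetical names, not to be run as-is:]

```shell
# Add a mirror leg to a linear LV (linear -> raid1):
lvconvert --type raid1 -m 1 vg_backup/lv_data

# Newer LVM releases also support "takeover" between raid types,
# e.g. raid1 -> raid5 (may need intermediate steps on older versions):
lvconvert --type raid5 vg_backup/lv_data
```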
On 2016-11-19 00:05, Greg Freemyer wrote:
This email might be offtopic, not sure.
I have a minimum of 10 TB of data I want to consolidate off of multiple USB drives to free them up. The data is almost exclusively static and rarely accessed, but I need to maintain it.
...

Personally I prefer a second set of hard disks to a raid (and rsync), unless high availability is a requirement. LVM allows growing the space. Personally I prefer separate directories, but that may not be to your liking. But if you use LVM and add hard disks, you increase the risk, so you need raid to compensate.

There are data storage solutions out there. They spread the data over several different media according to the needed time of access. Some may be on fast hard disks, smallish, some on huge hard disks, some on tapes. Some need a human to switch modules.

I know someone who is an expert on this (it is his job); perhaps he is reading. I'll ping him, just in case ;-)

--
Cheers / Saludos,
Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 2016-11-19 14:33, Carlos E. R. wrote:
On 2016-11-19 00:05, Greg Freemyer wrote:
I know someone that is an expert on this (it is his job), perhaps he is reading. I'll ping him, just in case ;-)
Well, he no longer reads this list, too busy, but he gave me some ideas, which I'll translate to English here.

First, run away from complexity; the simplest solution can be the best. Don't be led by the bells and whistles you read on the internet.

He says he still prefers ZFS because it does CRC of data blocks on disk, so it ensures that data is properly stored. Also it allows for deduplication. The snag is it uses a lot of RAM.

Another question is when and how much do you intend to grow? Instead of LVM it can be worthwhile to migrate the hard disk to another, more modern and bigger one at the time. I.e., buy 10 TB now, perhaps 20 TB later. Perhaps SSD then.

(re RAID) Another thing to take into account is a filesystem that allows recreating only the lost blocks. IOW, a hardware raid recreates the entire hard disk regardless of what is actually in use. 1 TB takes about 24 hours to replicate, so 10 days for a 10 TB disk. ZFS rebuilds only those blocks that contained data, thus faster than a traditional HW raid.

Another thing to take into account is that if you use HW RAID with LVM on top, expanding it costs big money (also with ZFS), because you cannot simply add a disk, you need an entire RAID.

If you want something simple and fast: 2 x 10 TB HDs in a (cheap) server, and ZFS doing a mirror (RAID 1) between both. When expanding: connect 2 new (bigger) hard disks in raid 1 and migrate the entire thing.

That is what he said, quick translation. :-)

--
Cheers/Saludos
Carlos E. R. (testing openSUSE Leap 42.2, at Minas-Anor)
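[Editor's sketch of the "simple and fast" suggestion at the end, using OpenZFS commands; the pool name and device names are made up, and the commands are destructive.]

```shell
# Mirrored pool from two 10 TB disks:
zpool create tank mirror /dev/sdb /dev/sdc

# Years later, grow by swapping in bigger disks one at a time;
# with autoexpand on, capacity grows once both legs have been
# replaced and resilvered:
zpool set autoexpand=on tank
zpool replace tank /dev/sdb /dev/sdd
zpool replace tank /dev/sdc /dev/sde
```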
On Wed, Nov 23, 2016 at 9:33 PM, Carlos E. R. <robin.listas@telefonica.net> wrote:
On 2016-11-19 14:33, Carlos E. R. wrote:
On 2016-11-19 00:05, Greg Freemyer wrote:
I know someone that is an expert on this (it is his job), perhaps he is reading. I'll ping him, just in case ;-)
Well, he no longer reads this list, too busy, but he gave me some ideas, which I'll translate to English here.
First, run away from complexities, the simplest solution can be the best. Don't be led by the bells and whistles you read on internet.
He says he still prefers ZFS because it does CRC of data blocks on disk, so that it ensures that data is properly stored. Also it allows for deduplication. The snag is it uses a lot of RAM.
Another question is when and how much do you intend to grow? Instead of LVM it can be worth to migrate the hard disk to another one, more modern at the time and bigger. Ie, buy 10 T now, perhaps 20 TB later. Perhaps SSD then.
(re RAID) Another thing to take into account is a filesystem that allows recreating only the lost blocks. IOW, a hardware raid recreates the entire hard disk regardless of what is actually in use. 1 TB takes about 24 hours to replicate, so 10 days for a 10 TB disk. ZFS rebuilds only those blocks that contained data, thus faster than a traditional HW raid.
Another thing to take into account is that if you use HW RAID and LVM on top, expanding it costs big money (also with ZFS), because you can not simply add a disk, you need an entire RAID.
If you want something simple and fast; 2 * 10 TB HDs in a (cheap) server, and ZFS doing mirror (RAID 1) between both. When expanding: connect 2 new (bigger) hard disks in raid 1 and migrate the entire thing.
That is what he said, quick translation. :-)
Thanks Carlos,

Lots of good advice in there. I'm going to experiment with the Drobo I bought as a first effort. Should be in my hands in a week.

Greg
Le 24/11/2016 à 05:30, Greg Freemyer a écrit :
Lots of good advice in there. I'm going to experiment with the Drobo I bought as a first effort. Should be in my hands in a week.
and please report; the drobo seems very promising, but the modern versions are not cheap :-)

thanks
jdd
On Thursday, November 24, 2016, jdd <jdd@dodin.org> wrote:
Le 24/11/2016 à 05:30, Greg Freemyer a écrit :
Lots of good advice in there. I'm going to experiment with the Drobo I bought as a first effort. Should be in my hands in a week.
and please report, the drobo seems very promising, but the modern versions are not cheap :-)
Agreed on the expense. I've wanted one for years, but couldn't justify the cost. The unit I just bought I got for less than 10 percent of the new cost. But it seems to have all the features I was looking for.

My biggest unanswered questions are: how big a drive (in TB) will it accept? How big a logical volume can it build?

The currently sold units only go to 64 TB volumes. That's enough for me, but not by a factor of 10, as an example. If the 5-year-old unit I just bought can't go over 20 TB volumes, I will be disappointed.

Greg
Greg Freemyer wrote:
On Thursday, November 24, 2016, jdd <jdd@dodin.org> wrote:
Le 24/11/2016 à 05:30, Greg Freemyer a écrit :
Lots of good advice in there. I'm going to experiment with the Drobo I bought as a first effort. Should be in my hands in a week.
and please report, the drobo seems very promising, but the modern versions are not cheap :-)
Agreed on the expense. I've wanted one for years, but couldn't justify the cost. The unit I just bought I got for less than 10 percent of the new cost. But it seems to have all the features I was looking for.
I would have stayed away from the black magic and just done it in Linux with LVM and mdraid, but let us know how you fare.
My biggest unanswered questions are: how big a drive (in TB) will it accept? How big of a logical volume can it build?
Surely the specs document that quite clearly? It's the typical bit of leverage that makes people buy the bigger/next one.

--
Per Jessen, Zürich (7.9°C)
http://www.dns24.ch/ - your free DNS host, made in Switzerland.
On Thu, Nov 24, 2016 at 12:29 PM, Per Jessen <per@computer.org> wrote:
Greg Freemyer wrote:
On Thursday, November 24, 2016, jdd <jdd@dodin.org> wrote:
Le 24/11/2016 à 05:30, Greg Freemyer a écrit :
Lots of good advice in there. I'm going to experiment with the Drobo I bought as a first effort. Should be in my hands in a week.
and please report, the drobo seems very promising, but the modern versions are not cheap :-)
Agreed on the expense. I've wanted one for years, but couldn't justify the cost. The unit I just bought I got for less than 10 percent of the new cost. But it seems to have all the features I was looking for.
I would have stayed away from the black magic and just done it in Linux with LVM and mdraid, but let us know how you fare.
My biggest unanswered questions are: how big a drive (in TB) will it accept? How big of a logical volume can it build?
Surely the specs document that quite clearly? It's the typical bit of leverage that make people buy the bigger/next one.
When released in 2010 it was 2TB drives max, and the biggest volume was 12TB I think. In 2013, they released a firmware update allowing 4TB drives / 16TB volumes.

I hope they have done another upgrade since then, but I haven't researched it as much as I could have. I didn't really expect to win the bid at $150, since they were over $2K new.

Greg
Greg Freemyer wrote:
On Thu, Nov 24, 2016 at 12:29 PM, Per Jessen <per@computer.org> wrote:
Surely the specs document that quite clearly? It's the typical bit of leverage that makes people buy the bigger/next one.
When released in 2010 it was 2TB drives max, and the biggest volume was 12TB I think. In 2013, they released a firmware update allowing 4TB drives / 16TB volumes.
We have a number of storage servers with that 2Tb/drive limitation too. No updated firmware available from 3ware; they would rather we upgraded the controllers :-) Instead I keep buying 2Tb drives, two every other month. I sometimes think the manufacturers ought to put a switch on drives to limit capacity to 2Tb, but so far 2Tb drives have remained available.
I hope they have done another upgrade since then, but I haven't researched it as much as I could have. I didn't really expect to win the bid at $150.since they were over $2K new.
Supply and demand, in particular the latter. The other day I bought an HP DL380 G5 which would have been between 7 and 10K when new - 1Fr. (for spare parts).

--
Per Jessen, Zürich (5.6°C)
http://www.hostsuisse.com/ - dedicated server rental in Switzerland.
Le 24/11/2016 à 17:18, Greg Freemyer a écrit :
The currently sold units only go to 64 TB volumes. That's enough for me, but not by a factor of 10, as an example. If the 5-year-old unit I just bought can't go over 20 TB volumes, I will be disappointed.
there is one on ebay now (germany):
http://www.ebay.fr/itm/like/152326621289?dest=http%3A%2F%2Fwww.ebay.fr%2Fitm...
but only USB2 (or Firewire), no network, and according to amazon:
https://www.amazon.com/Drobo-Beyond-FireWire-Storage-DR04DD10/dp/B001CZ9ZEE
only 16TB max.

see all here: https://en.wikipedia.org/wiki/Drobo

jdd
On 2016-11-24 02:33, Carlos E. R. wrote:
On 2016-11-19 14:33, Carlos E. R. wrote:
On 2016-11-19 00:05, Greg Freemyer wrote:
I know someone that is an expert on this (it is his job), perhaps he is reading. I'll ping him, just in case ;-)
Well, he no longer reads this list, too busy, but he gave me some ideas, which I'll translate to English here.
First, run away from complexities, the simplest solution can be the best. Don't be led by the bells and whistles you read on internet.
He says he still prefers ZFS because it does CRC of data blocks on disk, so that it ensures that data is properly stored. Also it allows for deduplication. The snag is it uses a lot of RAM.
Just to add a note of caution from my own experience. Our sysadmins like ZFS, so when I needed a new data store to replace some old Reiser 3 and XFS ones, they set up a ZFS pool. I tested it and it had absolutely dreadful performance, unusable. They bought some SSDs to add to the pool for the metadata and they upgraded ZFS. It now works acceptably, but I don't find it as easy to manage (but maybe that's just lack of familiarity?)

Cheers, Dave
On 2016-11-24 12:14, Dave Howorth wrote:
I'm also not familiar with ZFS myself, but my correspondent is. Remember, though, that he said ZFS consumes lots of RAM. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
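The RAM warning is mostly about deduplication: ZFS wants its dedup table (DDT) resident in memory, and a commonly cited rule of thumb is roughly 320 bytes per unique record. A back-of-the-envelope estimate for the 10 TB pool discussed in this thread (the figures are rules of thumb, not exact ZFS internals):

```python
def ddt_ram_estimate(pool_bytes: int, recordsize: int = 128 * 1024,
                     bytes_per_entry: int = 320) -> float:
    """Rough RAM needed to keep the ZFS dedup table in core, in GiB.

    Assumes one DDT entry per record and ~320 bytes per entry,
    both commonly quoted approximations rather than exact figures.
    """
    blocks = pool_bytes // recordsize
    return blocks * bytes_per_entry / 2**30

# A full 10 TB pool at the default 128 KiB recordsize:
print(round(ddt_ram_estimate(10 * 10**12), 1))  # about 22.7 GiB
```

Smaller recordsizes multiply this quickly, which is why dedup is usually left off unless the data is known to be highly redundant; checksumming alone costs almost no RAM.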
On 2016-11-24 12:43, Carlos E. R. wrote:
Yes, the server has 64 GB of main memory, so I don't think that was the problem :)
participants (10)
- Anton Aylward
- Carlos E. R.
- Dave Howorth
- Dave Howorth
- ellanios82
- Florian Gleixner
- Greg Freemyer
- jdd
- Lew Wolfgang
- Per Jessen