You're welcome. On Mon, 28 Sep 2015, jdd wrote:
As far as I understand, any RAID is software driven (any computer hardware is software driven :-). The question is just where the software runs.
Hmm, could be, good point :). So the only difference is an extra processing chip.
* Fake RAID is built into your own computer. Depending on the maker, part of the software is in ROM (possibly in the BIOS) and part is in the OS, which may have to be Windows (and which version?) to use the maker's driver. Linux may or may not see this, probably not.
This accurately describes my situation, yes.
* Soft RAID in Linux is very well implemented software: a kernel module plus user-space commands. Of course it uses processor power, but who really needs all of our processor power all the time? I guess the penalty is small. But I have no idea how Windows manages this, probably not that well.
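To make the "kernel module + user-space commands" split concrete, here is a minimal sketch of driving the kernel's md driver with mdadm. The device names (/dev/sda2, /dev/sdb2, /dev/md0) are hypothetical examples, not anyone's actual layout:

```shell
# Create a RAID 1 (mirror) array from two partitions.
# mdadm is the standard user-space tool; the md kernel module does the actual work.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Watch array state and sync/rebuild progress via the kernel's status file.
cat /proc/mdstat

# Detailed status of one array.
mdadm --detail /dev/md0
```

The resulting /dev/md0 is then used like any ordinary block device (mkfs, mount, LVM PV).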
That is why I was questioning the penalty. For very high performance systems under constant load the advantages obviously matter. But for my use, I can assure you that to date I would have gained absolutely zero performance benefit from a real hardware RAID solution, meaning a "real RAID" where the mapping of disks/sectors to the logical entity is done in ROM/the RAID chip.
That said, I have no need for continuous availability in my use, so I find RAID a waste of disk space and no longer use it...
I hate SSDs. I also don't much like 3.5" HDDs in my system. Call me crazy, but I use 2x 2.5" HDDs in RAID 0 (stripe). Striping has little performance penalty (for seeks) and you don't lose any capacity, because you keep the doubled size.

With software RAID it gets better, because you can "interleave" your partitions: put stripe RAID where it matters and mirror RAID where you want safety. A mirror RAID still has higher read/access speeds than non-RAID, I believe; probably, perhaps depending on the implementation, you get a nice improvement in random read times. It is probably cheaper than SSD (you can get 2x 1TB disks, for instance), you get a load of storage, and it is kind of cute to do. You get higher read/write throughput where you want it, and you don't really have to care much about "continuous availability" in that case, although in practice you probably run a higher risk on the stripe partitions. I have a Debian system (though I don't really know how to maintain it) that will boot fine from either disk if one disk is missing; it will just have a degraded array, and the stripe sets will be unavailable.

You can put LVM on the md devices if you want (md0, md1, etc.), and then you have the exact same interface as with regular non-RAID LVM. I consider the ability to "interleave" RAID partitions very nice and interesting, a great help and a powerful feature. I'm not sure how to combine it in a real, normal system, though; you'd have to put / (root) on the mirror if you wanted that. But say you do video editing or compiling, or whatever: just put it on the stripe! :p. Anyway.
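The interleaved layout described above can be sketched roughly as follows. This is only an illustration under assumed names: two disks partitioned identically, with the partition numbers, md device names, volume group names, and sizes all made up for the example:

```shell
# Assumed layout on two identically partitioned disks:
#   sda1 + sdb1 -> RAID 1 (mirror)  : system / root, safety matters
#   sda2 + sdb2 -> RAID 0 (stripe)  : scratch space, throughput matters
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2

# LVM on top of the md devices: the same interface as non-RAID LVM.
pvcreate /dev/md0 /dev/md1
vgcreate vg_mirror /dev/md0
vgcreate vg_stripe /dev/md1

# Hypothetical volumes: root on the mirror, a scratch area on the stripe.
lvcreate -n root    -L 30G      vg_mirror
lvcreate -n scratch -l 100%FREE vg_stripe
```

With this split, losing one disk degrades vg_mirror but keeps it usable, while vg_stripe is gone until the disk is replaced, which matches the trade-off described above.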
Good discussion, anyway, but it doesn't seem to show any new development in RAID.
Maybe I just showed you something :p. It's just my own... amateur testing, so to speak. ;-). Bye.
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org