
i know for a fact, because i've done it many times, that i can take software raid drives from one system and put them in another without any hassle at all.
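for illustration, on a typical Linux md (software raid) setup the whole "move" is roughly the following; the array and device names here are just examples, not anyone's actual config:

  # on the old host, note what the array looks like (optional)
  mdadm --detail /dev/md0

  # on the new host, after physically moving the drives, the kernel or
  # initramfs will usually auto-assemble the array; if not, scan the md
  # superblocks on the drives and assemble from them
  mdadm --examine --scan
  mdadm --assemble --scan

the raid metadata lives on the drives themselves, which is why it doesn't matter what controller or host the drives end up plugged into.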
That's one of the main reasons I had chosen a software raid setup before. In fact the original system was first set up on an HP ProLiant G4 or G5(?) with Linux software raid. That was moved to another host with zero problems: just moved the drives across and rebooted. It was thought that it would be just as easy to move to an HP ProLiant G7. Daniel.
have you, or anyone else, actually done that with, say, a raid array from an HP controller being moved to an adaptec controller? or from any proprietary HW RAID card to another brand? in my experience it's usually not even possible when moving to a newer model of the same brand.
see also my last message on the flexibility advantages of SW RAID over HW RAID.
If you buy an HP server to run something important that needs little down-time, then you probably have a support contract that guarantees just that. If your HP server doesn't need such support guarantees, then you can probably deal with a delay in getting a new RAID card.
if you don't need such support guarantees, then why even use a brand-name server?
you get better performance and much better value for money with non-branded server hardware that you either build yourself or pay one of the specialist server companies to build for you.
that still doesn't make hardware raid a better or even a good solution, just a tolerable one.
for raid-1 or 10, software raid beats the hell out of HW raid.
For RAID-5 and RAID-6, an HP hardware RAID controller with battery-backed write-back cache vastly outperforms any pure software RAID implementation.
i used to have exactly the same opinion - battery-backed or flash-based write caches meant that HW RAID was not only much better but absolutely essential for RAID-5 or RAID-6, because write performance on RAID-5/6 really sucks without write caching.
but now ZFS can use an SSD (or other fast block device) as ZIL, and kernel modules like bcache[1] and facebook's flashcache[2] can provide the same kind of caching using any fast block device for any filesystem.
so, that one advantage is gone, and has been for several years now.
[1] http://en.wikipedia.org/wiki/Bcache
[2] http://en.wikipedia.org/wiki/Flashcache
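as a rough sketch of what that looks like in practice (the pool and device names below are made up for the example):

  # ZFS: add a fast SSD partition as a dedicated intent log (SLOG) device,
  # so synchronous writes land on the SSD instead of the raid-z vdevs
  zpool add tank log /dev/nvme0n1p1

  # bcache: create a cache device on the SSD and a backing device on the
  # slow array in one step; they're attached automatically and a
  # /dev/bcacheN device appears that you put your filesystem on
  make-bcache -C /dev/nvme0n1p2 -B /dev/md0

note that bcache defaults to writethrough caching, so you have to switch it to writeback mode explicitly (see the bcache docs) before it helps RAID-5/6 write performance.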
at the moment, the fastest available block devices are PCI-e SSDs (or PCI-e battery-backed RAMdisks). in the not too distant future, they'll be persistent RAM devices that run at roughly the same speed as current RAM. Linux Weekly News[3] has had several articles on linux support for them over the last few years. ultimately, i expect even bulk storage will be persistent RAM devices but initially it will be cheaper to have persistent RAM caching in front of magnetic disks or SSDs.
[3] search for 'NVM' at https://lwn.net/Search/
craig