
Hi all,

I have a system running on a Linux *software RAID* 10 setup on Ubuntu 14.04. This system is running very well. However, for various reasons this system needs to be moved from one (hardware) host to another host (an HP ProLiant ML110 G7 server). I had in mind to just take the drives from one machine to the other. (The actual workload is in a VM hosted on a very basic KVM installation, so as long as I get a base host running, I am happy!)

The HP ProLiant, though, has hardware RAID (and I don't think I can disable it; the original host does not have RAID).

Q: Can I install software RAID on top of a hardware RAID system? If yes, how would I set up the hardware RAID? In effect I need the software RAID to still see 4 hard drives.

Thanks,
Daniel.

On Sun, Apr 26, 2015 at 02:28:07PM +1000, Daniel Jitnah wrote:
Can I install software RAID on top of a hardware RAID system?
Yes. You do that by ignoring or disabling the HW RAID features of the card.
If yes, how would I set up the hardware RAID? In effect I need the software RAID to still see 4 hard drives.
You should be able to tell the HP RAID card to use the drives as JBOD or, at worst, set up each drive as a degraded RAID-1 array by itself... then you can run Linux software RAID on top of that. Try googling for the brand and model of the RAID card + "linux software raid".

I'd take a backup of the drives before doing anything.

If neither of the above works, you could try replacing the RAID card with a non-RAID controller for that kind of drive - what kind of drives are they? SCSI? SATA? Maybe the m/b has non-RAID drive ports as well as the RAID card.

craig

--
craig sanders <cas@taz.net.au>
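A rough sketch of the software RAID side once the controller exposes the drives individually (device names like /dev/sdb-/dev/sde and the partition numbers are assumptions; the array already exists from the old host, so it only needs to be detected and assembled):

    # check that all four member disks are visible and carry md superblocks
    lsblk
    mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

    # assemble the existing RAID-10 array from whatever members are found
    mdadm --assemble --scan

    # confirm the array is up
    cat /proc/mdstat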

On Sun, 26 Apr 2015 02:53:40 PM Craig Sanders wrote:
If yes, how would I set up the hardware RAID? In effect I need the software RAID to still see 4 hard drives.
You should be able to tell the HP RAID card to use the drives as JBOD or, at worst, set up each drive as a degraded RAID-1 array by itself... then you can run Linux software RAID on top of that.
Last time I tried this with an HP server there was no way to run the drives without a RAID header. The RAID header tells the RAID controller to treat the disk as a JBOD, but it makes the usable disk slightly smaller and puts the data at a slight offset.
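A quick way to check whether that matters for existing md members is to compare the usable size the old host saw with what the HP controller presents (a minimal sketch; the device and partition names are assumptions):

    # usable size of the whole disk, in bytes - run on both hosts and compare
    blockdev --getsize64 /dev/sdb

    # the sizes and offsets the md superblock expects on a member partition
    mdadm --examine /dev/sdb1 | grep -Ei 'dev size|offset'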
If neither of the above works, you could try replacing the RAID card with a non-RAID controller for that kind of drive - what kind of drives are they? SCSI? SATA? Maybe the m/b has non-RAID drive ports as well as the RAID card.
If the ML series of HP servers is the same as the DL series then you have a RAID card that exactly fits and cables that exactly cover the distance to the drives. The easiest option is to just use the hardware RAID. Unless you are going to use ZFS or BTRFS, the HP hardware RAID is a better option.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/

On Sun, Apr 26, 2015 at 04:04:02PM +1000, Russell Coker wrote:
On Sun, 26 Apr 2015 02:53:40 PM Craig Sanders wrote:
You should be able to tell the HP RAID card to use the drives as JBOD or, at worst, set up each drive as a degraded RAID-1 array by itself... then you can run Linux software RAID on top of that.
Last time I tried this with an HP server there was no way to run the drives without a RAID header. The RAID header tells the RAID controller to treat the disk as a JBOD, but it makes the usable disk slightly smaller and puts the data at a slight offset.
I.e. it can't be done without wiping the original drives.

It might be possible to shrink the filesystem and partitions on the original drives and use gparted to move the partition(s) to make room for the RAID offset... but that's a lot of stuffing around - it would probably be much less hassle to back up, repartition with the HW RAID, and restore.

Stupid crap like this is one of the reasons why HW RAID cards should be avoided. This is an anti-feature that serves only HP by locking customers in to their products.
Unless you are going to use ZFS or BTRFS, the HP hardware RAID is a better option.
I disagree with the last part of that. The only time hardware RAID is even a reasonable option is when you want to use RAID-5/6 *AND* you have a support contract with the HW vendor *AND* can afford to have a second identical RAID card sitting idle. That still doesn't make hardware RAID a better or even a good solution, just a tolerable one.

For RAID-1 or 10, software RAID beats the hell out of HW RAID, and ZFS mirrored pairs add error-checking and correction of data as well as snapshotting and many other useful features.

For RAID-5 or RAID-6, Linux software RAID is better, and ZFS RAID-Z is far superior for the same reasons that ZFS mirroring is superior to RAID-1.

IMO, the only reason to use HW RAID is if you have no other choice.

craig

--
craig sanders <cas@taz.net.au>

On Sun, 26 Apr 2015 04:30:31 PM Craig Sanders wrote:
Last time I tried this with an HP server there was no way to run the drives without a RAID header. The RAID header tells the RAID controller to treat the disk as a JBOD, but it makes the usable disk slightly smaller and puts the data at a slight offset.
I.e. it can't be done without wiping the original drives. It might be possible to shrink the filesystem and partitions on the original drives and use gparted to move the partition(s) to make room for the RAID offset... but that's a lot of stuffing around - it would probably be much less hassle to back up, repartition with the HW RAID, and restore.
Yes.
Stupid crap like this is one of the reasons why HW RAID cards should be avoided. This is an anti-feature that serves only HP by locking customers in to their products.
I think that HP RAID supports a purported industry standard for such things, so it's not just them. Also, if you have the RAID metadata at the front of the disk then a RAID volume can't be accidentally mounted as non-RAID. In the early days of Linux Software RAID it was a feature that you could mount half of a RAID-1 array as a non-RAID device, but that had serious potential for data loss if you made a mistake. Now Linux Software RAID usually defaults to the version 1.2 format, which has the metadata near the start of the device. So your criticism of HP RAID can be applied to Linux Software RAID.
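As a rough illustration of the metadata-format point (device names are assumptions; 0.90 and 1.0 put the superblock at the end of the member, 1.1 and 1.2 at or near the start):

    # see which superblock format an existing member uses and where its data starts
    mdadm --examine /dev/sdb1 | grep -E 'Version|Offset'

    # the old end-of-device format: members look like plain filesystems and can
    # be mounted directly - convenient, but risky for exactly the reason above
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdb1 /dev/sdc1

    # the current default (1.2): the superblock sits near the start, so a member
    # can't be mistaken for a non-RAID filesystem
    mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=1.2 /dev/sdd1 /dev/sde1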
Unless you are going to use ZFS or BTRFS, the HP hardware RAID is a better option.
I disagree with the last part of that. The only time hardware RAID is even a reasonable option is when you want to use RAID-5/6 *AND* you have a support contract with the HW vendor *AND* can afford to have a second identical RAID card sitting idle.
If you buy an HP server to run something important that needs little downtime then you probably have just that. If your HP server doesn't need such support guarantees then you can probably deal with a delay in getting a new RAID card.
That still doesn't make hardware RAID a better or even a good solution, just a tolerable one.
For RAID-1 or 10, software RAID beats the hell out of HW RAID,
For RAID-5 and RAID-6, an HP hardware RAID with battery-backed write-back cache vastly outperforms any pure software RAID implementation.
and ZFS mirrored pairs add error-checking and correction of data as well as snapshotting and many other useful features.
For RAID-5 or RAID-6, Linux software RAID is better, and ZFS RAID-Z is far superior for the same reasons that ZFS mirroring is superior to RAID-1.
I agree that ZFS features are good, and I've run an HP server with its RAID configured as a JBOD for ZFS. But I've also run HP RAID-6 and found it to be dramatically better than Linux Software RAID.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/

On Sun, Apr 26, 2015 at 10:16:44PM +1000, Russell Coker wrote:
Stupid crap like this is one of the reasons why HW RAID cards should be avoided. This is an anti-feature that serves only HP by locking customers in to their products.
I think that HP RAID supports a purported industry standard for such things, so it's not just them. Also, if you have the RAID metadata at the front of the disk then a RAID volume can't be accidentally mounted as non-RAID. In the early days of Linux Software RAID it was a feature that you could mount half of a RAID-1 array as a non-RAID device, but that had serious potential for data loss if you made a mistake. Now Linux Software RAID usually defaults to the version 1.2 format, which has the metadata near the start of the device.
So your criticism of HP RAID can be applied to Linux Software RAID.
I know for a fact, because I've done it many times, that I can take software RAID drives from one system and put them in another without any hassle at all.

Have you, or anyone else, actually done that with, say, a RAID array from HP being moved to an Adaptec controller? Or from any proprietary HW RAID card to another brand? In my experience it's usually not even possible when moving to a newer model of the same brand.

See also my last message on the flexibility advantages of SW RAID over HW RAID.
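For reference, making a moved set of md members permanent on the new host usually amounts to something like this (a minimal sketch; the config path assumes a Debian/Ubuntu-style layout):

    # record the arrays found on the attached disks in the new host's config
    # so they are assembled automatically at boot
    mdadm --examine --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u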
If you buy an HP server to run something important that needs little downtime then you probably have just that. If your HP server doesn't need such support guarantees then you can probably deal with a delay in getting a new RAID card.
If you don't need such support guarantees, then why even use a brand-name server? You get better performance and much better value for money with non-branded server hardware that you either build yourself or pay one of the specialist server companies to build for you.
That still doesn't make hardware RAID a better or even a good solution, just a tolerable one.
For RAID-1 or 10, software RAID beats the hell out of HW RAID,
For RAID-5 and RAID-6, an HP hardware RAID with battery-backed write-back cache vastly outperforms any pure software RAID implementation.
I used to have exactly the same opinion - battery-backed or flash-based write caches meant that HW RAID was not only much better but absolutely essential for RAID-5 or RAID-6, because write performance on RAID-5/6 really sucks without write caching.

But now ZFS can use an SSD (or other fast block device) as ZIL, and kernel modules like bcache[1] and Facebook's flashcache[2] can provide the same kind of caching using any fast block device for any filesystem. So that one advantage is gone, and has been for several years now.

[1] http://en.wikipedia.org/wiki/Bcache
[2] http://en.wikipedia.org/wiki/Flashcache

At the moment, the fastest available block devices are PCI-e SSDs (or PCI-e battery-backed RAM disks). In the not too distant future, they'll be persistent RAM devices that run at roughly the same speed as current RAM. Linux Weekly News[3] has had several articles on Linux support for them over the last few years. Ultimately, I expect even bulk storage will be persistent RAM devices, but initially it will be cheaper to have persistent RAM caching in front of magnetic disks or SSDs.

[3] search for 'NVM' at https://lwn.net/Search/

craig

--
craig sanders <cas@taz.net.au>
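A minimal sketch of what the ZFS side of that looks like (the pool name 'tank' and the device paths are assumptions; the log device is the ZIL/SLOG, the cache device is L2ARC):

    # add an SSD partition as a dedicated intent log for synchronous writes
    zpool add tank log /dev/disk/by-id/ata-SOME_SSD-part3

    # add another SSD partition as a second-level read cache
    zpool add tank cache /dev/disk/by-id/ata-SOME_SSD-part4

    # check the resulting layout
    zpool status tank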

On Mon, 27 Apr 2015 10:18:19 AM Craig Sanders wrote:
But now ZFS can use an SSD (or other fast block device) as ZIL, and kernel modules like bcache[1] and Facebook's flashcache[2] can provide the same kind of caching using any fast block device for any filesystem.
So that one advantage is gone, and has been for several years now.
[1] http://en.wikipedia.org/wiki/Bcache
[2] http://en.wikipedia.org/wiki/Flashcache
I recommend that JBOD be used for ZFS on servers, but if not using ZFS then hardware RAID gives benefits. It's very rare that I see any mention of bcache except in the context of data loss; it's known to lose data with some recent versions of BTRFS. I recommend not using bcache on servers at this time; use it on test systems that have good backups.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/

Don't forget the other benefit of hardware RAID is being able to boot from a degraded array. E.g. if you use software RAID and your GRUB is on a failed drive, you will have to manually force the BIOS to boot from the other (working) drive or it will just hang on boot.

For this reason it's worth considering having at least your GRUB loader on a RAID partition that uses the hardware RAID controller, even if the rest of your drives use software RAID. Or being aware of this limitation, and being prepared to manually boot, or boot GRUB from an external device, if your RAID is degraded.

- Noah

On 26 April 2015 at 14:28, Daniel Jitnah <djitnah@greenwareit.com.au> wrote:
Hi all,
I have a system running on a Linux *software raid* 10 setup on Ubuntu 14.04. This system is running very well.
However, for various reasons this system needs to be moved from one (hardware) host to another host (an HP ProLiant ML110 G7 server). I had in mind to just take the drives from one machine to the other. (The actual workload is in a VM hosted on a very basic KVM installation, so as long as I get a base host running, I am happy!)
The HP ProLiant, though, has hardware RAID (and I don't think I can disable it; the original host does not have RAID).
Q:
Can I install software RAID on top of a hardware RAID system? If yes, how would I set up the hardware RAID? In effect I need the software RAID to still see 4 hard drives.
Thanks Daniel.

On Mon, Apr 27, 2015 at 09:24:47AM +1000, Noah O'Donoghue wrote:
Don't forget the other benefit of hardware RAID is being able to boot from a degraded array.
It's an advantage of software RAID too. I've been doing that with software RAID for years. In fact, it's one of the reasons I usually have a RAID-1 /boot partition no matter how the rest of the drive space is being used (RAID, ZFS, btrfs, whatever).

That flexibility is one of the benefits of software RAID - I can decide on a per-partition basis (rather than per-disk) what kind, if any, of RAID I'm going to have. E.g. my current main system has a matched pair of 256GB SSDs - using RAID-1 for / and /boot, and no RAID for the swap, ZIL, and L2ARC partitions (RAID makes no sense for those). The bulk data storage is on two 4x1TB ZFS RAID-Z1 pools (mounted as /export and /backup - the latter takes rsync and 'zfs send' backups from all systems on my home network). This wouldn't be possible with hardware RAID.

(I haven't upgraded the RAID-Z pools to larger/newer drives because the current capacity is adequate for my needs and I'm holding out for SSDs to get *much* cheaper. With luck, the current drives will last until 1 or 2 TB SSDs are affordable to buy in quantities of 4+, or until 4+TB SSDs are affordable in pairs.)
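A minimal sketch of that kind of per-partition arrangement (the partition numbers and exact split are assumptions, not the actual layout described above):

    # RAID-1 for /boot and / across the matched pair of SSDs
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

    # no RAID for swap - just an independent swap area on each disk
    mkswap /dev/sda3
    mkswap /dev/sdb3

    # the remaining SSD partitions are handed straight to ZFS as log (ZIL)
    # and cache (L2ARC) devices, as in the earlier zpool example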
E.g. if you use software RAID and your GRUB is on a failed drive, you will have to manually force the BIOS to boot from the other (working) drive or it will just hang on boot.
This might be a problem on ancient hardware, but it's not a problem on anything reasonably modern (at least the last 5+ years).
For this reason it's worth considering having at least your GRUB loader on a RAID partition that uses the hardware RAID controller, even if the rest of your drives use software RAID. Or being aware of this limitation, and being prepared to manually boot, or boot GRUB from an external device, if your RAID is degraded.
Or just grub-install to all drives and configure the BIOS to attempt to boot from each drive in turn. I haven't seen a BIOS (incl. UEFI) for years that doesn't allow you to specify a boot order.

craig

--
craig sanders <cas@taz.net.au>
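A minimal sketch of that approach on a BIOS/MBR system (the device list is an assumption - use whichever disks are members of the array):

    # install the boot loader into the MBR of every disk, so any surviving
    # disk can start the boot process on its own
    for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
        grub-install "$disk"
    done
    update-grub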

On 27 April 2015 at 09:48, Craig Sanders <cas@taz.net.au> wrote:
Or just grub-install to all drives and configure the BIOS to attempt to boot from each drive in turn. I haven't seen a BIOS (incl. UEFI) for years that doesn't allow you to specify a boot order.
How does this work when the drive responds initially but then has I/O errors? I can understand it if the drive presents as offline or fails completely to read the first sector, but once booting starts, control has passed to whatever code is run on that first drive. There's nothing running to time out the boot and proceed to the next drive. Whereas a RAID controller is aware of both drives and will just issue a read from the other drive a short time after a read fails to return on the first drive. In my experience doing IT support I've seen heaps of OS drives that will proceed past a Linux or Windows splash screen, then hang later on in the boot process.

On Mon, 27 Apr 2015 at 09:49 Craig Sanders <cas@taz.net.au> wrote:
Or just grub-install to all drives and configure the BIOS to attempt to boot from each drive in turn. I haven't seen a BIOS (incl. UEFI) for years that doesn't allow you to specify a boot order.
Just watch out, sometimes grub-install doesn't do what you would expect. E.g. sometimes an install on sdb can depend on sda.

E.g. recently I wanted to replace my disks with bigger disks, so I ran:

grub-install /dev/sda  # yes, this was redundant
grub-install /dev/sdb

Then I removed sda, replaced it with the new disk, and attempted to boot from sdb. For some reason, grub insisted on loading itself from sda, not sdb, which of course didn't work, and as a result I only got the restricted rescue-mode grub command line, which I am totally unfamiliar with - and I couldn't do a Google search easily without the server which I had just taken offline. Fortunately I booted a rescue system via CD (which I had prepared earlier just in case) and fixed it. (This was with a Debian wheezy system.)

I am not sure of the boot sequence used by grub2, so I'm not sure exactly what went wrong here. Maybe it was trying to load files from /boot, which is software RAID-1 on both disks.
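For anyone else who lands at that restricted rescue prompt, a rough sketch of the commands that can often get the system up from the surviving disk - the partition name here is an assumption, so use ls at the prompt to find the one that actually contains grub's files:

    grub rescue> ls
    grub rescue> set root=(hd0,msdos1)
    grub rescue> set prefix=(hd0,msdos1)/grub
    grub rescue> insmod normal
    grub rescue> normal

(If /boot is not a separate partition, the prefix is usually (hd0,msdosN)/boot/grub instead.)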

"Noah O'Donoghue" <noah.odonoghue@gmail.com> writes:
Don't forget the other benefit of hardware RAID is being able to boot from a degraded array.
E.g. if you use software RAID and your GRUB is on a failed drive, you will have to manually force the BIOS to boot from the other (working) drive or it will just hang on boot.
Or get a better bootloader. (Hint: extlinux.)

How does extlinux solve the problem? It just seems like another bootloader to me. GRUB already has support for RAID, /boot on RAID, /boot on encrypted/RAIDed/LVM...

On 27 April 2015 at 11:23, Trent W. Buck <trentbuck@gmail.com> wrote:
"Noah O'Donoghue" <noah.odonoghue@gmail.com> writes:
Don't forget the other benefit of hardware RAID is being able to boot from a degraded array.
E.g. if you use software RAID and your GRUB is on a failed drive, you will have to manually force the BIOS to boot from the other (working) drive or it will just hang on boot.
Or get a better bootloader. (Hint: extlinux.)

"Noah O'Donoghue" <noah.odonoghue@gmail.com> writes:
How does extlinux solve the problem? It just seems like another bootloader to me.
Grub already has support for RAID, /boot on RAID, /boot on encrypted/raided/LVM...
extlinux doesn't have a device.map. IOW, if the first disk dies, the second one doesn't load the MBR and then go "oh shit, I can't find the stage 1.5 on the second disk". Also, the MBR is completely static, so when grub helpfully installs to the MBR of your USB installer instead of the actual hard disk you told it to use, you can just cat mbr.bin >/dev/sda to fix it.
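A minimal sketch of setting extlinux up that way on a Debian-style system - the mbr.bin path and device names are assumptions, so check where your distribution ships it:

    # install extlinux's loader files into /boot (assumes an ext2/3/4 /boot)
    mkdir -p /boot/extlinux
    extlinux --install /boot/extlinux

    # write the static MBR code to every disk; it simply chains to the
    # partition flagged bootable, so set that flag on each disk too
    cat /usr/lib/extlinux/mbr.bin > /dev/sda
    cat /usr/lib/extlinux/mbr.bin > /dev/sdb
    parted /dev/sda set 1 boot on
    parted /dev/sdb set 1 boot on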

Daniel Jitnah <djitnah@greenwareit.com.au> writes:
Hi all,
I have a system running on a Linux *software raid* 10 setup on Ubuntu 14.04. This system is running very well.
However, for various reasons this system needs to be moved from one (hardware) host to another host (an HP ProLiant ML110 G7 server). I had in mind to just take the drives from one machine to the other. (The actual workload is in a VM hosted on a very basic KVM installation, so as long as I get a base host running, I am happy!)
The HP ProLiant, though, has hardware RAID (and I don't think I can disable it; the original host does not have RAID).
[I don't remember if the "ProLiant" is rackmount. If not, this anecdote is probably useless...]

IIRC I had an HP G8 server, and I bought a "passthrough" HBA for it. That let the OS see the raw drives, so Debian could do both SMART monitoring and RAID (i.e. no HW RAID at all). I ran into two problems:

1. About half the back panel was part of the default RAID HBA, which the passthrough HBA didn't replace. Yay, broken airflow.

2. There were two 4xSATA cables between the hotswap bays at the front and the HBA at the back. Because the passthrough HBA was smaller, neither cable reached. I "fixed" that by swapping the cables over (so ONE reached) and declaring "this is now a 4-disk server".

Since the passthrough HBA was (IIRC) a first-party HP part, I was *not* impressed.

PS: I think we still used HP-branded drives, so I dunno if the passthrough HBA avoided that "feed me HP drives or else" thing.

On Mon, 27 Apr 2015 11:30:02 +1000 trentbuck@gmail.com (Trent W. Buck) wrote:
Daniel Jitnah <djitnah@greenwareit.com.au> writes:
Hi all,
I have a system running on a Linux *software raid* 10 setup on Ubuntu 14.04. This system is running very well.
However, for various reasons this system needs to be moved from one (hardware) host to another host (an HP ProLiant ML110 G7 server). I had in mind to just take the drives from one machine to the other. (The actual workload is in a VM hosted on a very basic KVM installation, so as long as I get a base host running, I am happy!)
The HP ProLiant, though, has hardware RAID (and I don't think I can disable it; the original host does not have RAID).
[I don't remember if the "ProLiant" is rackmount. If not, this anecdote is probably useless...]
Some models are, some are not. The particular one used here, the ML110 G7, is not.

Cheers,
Daniel.
IIRC I had an HP G8 server, and I bought a "passthrough" HBA for it.