
On Mon, Jun 24, 2013 at 07:37:56PM +1000, Russell Coker wrote:
> On Mon, 24 Jun 2013, Craig Sanders <cas@taz.net.au> wrote:
> > if you can, use the third SATA port for the RAID-Z array as well - RAID-Z gets better performance when the number of data disks in an array is a power of 2 (e.g. 4 data disks + 1 parity for RAID-Z1, or 4 data disks + 2 parity for RAID-Z2).
>
> The SATA disks will just be for booting. They won't have ZFS because root on ZFS was way too much pain to even consider last time I looked into it. Also I think we are a long way from having ZFS root be reliable enough that I would even consider using it on a remote system with no other forms of boot available.
I made no mention of having root on ZFS; that's a completely unrelated topic. I was pointing out that there is a small but noticeable performance benefit with ZFS RAID-Z arrays if you have a power-of-two number of *data* disks - e.g. 2, 4, or 8 data disks. With 5 SAS ports, you can have 4 data disks (yes, a power of 2) and 1 parity for RAID-Z1, or you can have three data disks (NOT a power of 2) and 2 parity disks for RAID-Z2. Hence, if you want RAID-Z2 with four data disks, then 4 SAS data + 1 SAS parity + 1 SATA parity would achieve that.

My own raidz1 pools have only 4 drives each, 3 data + 1 parity - but I made them before I knew about the power-of-two thing. I'm using a pair of SATA 6Gbps OCZ Vector SSDs as my boot drives, with partitions for an mdadm RAID-1 rootfs (xfs) and /boot (ext2), a mirrored 4G ZIL, and 2 x 50G of L2ARC.
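For illustration, a layout along those lines could be set up roughly like this (just a sketch - the device names are placeholders, and in practice you'd want to use /dev/disk/by-id names rather than sdX):

  # RAID-Z1 across 5 disks: 4 data + 1 parity (data disks = a power of two)
  zpool create tank raidz1 sda sdb sdc sdd sde

  # or RAID-Z2 across 6 disks: 4 data + 2 parity (e.g. 5 SAS + 1 SATA)
  zpool create tank raidz2 sda sdb sdc sdd sde sdf

  # SSD partitions as a mirrored ZIL (log) and striped L2ARC (cache)
  zpool add tank log mirror sdg3 sdh3
  zpool add tank cache sdg4 sdh4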
> Performance doesn't matter. I'm looking at replacing a system with a pair of 80G IDE disks in a software RAID-1 which is giving more than enough performance.
RAID-1 typically performs much better than RAID-5/6 or RAID-Z, especially on writes, and switching to RAID-5/6/Z can result in a surprisingly unpleasant drop in performance. (OTOH, ancient IDE drives probably aren't very fast... but I'd still expect a RAID-1 of them to get better write speed than even modern non-SSD disks in RAID-5/RAID-Z.)

BTW, a secondary reason to use an SSD as an L2ARC cache is that L2ARC is additional to the RAM-based ARC - exactly what you need if you want to enable de-duping, which can require quite large amounts of ARC/L2ARC. De-duplication is very useful if you have a large number of VM zvols (or zfs filesystems for container VMs) that are mostly similar, e.g. all running the same OS, or all created by cloning from a snapshot of a "base" VM.
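Something like the following is the general idea (a sketch only - the pool and dataset names are made up):

  # de-dup is set per-dataset; the dedup table needs to fit in ARC/L2ARC to perform well
  zfs set dedup=on tank/vms

  # clone mostly-identical guests from a snapshot of a "base" image
  zfs snapshot tank/vms/base@installed
  zfs clone tank/vms/base@installed tank/vms/guest1
  zfs clone tank/vms/base@installed tank/vms/guest2

Clones start out sharing all their blocks with the snapshot anyway, so cloning plus dedup keeps the on-disk footprint small even as the guests diverge over time.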
> > Can you get the server to PXE boot?
>
> Thanks for the suggestion, I'll give that a go. I had it working a few years ago so I'm sure I can do it again.
I see in your later reply that you got this working - cool, good.
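For anyone else following along, a minimal netboot setup with dnsmasq looks roughly like this (a sketch only - the addresses, filename and paths are examples, not Russell's actual config):

  # /etc/dnsmasq.d/pxe.conf
  dhcp-range=192.168.1.100,192.168.1.200,12h
  dhcp-boot=pxelinux.0
  enable-tftp
  tftp-root=/srv/tftp

Drop pxelinux.0 (from the syslinux package) and its config under the tftp-root and the server should pick it up on its next PXE attempt.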
> > Have you tried just putting a regular bootable disk in?
>
> Yes, then the CCISS BIOS tells me that it found no RAID array and will default to RAID-5 if I don't say anything. Afterwards I configured it for a single-disk RAID-0 and one of those options (auto RAID-5 or manual RAID-0) wiped the MBR along the way.
Nasty. Avoiding stupid crap like that is a big part of the reason why I prefer dumb HBA cards like my IBM M1015 or LSI 9211-8i SAS cards. They don't do things like that, and they're a lot cheaper (it's hard to beat the M1015, an IBM card using the LSI SAS2008 controller - 8 SAS 6Gbps ports for under $100 new on ebay).

Smart RAID cards are pointless if you intend to run software RAID like mdadm, LVM, ZFS or btrfs anyway - you want plain dumb ports or JBOD for those, not hardware RAID.

Useful info on lots of different models and brands here:
http://forums.servethehome.com/raid-controllers-host-bus-adapters/599-lsi-ra...

craig

--
craig sanders <cas@taz.net.au>