
I've got a few of my systems (including some with data that is important to me) using BTRFS now. It's been going well, and one of my systems has had BTRFS for /home for ages (maybe a year) with no problems in recent times (some past problems when running out of space).

One of my clients needs to reliably store terabytes of data which is mostly comprised of data files in the 10MB - 15MB size range. The data files will almost never be re-written and I anticipate that the main bottleneck will be the latency of NFS and other network file sharing protocols. I would hope that saturating a GigE network when sending 10MB data files from SATA disks via NFS, AFS, or SMB wouldn't be a technical challenge.

It seems that BTRFS is the way of the future. But it's still rather new, and the lack of RAID-5 is a serious issue when you need to store 10TB with today's technology (that would be 8*3TB disks for RAID-10 vs 5*3TB disks for RAID-5).

ZFS seems to be a lot more complex than BTRFS. While having more features is a good thing (BTRFS seems to be missing some sysadmin-friendly features), complexity means more testing and more potential for making mistakes.

Of course it might turn out that RAID-5 is the killer issue. Servers start becoming a lot more expensive if you want more than 8 disks, and even 6 disks is a significant price point. An 8 disk RAID-5 gives something like 21TB usable space vs 12TB on a RAID-10, and a 6 disk RAID-5 gives about 15TB vs 9TB on a RAID-10.

Anything else I should consider?

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
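The usable-capacity figures above follow from the standard formulas; a quick worked check with 3TB drives (N = number of disks):

    RAID-5  usable ~= (N - 1) * 3TB   ->  8 disks: 21TB,  6 disks: 15TB
    RAID-10 usable ~= (N / 2) * 3TB   ->  8 disks: 12TB,  6 disks:  9TB
    RAID-6  usable ~= (N - 2) * 3TB   ->  8 disks: 18TB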

On 16/04/2012, at 20:12, Russell Coker <russell@coker.com.au> wrote:
I've got a few of my systems (including some with data that is important to me) using BTRFS now. It's been going well and one of my systems has had BTRFS for /home for ages (maybe a year) with no problems in recent times (some past problems when running out of space).
One of my clients needs to reliably store terabytes of data which is mostly comprised of data files in the 10MB - 15MB size range. The data files will almost never be re-written and I anticipate that the main bottleneck will be the latency of NFS and other network file sharing protocols. I would hope that saturating a GigE network when sending 10MB data files from SATA disks via NFS, AFS, or SMB wouldn't be a technical challenge.
It seems that BTRFS is the way of the future. But it's still rather new and the lack of RAID-5 is a serious issue when you need to store 10TB with today's technology (that would be 8*3TB disks for RAID-10 vs 5*3TB disks for RAID-5).
ZFS seems to be a lot more complex than BTRFS. While having more features is a good thing (BTRFS seems to be missing some sysadmin friendly features) complexity means more testing and more potential for making mistakes.
Of course it might turn out that RAID-5 is the killer issue. Servers start becoming a lot more expensive if you want more than 8 disks and even 6 disks is a significant price point. An 8 disk RAID-5 gives something like 21TB usable space vs 12TB on a RAID-10 and a 6 disk RAID-5 gives about 15TB vs 9TB on a RAID-10.
Anything else I should consider?
Not that I've got anything to add re ZFS vs BTRFS, having no specialist knowledge either way, but in other posts haven't you advocated for RAID-6 over RAID-5? Or is this something mandated on the client side?

On Mon, 16 Apr 2012, Colin Fee <tfeccles@gmail.com> wrote:
Of course it might turn out that RAID-5 is the killer issue. Servers start becoming a lot more expensive if you want more than 8 disks and even 6 disks is a significant price point. An 8 disk RAID-5 gives something like 21TB usable space vs 12TB on a RAID-10 and a 6 disk RAID-5 gives about 15TB vs 9TB on a RAID-10.
Anything else I should consider?
Not that I've got anything to add re ZFS vs BTRFS, having no specialist knowledge either way, but in other posts haven't you advocated for RAID-6 over RAID-5? Or is this something mandated on the client side?
If you use Linux Software RAID-6 then reconstruction apparently is not based on checking both sets of checksums, but is rather just regenerating checksums based on the available data. So RAID-6 covers you for the case when two disks entirely die, but that is rare - it's still something you want coverage from, but it doesn't give the potential benefits. I have no reason to believe that any other RAID system which still conforms to the basic RAID-6 design does anything better, although I acknowledge that there are lots of implementations that aren't well documented, so anything is possible.

http://en.wikipedia.org/wiki/Zfs

If you use ZFS with RAID-5 it will check the hashes on every block and regenerate things if they don't match. Also it's possible to go back in time and get an earlier copy of the data if there is a corrupted block in the latest copy and no redundancy (see the Wikipedia page for more info).

So if you compare Linux Software RAID-5, which only properly copes with a disk entirely dying or returning read errors, to ZFS, then ZFS wins in the following situations:

1) A disk entirely dies (or is being replaced due to sporadic errors) and another disk has a single error during recovery. ZFS can flag an error on RAID-5 and allow you to get an earlier version. Linux software RAID just loses and leaves corruption for a fsck or data file scrub by an application.

2) Two disks in a RAID-5 have a few read errors - a reasonably common failure case, as most drive failures in production are based on some read failures rather than a total death. Linux software RAID fails: it kicks out one disk and then you lose when the second disk has a read error. ZFS SHOULD just read from the other disks in the stripe for each error (which is detected by a hash mismatch) and reconstruct the data. NB I've only seen two disks in a RAID set fail with RAID-1, and Linux software RAID lost then.

3) A disk returns corrupt data for any reason.

Linux software RAID-6 deals with case 1. It also deals with case 2, although if you suddenly get a third disk giving a few read errors (which could happen due to heat) then you lose. In theory a ZFS RAID-5 (AKA RAID-Z) could cope better with some failure conditions than a Linux Software RAID-6!

That said, ZFS supports RAID-6 AKA RAID-Z2. Given the prices of 3TB disks and the fact that reasonably affordable servers can handle 8 disks, which allows 18TB of RAID-6 storage, it seems like a RAID-Z2 with ZFS is clearly a better choice for most uses (the copy-on-write feature of ZFS apparently removes the worst performance problems of RAID-5 and RAID-6).

Anyway, in my previous message I just wasn't really concerned with RAID-5 vs RAID-6. As BTRFS supports neither, ZFS supports both, and they both have very similar amounts of usable capacity for the 8 disk case, it's not an issue at this stage of planning. But I think that another general discussion of RAID technology at this time is a good thing, so your question is good and deserved a long answer.

As for my client, I will give them some options with prices and ask them how much more they want to pay for reliability. I expect that they will pay for RAID-6, not because of some sort of business analysis of risk (which they can't do), but because it doesn't cost much and it would really suck to have some down-time and data loss due to saving such a small amount of money and disk space.
http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery

As an aside, the above page about giving recovery timeouts for disk read operations should also be of interest to some people here, given the previous discussions about JBOD vs RAID modes for disks.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
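For concreteness, the scrub-and-verify behaviour being compared above corresponds to operations roughly like the following ("tank" and "md0" are hypothetical pool and array names):

# ZFS: walk every block, verify checksums, repair from redundancy where possible
zpool scrub tank
zpool status -v tank      # lists any files with unrecoverable checksum errors

# Linux software RAID: recompute and compare parity only, no per-block data checksums
echo check > /sys/block/md0/md/sync_action
cat /sys/block/md0/md/mismatch_cnt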

On Mon, Apr 16, 2012 at 11:20:13PM +1000, Russell Coker wrote:
That said, ZFS supports RAID-6 AKA RAID-Z2. Given the prices of 3TB disks and the fact that reasonably affordable servers can handle 8 disks which allows 18TB of RAID-6 storage it seems like a RAID-Z2 with ZFS is clearly a better
choice for most uses (the copy on write feature of ZFS apparently removes the worst performance problems of RAID-5 and RAID-6).
the thing that *really* eliminates the write performance problem is to add a fast SSD as a ZFS Intent Log ("ZIL" - a write cache). maybe a 120-ish GB SSD with a small (4GB, perhaps 8GB) partition for ZIL and the remainder for read-caching. a smaller SSD would be fine too, but larger SSDs tend to be faster, and the price difference between 60GB and 120GB isn't that great.

craig

--
craig sanders <cas@taz.net.au>

BOFH excuse #290:
The CPU has shifted, and become decentralized.
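A rough sketch of what Craig describes, assuming an existing pool called "tank" and an SSD already split into a small slice for the ZIL and a large one for the L2ARC read cache (the device names are made up):

zpool add tank log   /dev/disk/by-id/ata-SOME-SSD-part1   # small partition as the intent log (SLOG)
zpool add tank cache /dev/disk/by-id/ata-SOME-SSD-part2   # remainder as L2ARC read cache
zpool status tank                                         # shows the new log and cache vdevs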

On Mon, Apr 16, 2012 at 08:12:20PM +1000, Russell Coker wrote:
Anything else I should consider?
some things that occur to me:

1. zfs's "complexity" is more than offset by the length of time it has been in real world production use. ZFS is a production quality filesystem now, and btrfs really isn't yet.

2. zfs is very "sysadmin-friendly". and reliable.

3. whether compression would help, and what kinds of compression are offered by the filesystem. iirc, btrfs offers compression on subvolumes. zfs does too, and also offers several different compression methods.

4. backup, of course. e.g. LTO-4 or 5 tapes to back up the online data on btrfs or zfs, or a second system to do rsync or snapshot + zfs send backups to.

craig

--
craig sanders <cas@taz.net.au>

BOFH excuse #22:
monitor resolution too high
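For points 3 and 4, the knobs involved look roughly like this (the pool, filesystem and host names are only illustrative):

# compression
zfs set compression=gzip tank/data        # or lzjb for a lower CPU cost
zfs get compressratio tank/data           # check how much it actually helps
mount -o compress=lzo /dev/sdb1 /data     # btrfs per-mount compression

# backup via snapshots to a second box
zfs snapshot tank/data@2012-04-17
zfs send tank/data@2012-04-17 | ssh backuphost zfs receive backup/data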

On Tue, 17 Apr 2012, Craig Sanders wrote:
On Mon, Apr 16, 2012 at 08:12:20PM +1000, Russell Coker wrote:
Anything else I should consider?
some things that occur to me:
1. zfs's "complexity" is more than offset by the length of time it has been in real world production use. ZFS is a production quality filesystem now, and btrfs really isn't yet.
And it appeared on Solaris first, FreeBSD second, Linux third. It is worth considering, but it needs reflection: the OS "zoo" syndrome, the in-house (and your own) knowledge, the licensing issue (less of one on FreeBSD than on the Oracle side), and the additional expenses. I am not sure what the free (to my knowledge) Solaris Express version offers, or whether to go with OpenIndiana instead.

For my purpose I went with FreeBSD instead of Solaris; one of the reasons was the uncertainty surrounding OpenSolaris at the time.

Technically Solaris still seems to be the best "home" for ZFS: Solaris integrates NFS, SMB and iSCSI, and handles the ZFS memory (ARC) and ACL problems more or less seamlessly between ZFS and the OS.

At least they are all "Unix" ;-)
2. zfs is very "sysadmin-friendly". and reliable.
Agreed. But I do not run NFS on ZFS, and at least in the FreeBSD mailing lists I see some issues. I would research whether ZFSonLinux (and BTRFS) has these issues as well. The issues I experienced myself (not NFS-related) were solvable by kernel (sysctl and boot loader value) tuning.

Regards
Peter

Peter Ross wrote:
On Tue, 17 Apr 2012, Craig Sanders wrote:
1. zfs's "complexity" is more than offset by the length of time it has been in real world production use. ZFS is a production quality filesystem now, and btrfs really isn't yet.
That issue will (presumably) solve itself in time.
At least they are all "Unix";-)
That does not get you as far as you might think. For example, early versions of POSIX.1 did not require symlinks. *SYMLINKS!* Of course with your (Peter) fbsd background, you are probably inured to the hardships of going without a GNU userland :-)

On Tue, 17 Apr 2012, Trent W. Buck wrote:
Of course with your (Peter) fbsd background, you are probably inured to the hardships of going without a GNU userland :-)
I don't miss much these days. And at least I have man pages instead of a HowTo with the most common options ;-)

Solaris was never known for "the best userland ever", but it always had interesting features making it a worthwhile candidate. Set up some zones, play with the Crossbow network architecture, have a well-integrated ZFS... that's actually fun and makes up for a missing --never-need-that-anyway option. And if it becomes --well-may-be-handy, just install it. It's not that you can't do it.

And Solaris is robust. I am pretty sure that Russell's client would not like a spontaneous reboot now and then just because BTRFS is still alpha but may mature soon. He also may not like Craig's out-of-memory with rsync on ZFSonLinux. For a storage box that is quite "average usage".

I searched a bit for ZFS gotchas and, not very surprisingly, could not find many relevant to ZFSonLinux tuning. NFS on top of ZFS on FreeBSD seems to be tricky, but it has already been discussed and seems to be working. My search for similar issues with ZFSonLinux did not pick up much. That could mean two things: it isn't an issue, or not many have tried it under heavy load.

Solaris solves the mentioned "single point of failure" issue nicely: iSCSI is well integrated with ZFS, so a mirror over iSCSI is easy to implement.

Imagine getting a bowl of soup and a fork. You could try to bend it until it works like a spoon, or just ask for a spoon.

Regards
Peter

On Tue, 17 Apr 2012, Peter Ross wrote:
On Tue, 17 Apr 2012, Trent W. Buck wrote:
Of course with your (Peter) fbsd background, you are probably inured to the hardships of going without a GNU userland :-)
I don't miss much these days. And at least I have man pages instead of a HowTo with the most common options;-)
Solaris was never known for "the best userland ever" but it always had interesting features making it a worthwhile candidate.
Set up some zones, play with Crossbar architecture, have a well-integrated ZFS.. that's actually fun and makes up for a missing --never-need-that-anyway option. And if it becomes --well-may-be-handy, just install it. It's not that you can't do it.
Hell, if you really want, you can use /usr/xpg4/bin/grep, /usr/bin/grep, /bin/grep, /usr/local/bin/grep, /opt/sfw/bin/grep, /opt/SUNW*/bin/grep, and /usr/gnu/bin/grep simultaneously just to make sure it works! :)

Someone kill it already, dammit. There is no _valid_ reason to be bugwards compatible all the way back to 1948.

--
Tim Connors

Tim Connors wrote:
Hell, if you really want, you can use /usr/xpg4/bin/grep, /usr/bin/grep, /bin/grep, /usr/local/bin/grep, /opt/swf/bin/grep, /opt/SUNW*/bin/grep, and /usr/gnu/bin/grep simultaneously just to make sure it works! :)
Someone kill it already, dammit. There is no _valid_ reason to be bugwards combatible all the way back to 1948.
All parts should go together without forcing. You must remember that the parts you are reassembling were disassembled by you. Therefore, if you can't get them together again, there must be a reason. By all means, do not use a hammer. -- IBM maintenance manual, 1925

On 16/04/12 20:12, Russell Coker wrote:
ZFS seems to be a lot more complex than BTRFS.
I was using ZFSonLinux (not the FUSE version) for backups (as well as two external USB drives with btrfs and ext4), but the last couple of times I ran the rsync, ZFS OOM'd my machine with 8GB of RAM... not impressed. :-(

--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

On Tue, 17 Apr 2012, Chris Samuel wrote:
On 16/04/12 20:12, Russell Coker wrote:
I was using ZFSonLinux (not the FUSE version) for backups (as well as two external USB drives with btrfs and ext4), but the last couple of times I ran the rsync ZFS OOM'd my machine with 8GB of RAM.. not impressed. :-(
Under FreeBSD I restrict ARC so it eats only part of the memory (I set it to 4GB when I had 8GB).

According to http://comments.gmane.org/gmane.linux.file-systems.zfs.user/2172, under Linux it has to be put in here:

# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=2147483648

That may have prevented your OOM.

Regards
Peter
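On the FreeBSD side this is a loader tunable rather than a module option; a minimal sketch using the 4GB figure Peter quotes (value given in bytes):

# /boot/loader.conf
vfs.zfs.arc_max="4294967296"      # cap the ARC at 4GB; takes effect at the next boot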

On Tue, 17 Apr 2012, Peter Ross wrote:
On Tue, 17 Apr 2012, Chris Samuel wrote:
On 16/04/12 20:12, Russell Coker wrote:
I was using ZFSonLinux (not the FUSE version) for backups (as well as two external USB drives with btrfs and ext4), but the last couple of times I ran the rsync ZFS OOM'd my machine with 8GB of RAM.. not impressed. :-(
Under FreeBSD I restrict ARC so it eats only part of the memory (I set it to 4GB when I had 8GB)
According to http://comments.gmane.org/gmane.linux.file-systems.zfs.user/2172 under Linux it has to be put in here:
# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=2147483648
That may have prevented your OOM.
Just remembered this one:

vfs.zfs.prefetch_disable=1

(FreeBSD sysctl, I assume ZFSonLinux has the same option.) It also eases the memory pressure.

I had a problem with sending data over the wire (a large scp) while I had a busy VirtualBox (Zimbra, our mail server) on it, so I had to tune the network as well:

net.graph.maxdata=65536

I can imagine that you could have similar problems as well. I found indicators in the netstat errors.

Regards
Peter
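For anyone wanting to try the same thing on both platforms, a sketch of where the prefetch setting lives; the ZFSonLinux parameter name is assumed to be the equivalent module option:

# FreeBSD: /boot/loader.conf
vfs.zfs.prefetch_disable=1

# ZFSonLinux: /etc/modprobe.d/zfs.conf
options zfs zfs_prefetch_disable=1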

On Tuesday 17 April 2012 10:44:14 Peter Ross wrote:
Under FreeBSD I restrict ARC so it eats only part of the memory (I set it to 4GB when I had 8GB)
Aha, thanks very much!

--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

This email may come with a PGP signature as a file. Do not panic.
For more info see: http://en.wikipedia.org/wiki/OpenPGP

On Fri, Apr 20, 2012 at 12:54:48PM +1000, Chris Samuel wrote:
On Tuesday 17 April 2012 10:44:14 Peter Ross wrote:
Under FreeBSD I restrict ARC so it eats only part of the memory (I set it to 4GB when I had 8GB)
Aha, thanks very much!
here's how to do it on linux:

$ cat /etc/modprobe.d/zfs.conf
# use minimum 1GB and maximum of 4GB RAM for ZFS ARC
options zfs zfs_arc_min=1073741824 zfs_arc_max=4294967296

craig

--
craig sanders <cas@taz.net.au>

BOFH excuse #84:
Someone is standing on the ethernet cable, causing a kink in the cable
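To confirm the limits actually took effect after reloading the module, the live ARC figures can be read from the kstat file ZFSonLinux exposes:

$ awk '$1 == "c_min" || $1 == "c_max" || $1 == "size"' /proc/spl/kstat/zfs/arcstats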

On Friday 20 April 2012 13:03:00 Craig Sanders wrote:
here's how to do it on linux:
Thanks - just updated to that, very handy!

--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

This email may come with a PGP signature as a file. Do not panic.
For more info see: http://en.wikipedia.org/wiki/OpenPGP

On 17/04/12 10:14, Chris Samuel wrote:
On 16/04/12 20:12, Russell Coker wrote:
ZFS seems to be a lot more complex than BTRFS.
I was using ZFSonLinux (not the FUSE version) for backups (as well as two external USB drives with btrfs and ext4), but the last couple of times I ran the rsync ZFS OOM'd my machine with 8GB of RAM.. not impressed. :-(
Did you have the dedup feature enabled? I found that one chewed up HUGE amounts of memory (as is well-documented on the internet).
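For anyone checking whether dedup is the culprit on their own pool, a quick sketch (the pool name "tank" is assumed):

zfs get dedup tank            # shows whether dedup is enabled
zpool status -D tank          # dedup table (DDT) summary - its size is what eats RAM
zfs set dedup=off tank        # only affects newly written data; existing DDT entries remain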

On Tuesday 17 April 2012 10:49:49 Toby Corkindale wrote:
Did you have the dedup feature enabled?
Bingo - yes I had, from the early ZFS/FUSE days.

--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

This email may come with a PGP signature as a file. Do not panic.
For more info see: http://en.wikipedia.org/wiki/OpenPGP

On Tue, Apr 17, 2012 at 10:14:14AM +1000, Chris Samuel wrote:
On 16/04/12 20:12, Russell Coker wrote:
ZFS seems to be a lot more complex than BTRFS.

I was using ZFSonLinux (not the FUSE version) for backups (as well as two external USB drives with btrfs and ext4), but the last couple of times I ran the rsync ZFS OOM'd my machine with 8GB of RAM.. not impressed. :-(
instead of arc limits you could also try
https://github.com/behlendorf/zfs/tree/vm
"Integrate ARC more tightly with Linux"

cheers,
robin

On 16/04/12 20:12, Russell Coker wrote:
ZFS seems to be a lot more complex than BTRFS. While having more features is a good thing (BTRFS seems to be missing some sysadmin friendly features) complexity means more testing and more potential for making mistakes.
I've played around with ZFS and btrfs quite a bit by now, and I'm still not happy with btrfs. It's just too easy to get a total kernel segfault on btrfs -- whereas ZFS just keeps chugging away reliably.

I suspect this is because btrfs is still basically pre-alpha software -- it's in active development, there's been no attempt to feature-freeze and debug it, and there won't be for some time. However ZFS has been around for years on other platforms, and just needed to be ported to Linux's VFS system. That code has been in a kind of beta stage for some time now, and it's bedded down fairly well.

Toby

On Tuesday 17 April 2012 11:04:35 Toby Corkindale wrote:
I suspect this is because btrfs is still basically pre-alpha software -- it's in active development, there's been no attempt to feature-freeze and debug it, and there won't be for some time.
Agree on the pre-alpha status and the lack of a feature freeze, but I do disagree on the debugging part: there is a lot of work going on to try and shake out bugs, including a set of infrastructure recently included that you can compile in to try and catch cases where the filesystem would be left in an inconsistent state on disk should the power fail. There is, of course, a performance penalty for that. :-)

But yes, there are still a lot of bugs, and as we've seen with 3.3 there can be regressions between kernel releases.

--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

This email may come with a PGP signature as a file. Do not panic.
For more info see: http://en.wikipedia.org/wiki/OpenPGP

On 17/04/12 11:25, Trent W. Buck wrote:
Russell Coker wrote:
One of my clients needs to reliably store terabytes of data which is mostly comprised of data files in the 10MB - 15MB size range. The data files will almost never be re-written [...]
Will they ever be re-read? ;-)
For that matter, do you really need all the files stored on a single machine? No matter what level of RAID you use, you still have a single point of failure in the server hardware. Why not pick up several smaller servers and use a cluster filesystem instead?

On Tue, 17 Apr 2012, Toby Corkindale <toby.corkindale@strategicdata.com.au> wrote:
On 17/04/12 11:25, Trent W. Buck wrote:
Russell Coker wrote:
One of my clients needs to reliably store terabytes of data which is mostly comprised of data files in the 10MB - 15MB size range. The data files will almost never be re-written [...]
Will they ever be re-read? ;-)
Yes. There will be batch jobs that process them.
For that matter, do you really need all the files stored on a single machine? No matter what level of RAID you use, you still have a single point of failure in the server hardware.
Why not pick up several smaller servers and use a cluster filesystem instead?
A cluster filesystem requires significantly more work to administer. It needs to have at least 3 servers to avoid split-brain problems and will either involve performance loss (in the case of a cluster where the blocks are shared via Ethernet) or significant extra expense (in the case of shared block devices).

http://www.dell.com/au/business/p/poweredge-t110-2/pd
http://www.dell.com/au/business/p/poweredge-t610/pd

A cluster filesystem also doesn't solve any real problems. A Dell server which can handle 8 disks (all that is needed for the next few years) is not particularly expensive. A PowerEdge T610 costs $2100 plus about $200 per disk (or a lot more per disk if you buy disks from Dell). A PowerEdge T110 costs $700 and can handle 4 disks. If you had three of the T110 servers in a cluster they would cost the same as a single T610 and provide the same effective capacity, however you would purchase 12 disks instead of 8 and thus the hardware cost would involve an additional 4 disks ($800 or more if you buy disks from Dell). Then there's the cost of sysadmin work.

Now one thing that a cluster can solve is a server failing at some unexpected time (*). But in this case the use is going to be 9-5 operation. Batch jobs will be run overnight, but getting a cluster fail-over event to not interrupt the batch jobs would be more effort than it's worth.

Getting a system that can handle 8*SATA disks isn't THAT difficult. In the unlikely event that a Dell server entirely broke and Dell couldn't fix it fast enough it would be OK to install a white-box system as a temporary replacement. So the failure recovery case will not be preserving 24*7 operation, but not wasting too much 9-5 time after the problem has been discovered.

http://etbe.coker.com.au/2010/08/04/clusters-dont-work/

(*) In theory; in practice I haven't observed that happening. See the above URL.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
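Spelling out the cost comparison with the prices quoted above (assuming roughly $200 per disk, as stated):

    1 x T610:  $2100 + 8 x $200       = $3700  (8 disks)
    3 x T110:  3 x $700 + 12 x $200   = $4500  (12 disks for the same usable capacity)
    difference: ~$800 in extra disks, plus the extra sysadmin work for the cluster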

Hi Russell, On Tue, 17 Apr 2012, Russell Coker wrote:
A Dell server which can handle 8 disks (all that is needed for the next few years) is not particularly expensive. A PowerEdge T610 costs $2100 plus about $200 per disk (or a lot more per disk if you buy disks from Dell). A PowerEdge T110 costs $700 and can handle 4 disks. .. But in this case the use is going to be 9-5 operation. Batch jobs will be run overnight, but getting a cluster fail-over event to not interrupt the batch jobs would be more effort than it's worth. Getting a system that can handle 8*SATA disks isn't THAT difficult. In the unlikely event that a Dell server entirely broke and Dell couldn't fix it fast enough it would be OK to install a white-box system as a temporary replacement. So the failure recovery case will not be preserving 24*7 operation, but not wasting too much 9-5 time after the problem has been discovered.
I have a pair of Dell servers (T410) in-house that run, amongst other things (as the most storage-hungry part), a Samba server for a company (60 Windows clients) from 9 to 5.

It's not terabytes yet, it's slightly below 1, and it is using two disks in a mirror. (Not exactly: I have a small base system on a UFS and swap partition, mainly to avoid deadlocks related to swap on ZFS - BTRFS does not support swap on it at all yet, AFAIK.)

I use ZFS and send snapshots (zfs send/receive) to a standby Samba jail, and from there weekly backups go to a set of external USB disks. The failover is manual (bringing up the standby jail) but wasn't needed over the year I have had it in place now.

I do the things mentioned before (especially restricting ARC to half of the memory and disabling prefetching) and it works.

If needed, I would check what ZFSonLinux does in regard to ACLs in combination with NFS (which version?) and ZFS. I am just not informed about it at the moment of writing.

Regards
Peter
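The snapshot-to-standby arrangement described here boils down to something like the following sketch (the dataset, snapshot and host names are invented; the real setup may differ):

# on the primary: take a new snapshot and send only the delta since the previous one
zfs snapshot tank/samba@2012-04-17
zfs send -i tank/samba@2012-04-16 tank/samba@2012-04-17 | \
    ssh standby zfs receive -F backup/samba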

On 16 April 2012 20:12, Russell Coker <russell@coker.com.au> wrote: <...>
ZFS seems to be a lot more complex than BTRFS. While having more features is a good thing (BTRFS seems to be missing some sysadmin friendly features) complexity means more testing and more potential for making mistakes.
Of course it might turn out that RAID-5 is the killer issue. Servers start becoming a lot more expensive if you want more than 8 disks and even 6 disks is a significant price point. An 8 disk RAID-5 gives something like 21TB usable space vs 12TB on a RAID-10 and a 6 disk RAID-5 gives about 15TB vs 9TB on a RAID-10.
Anything else I should consider?
Depending on your client's budget, you might also want to consider a refurbished Sun X4500 (Thumper) with 24/48TB of storage.

With ZFS, you could use two disks for your root pool, and the remaining 46 for 4x raidz2 pools, which would give the best compromise between redundancy and performance.

Running Solaris would give you the added benefit that it "just works", with boot/root on ZFS, and integrated NFS and iSCSI.

--
Joel Shea <jwshea@gmail.com>
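A layout along those lines would normally be built as one pool containing several raidz2 vdevs. A compressed sketch of the syntax only: the Solaris-style cNtNdN disk names are placeholders, and a real X4500 would use four vdevs of 11-12 disks each rather than the two short ones shown.

zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
zpool status tank
zfs set sharenfs=on tank          # NFS sharing integrated with ZFS on Solaris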
participants (10): Chris Samuel, Colin Fee, Craig Sanders, Joel W Shea, Peter Ross, Robin Humble, Russell Coker, Tim Connors, Toby Corkindale, Trent W. Buck