
FYI: the thread titled "What is the current status of native ZFS on Linux?"

  http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss/browse_thread/th...

includes this paragraph from

  http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss/msg/9bd84a5e8d93...

by Richard Yao:

  "As far as performance goes, ZFSOnLinux raidz2 outperforms a combination
  of MD RAID 6, LVM and ext4 on 6 Samsung HD204UI drives that I own. ZFS
  raidz2 has 220 MB/sec write performance in writing a 4GB file while the
  combination of MD RAID 6, LVM and ext4 only managed 20MB/sec."

Somebody in the thread mentioned that they get abysmal performance with
zvols (chunks of a zpool allocated to be used as a "disk", similar to an
LVM logical volume). That really surprises me; I've had fantastic
performance from them. All of my VMs are now on zvols, and I've done a
lot of testing of ZFS in a VM with zpools made up of lots of 100-200MB
zvols (quick sketches of both below, after my sig).

The same person also mentioned stability problems until they changed from
the onboard drive controller to an LSI controller. Unfortunately, they
don't mention what motherboard or kind of ports, or what kind of LSI
controller (I'd guess one of the cheap HBAs like the 9211-8i or the IBM
M1015, as they are very popular for running ZFS on Linux, OpenSolaris,
and FreeBSD).

craig

ps: speaking of cheap HBAs, I bought a few of the IBM M1015s from eBay.
They took only a few days to arrive. I reflashed one of them to IT mode
on a spare machine and then replaced my Supermicro card with it (the
Supermicro card I then also reflashed to IT mode in the spare machine; I
didn't want to risk that until now in case I bricked the card. It will go
into my MythTV box to replace the 4-port Adaptec 1430SA).

Haven't noticed any difference at all, but that's good. I wasn't
expecting any performance difference, just more relaxed timeouts for the
consumer-grade WD Green drives I'm using. When prices come down enough
I'll replace them with 3TB drives, probably Hitachi or Seagate.

--
craig sanders <cas@taz.net.au>

BOFH excuse #126:

it has Intel Inside
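
For anyone who hasn't played with zvols, here's roughly what I mean by
both of the above. Pool/dataset names, sizes, and the kvm command line
are made-up examples rather than my actual setup:

  # a zvol is just a block device carved out of an existing pool
  # (here a pool called "tank"); it shows up under /dev/zvol/
  zfs create -V 20G tank/vms/testvm-disk0

  # hand it to a VM as its disk, e.g. with kvm/qemu:
  kvm -m 1024 -drive file=/dev/zvol/tank/vms/testvm-disk0,if=virtio

  # for cheap experiments with pool layouts, a pile of tiny zvols works
  # too. I pass them to a test VM as extra virtual disks, but you can
  # also build a throwaway pool from them directly on the host:
  for i in 0 1 2 3 4 5; do zfs create -V 200M tank/test/disk$i; done
  zpool create -f scratch raidz2 /dev/zvol/tank/test/disk{0..5}

Nothing clever about it, but it makes trying out raidz layouts and
disk failure/replacement scenarios very cheap.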
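
And if anyone wants to reproduce the kind of write test quoted above: I
don't know exactly what Richard ran, but a large sequential write like
that is typically just something along these lines (paths are examples
only):

  # if compression is enabled on the test dataset, a file of zeroes will
  # give absurdly optimistic numbers, so turn it off first:
  zfs set compression=off tank/test

  # write a 4GB file, syncing at the end so the result isn't just the
  # page cache; GNU dd reports the throughput when it finishes
  dd if=/dev/zero of=/tank/test/bigfile bs=1M count=4096 conv=fdatasync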