
I originally sent this directly to James; resending it to the list.
On Fri, Apr 12, 2013 at 3:17 PM, James Harper <james.harper@bendigoit.com.au> wrote:
>>> Online resize/reconfigure
>> Both btrfs and ZFS offer this.
> Can it seamlessly continue over a reboot? Obviously it can't progress while the system is rebooting the way a hardware RAID can, but I'd hope it would pick up where it left off automatically.
Yes it does.
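
As a rough illustration (my own sketch, not from the thread: the pool name 'tank', the device paths, and driving the zpool CLI from Python are all assumptions), growing a ZFS mirror online by swapping in larger disks looks something like the following. The resilver that 'zpool replace' starts is tracked in the pool itself, so if the box reboots part-way through, ZFS picks it up again on the next import:

import subprocess
import time

POOL = "tank"                         # hypothetical pool name
REPLACEMENTS = [                      # hypothetical old -> new (larger) disks
    ("/dev/sdb", "/dev/sdd"),
    ("/dev/sdc", "/dev/sde"),
]

def zpool(*args):
    # Run a zpool subcommand, raise if it fails, return its output.
    return subprocess.run(["zpool", *args], check=True,
                          capture_output=True, text=True).stdout

# Let the pool grow automatically once every mirror member is larger.
zpool("set", "autoexpand=on", POOL)

for old, new in REPLACEMENTS:
    # Attach the new disk alongside the old one and copy the data over;
    # the old disk is detached automatically when the resilver completes.
    zpool("replace", POOL, old, new)
    # Wait for this resilver before touching the next disk.
    while "resilver in progress" in zpool("status", POOL):
        time.sleep(60)

print(zpool("list", POOL))            # the new, larger capacity shows up here

(btrfs has the analogous 'btrfs replace start' and 'btrfs filesystem resize max' commands.)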
> This is where a lot of people get this wrong. Once the BIOS has succeeded in reading the boot sector from a boot disk, it's committed. If the boot sector reads okay (even after a long time on a failing disk) but anything between the boot sector and the OS fails, your boot has failed. This 'anything between' includes the GRUB bootstrap, the Xen hypervisor, the Linux kernel, and the initramfs, so it's a substantial amount of data to read from a disk that may be on its last legs. A good hardware RAID will have long since failed the disk by this point, and booting will succeed.
> My last remaining reservation before going ahead with some testing: is there an equivalent of clvm for ZFS, or is that even the right approach with ZFS? My main server cluster is:
> - 2 machines, each running 2 x 2TB disks with DRBD, with the primary exporting the whole disk as an iSCSI volume
> - 2 machines, each importing the iSCSI volume, running LVM (clvm) on top, and using the LVs as backing stores for Xen VMs
> How would this best be done using ZFS?
If I were building new infrastructure today with 2 or more machines hosting VMs, I would probably look at using Ceph as the storage layer for the virtual machines. This would provide distributed, mirrored storage that is accessible from all machines; all machines could then act as both storage and VM hosts. Refs: http://www.slideshare.net/xen_com_mgr/block-storage-for-vms-with-ceph and http://ceph.com/
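
To make that concrete, here's a minimal sketch using the python-rados/python-rbd bindings that ship with Ceph; the config path, the 'rbd' pool, the image name, and the size are my assumptions, not anything from this thread. Each guest gets an RBD image instead of a DRBD-backed LV, and every host in the cluster can open it:

import rados
import rbd

CONF = "/etc/ceph/ceph.conf"   # hypothetical path to the cluster config
POOL = "rbd"                   # hypothetical RADOS pool holding VM images
IMAGE = "vm01-disk0"           # hypothetical image name for one guest
SIZE = 20 * 1024 ** 3          # 20 GiB, thin provisioned

cluster = rados.Rados(conffile=CONF)
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    try:
        # Create the block image; any host with access to the cluster can
        # attach it, which is what makes live migration between hosts easy.
        rbd.RBD().create(ioctx, IMAGE, SIZE)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

The guest would then see the image either through the kernel rbd driver ('rbd map' gives you a /dev/rbdX block device to hand to Xen) or through qemu's librbd support, which is roughly what would replace the DRBD + iSCSI + clvm stack above.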