Re: ZFS vs RAID (was gpt and grub)

Quoting "Craig Sanders" <cas@taz.net.au>
On Fri, Apr 12, 2013 at 03:31:20PM +1000, Kevin wrote:
On Fri, Apr 12, 2013 at 3:17 PM, James Harper <james.harper@bendigoit.com.au> wrote:
My last remaining reservation before going ahead with some testing is: is there an equivalent of clvm for zfs? Or is that even the right approach for zfs? My main server cluster is:
- 2 machines, each running 2 x 2TB disks with DRBD, with the primary exporting the whole disk as an iSCSI volume
- 2 machines, each importing the iSCSI volume, running LVM (clvm) on top, and using the LVs as backing stores for Xen VMs (roughly as sketched below)
How would this best be done using zfs?
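(For reference, the existing non-ZFS stack James describes would look roughly like this on the clvm/Xen hosts; the volume group, LV and VM names here are hypothetical.)

    # on one of the clvm/Xen hosts, carve an LV out of the clustered VG
    # that sits on the imported iSCSI volume (hypothetical names)
    lvcreate -L 20G -n vm1-disk vg_iscsi

    # Xen domU config: use that LV as the backing store
    # disk = ['phy:/dev/vg_iscsi/vm1-disk,xvda,w']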
short answer: zfs doesn't do that.
in theory you could export each disk individually with iscsi and build ZFS pools (two mirrored pools). if that actually worked, you'd have to do a lot of manual stuffing around to make sure that the pools were only in use on one machine at a time, and more drudgery to handle fail-over events. seems like a fragile PITA and not worth the bother, even if it could be made to work.
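(A rough sketch of what that would look like, assuming the exported disks show up as local block devices on the importing host; the device paths and pool name are hypothetical. The export/import step is exactly the manual fail-over drudgery mentioned above, and importing the same pool on two hosts at once will corrupt it.)

    # build a mirrored pool from two iSCSI-backed devices (hypothetical paths)
    zpool create tank mirror \
        /dev/disk/by-path/ip-192.168.1.1:3260-iscsi-iqn.host1-lun-0 \
        /dev/disk/by-path/ip-192.168.1.2:3260-iscsi-iqn.host2-lun-0

    # manual fail-over: make sure only one machine has the pool imported
    zpool export tank       # on the old node, if it is still alive
    zpool import -f tank    # on the new node; -f is needed if the old node died uncleanly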
James, you want ZFS mirrored over the net as the backend for VM storage (virtual disks?). In case you really want that: I use a "poor man's version" similar to what Russell described (zfs send/receive). I run Jails and VirtualBoxes (well, only one of the latter here..) and use zfs snapshot/send/receive to send all filesystems of the active Jails/VMs to the other side (a rough command sketch follows below). If one machine breaks, I bring them up on the "other side".

To mirror them without delay, you can create, on both sides, a volume on top of ZFS (zfs create -V 5gb MyPool/MyVM), export them over iSCSI and mirror them. The fail-over between virtual machines is a problem independent of the sharing on the filesystem layer; a semaphore on the shared volume can be used by a cluster solution to manage it.

BTW: You can put any filesystem, including a clustered one, inside these ZFS volumes, and combine the benefits of ZFS volume management with the benefits of the filesystem "inside".

Regards
Peter
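(A minimal sketch of the snapshot/send/receive replication described above, assuming hypothetical pool, dataset and host names.)

    # on the active host: replicate all Jail/VM filesystems to the standby
    zfs snapshot -r MyPool/jails@rep1
    zfs send -R MyPool/jails@rep1 | ssh standby zfs receive -Fdu MyPool

    # subsequent runs only send the changes since the previous snapshot
    zfs snapshot -r MyPool/jails@rep2
    zfs send -R -i @rep1 MyPool/jails@rep2 | ssh standby zfs receive -Fdu MyPool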