
I have a Debian/Squeeze system that's running ZFS quite nicely. Now I've just tried to install ZFS via the http://zfsonlinux.org/ method on a Debian/Testing system. I'm deploying a new server in the next week or so and it'll ramp up to full production over the course of the month after that, so it seems best to start with Wheezy even a little before the full release to avoid a painful upgrade later.

The code doesn't currently build with kernel 3.2.0. Has anyone got it to work?

Also has anyone got root on ZFS to work with Debian? ZFS wants to partition the disks itself so it can use 4K sectors and properly align things. I don't want to mess with that and the server in question will have an internal USB port that's ideal for booting such configurations. So I want to get a USB device with GRUB and an initramfs that starts ZFS for the root filesystem. I have already had a hack at it on Squeeze, but I wasn't able to get it going in the time I had available.

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/

On 05/07/12 21:24, Russell Coker wrote:
I have a Debian/Squeeze system that's running ZFS quite nicely.
Now I've just tried to install ZFS via the http://zfsonlinux.org/ method on a Debian/Testing system. I'm deploying a new server in the next week or so and it'll ramp up to full production over the course of the month after that, so it seems best to start with Wheezy even a little before the full release to avoid a painful upgrade later.
The code doesn't currently build with kernel 3.2.0. Has anyone got it to work?
Works fine for me on Ubuntu 12.04 with kernel 3.2.0.

On Thu, Jul 05, 2012 at 09:24:44PM +1000, Russell Coker wrote:
I have a Debian/Squeeze system that's running ZFS quite nicely.
Now I've just tried to install ZFS via the http://zfsonlinux.org/ method on a Debian/Testing system. I'm deploying a new server in the next week or so and
I recommend not using that method. Instead, re-compile the Ubuntu packages for Debian. Make sure that dkms and the kernel header package for your current kernel are installed, then:

    mkdir /usr/local/src/zfs/
    cd /usr/local/src/zfs/
    git clone https://github.com/dajhorn/pkg-spl.git
    git clone https://github.com/dajhorn/pkg-zfs.git

    cd pkg-spl
    # optionally run 'dch -i' to change the version number
    dpkg-buildpackage -b -us -uc
    cd ..

Install the freshly built spl and spl-dkms packages. Then:

    cd pkg-zfs
    # edit debian/control: search for the dependency on zfs-grub, change it to grub
    # optionally run 'dch -i' to change the version number
    dpkg-buildpackage -b -us -uc
    cd ..

Install the freshly built zfs packages: libnvpair1, libuutil1, libzfs1, libzpool1, zfs-dkms and zfsutils.

The last time I upgraded ZFS, I ended up with:

      58686 Jul  2 12:32 libnvpair1_0.6.0.65-0ubuntu1_amd64.deb
      67570 Jul  2 12:32 libuutil1_0.6.0.65-0ubuntu1_amd64.deb
     142480 Jul  2 12:32 libzfs1_0.6.0.65-0ubuntu1_amd64.deb
     980710 Jul  2 12:32 libzfs-dev_0.6.0.65-0ubuntu1_amd64.deb
     435774 Jul  2 12:32 libzpool1_0.6.0.65-0ubuntu1_amd64.deb
      30172 Jul  2 11:34 spl_0.6.0.65-0ubuntu1_amd64.deb
     534526 Jul  2 11:34 spl-dkms_0.6.0.65-0ubuntu1_all.deb
       1229 Jul  2 11:34 spl-linux_0.6.0.65-0ubuntu1_amd64.changes
    1972940 Jul  2 12:32 zfs-dkms_0.6.0.65-0ubuntu1_amd64.deb
      31264 Jul  2 12:32 zfs-initramfs_0.6.0.65-0ubuntu1_amd64.deb
       3702 Jul  2 12:32 zfs-linux_0.6.0.65-0ubuntu1_amd64.changes
     301284 Jul  2 12:32 zfsutils_0.6.0.65-0ubuntu1_amd64.deb

Note: I have no idea if the above works if you have root on ZFS. I believe Debian's grub package may support ZFS, but I've never really cared enough to find out. The above method definitely works for non-root filesystems.
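Once the packages are installed, a quick sanity check that the DKMS modules built and load cleanly might look like this (a minimal sketch using standard dkms and module tools; nothing is assumed beyond the packages above):

    # confirm the spl and zfs modules were built for the running kernel
    dkms status
    # load the module and check that the userland tools can talk to it
    modprobe zfs
    zpool status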
The code doesn't currently build with kernel 3.2.0. Has anyone got it to work?
Works for me.

    Linux ganesh 3.2.0-2-amd64 #1 SMP Mon Jun 11 17:24:18 UTC 2012 x86_64 GNU/Linux

    ii  libnvpair1  0.6.0.65-0ubuntu1  amd64  Solaris name-value library for Linux
    ii  libuutil1   0.6.0.65-0ubuntu1  amd64  Solaris userland utility library for Linux
    ii  libzfs1     0.6.0.65-0ubuntu1  amd64  Native ZFS filesystem library for Linux
    ii  libzpool1   0.6.0.65-0ubuntu1  amd64  Native ZFS pool library for Linux
    ii  spl         0.6.0.65-0ubuntu1  amd64  Solaris Porting Layer utilities for Linux
    ii  spl-dkms    0.6.0.65-0ubuntu1  all    Solaris Porting Layer kernel modules for Linux
    ii  zfs-dkms    0.6.0.65-0ubuntu1  amd64  Native ZFS filesystem kernel modules for Linux
    ii  zfsutils    0.6.0.65-0ubuntu1  amd64  Native ZFS management utilities for Linux
Also has anyone got root on ZFS to work with Debian? ZFS wants to partition the disks itself so it can use 4K sectors and properly align things.
Never tried it. I have always intended to experiment on a VM just so I understand how it works and the potential problems, but I never got around to it :)
I don't want to mess with that and the server in question will have an internal USB port that's ideal for booting such configurations. So I want to get a USB device with GRUB and an initramfs that starts ZFS for the root filesystem.
Good plan. I planned to do that on one system, but ended up getting a 120GB SSD instead, and partitioned it to use about half for boot and root partitions, and the remainder for a small log partition and a large cache partition.

SSDs are down to about $1/GB now, about half what they were when I got the 120GB SSD for that server, so I'd probably use a 240GB today, or maybe two 120GBs.

craig

-- 
craig sanders <cas@taz.net.au>
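For reference, attaching SSD partitions as a ZFS intent log and read cache, as described above, is one command per device. A hedged sketch (the pool name 'tank' and the partition paths are placeholders):

    # add a small partition as the ZIL (intent log)
    zpool add tank log /dev/disk/by-id/ata-SSD-part3
    # add a large partition as an L2ARC read cache
    zpool add tank cache /dev/disk/by-id/ata-SSD-part4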

Craig Sanders wrote:
Note: I have no idea if the above works if you have root on ZFS. I believe Debian's grub package may support ZFS, but I've never really cared enough to find out. The above method definitely works for non-root filesystems.
(Correct me if I'm wrong, but) grub support only matters if /boot is on ZFS. If /boot is (say) ext2 and / is ZFS, then it should be fine providing the ramdisk supports ZFS.
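With that split, the kernel just needs to be told which dataset holds the root filesystem. A hedged sketch of a GRUB 2 menu entry fragment (root=ZFS= is the parameter zfs-initramfs conventionally accepts, but check the package's documentation; 'tank/root' and the kernel version are placeholders):

    # /boot on its own ext2 partition (e.g. the USB key), / on ZFS
    linux   /vmlinuz-3.2.0-2-amd64 root=ZFS=tank/root ro
    initrd  /initrd.img-3.2.0-2-amd64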
I don't want to mess with that and the server in question will have an internal USB port that's ideal for booting such configurations. So I want to get a USB device with GRUB and an initramfs that starts ZFS for the root filesystem.
Good plan.
Don't forget to back up the /boot onto the HDDs and/or a second USB key, though :-)
SSDs are down to about $1/GB now, about half what they were when I got the 120GB SSD for that server, so I'd probably use a 240GB today, or maybe two 120GBs.
Hm, not quite there yet (sans sales):

    $ msy | LC_ALL=C w3m -dump -T text/html | foldr egrep -- 2.5 [0-9]+G SSD SATA | sort -nk3 -t'|'
    |(NSW Auburn Clearance) Kingston (SV100S2/64GB) 2.5" 64GB SATA2 SSD HDD  |50  |
    |Kingston SVP200S3/60G 60GB SSDNOW V+200 SATA3 2.5" SSD HDD              |81  |
    |(NSW Auburn Clearance) Kingston (SV100S2/128GB) 2.5" 128GB SATA2 SSD    |85  |
    |Patriot 2.5 60GB PYRO Sanforce-SF-2281 2.5 inch SATA3 SSD Solid         |85  |
    |Kingston SVP200S3/120G 120GB SSDNOW V+200 SATA3 2.5" SSD HDD            |109 |
    |Patriot 2.5 120GB PYRO Sanforce-SF-2281 2.5 inch SATA3 SSD Solid        |129 |
    |Kingston 2.5 inch 120GB HyperX SATA3 SSD Solid State Drive SH100S3/     |145 |
    |Patriot 2.5 120GB Wildfire Sanforce-SF-2281 2.5 inch SATA3 SSD          |195 |
    |OCZ Agility4 256GB SATA3 SSD AGT4-25SAT3-256G                           |265 |

On Sat, Jul 07, 2012 at 05:10:38PM +1000, Trent W. Buck wrote:
Craig Sanders wrote:
Note: I have no idea if the above works if you have root on ZFS. I believe Debian's grub package may support ZFS, but I've never really cared enough to find out. The above method definitely works for non-root filesystems.
(Correct me if I'm wrong, but) grub support only matters if /boot is on ZFS. If /boot is (say) ext2 and / is ZFS, then it should be fine providing the ramdisk supports ZFS.
Quite likely. I haven't cared enough about root-on-ZFS to find out :)

Hmmm. Having /boot on ext2 would partly defeat the benefit of having root on ZFS: you don't get to give the entire disk to ZFS (needed for zfsonlinux to disable barriers).
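Giving ZFS whole disks also lets you force 4K-sector alignment at pool creation time. A hedged sketch (device paths and the pool name are placeholders; ashift=12 is the usual setting for 4K-sector drives on zfsonlinux):

    # whole-disk mirror: zfsonlinux partitions and aligns the disks itself
    zpool create -o ashift=12 tank mirror \
        /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2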
SSDs are down to about $1/GB now, about half what they were when I got the 120GB SSD for that server, so I'd probably use a 240GB today, or maybe two 120GBs.
Hm, not quite there yet (sans sales):
$ msy | LC_ALL=C w3m -dump -T text/html | foldr egrep -- 2.5 [0-9]+G SSD SATA | sort -nk3 -t'|' [...]
Check their parts.pdf - the last one I downloaded on 29/6 had the Sandisk SSD G25 120GB for $129, 240GB for $249, and 480GB for $489. Just fetched the latest - today's PARTS.PDF has them as:

    Sandisk SSD Extreme   120G / 240G / 480G   $129 / $240 / $480

I googled the brand & model number and found a few fairly good reviews of them. Also spotted in today's pdf that they have OCZ Agility3 120GB & 240GB models for about $0.90/GB:

    OCZ Agility3 SATA3    60G / 120G / 240G    $65 / $109 / $209

craig

-- 
craig sanders <cas@taz.net.au>

BOFH excuse #376: Budget cuts forced us to sell all the power cords for the servers.

On Sun, 8 Jul 2012, Craig Sanders <cas@taz.net.au> wrote:
(Correct me if I'm wrong, but) grub support only matters if /boot is on ZFS. If /boot is (say) ext2 and / is ZFS, then it should be fine providing the ramdisk supports ZFS.
Quite likely. I haven't cared enough about root-on-ZFS to find out :)
Yes. The difficulty with ZFS is that you don't mount a block device on a mountpoint; you have the ZFS tools do the creation and mounting for you. So you create a pool named "tank" and a filesystem named "root", and it is automatically mounted as /tank/root alongside /tank/isos, etc.
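To make that concrete, a minimal sketch of the workflow (pool, dataset, and device names here are illustrative only):

    # create the pool; ZFS formats the devices and mounts it at /tank
    zpool create tank mirror /dev/sdb /dev/sdc
    # datasets are created and mounted by the zfs tool -- no mkfs, no fstab entry
    zfs create tank/root
    zfs create tank/isos
    # each dataset's mountpoint is a property, e.g.:
    #   zfs set mountpoint=/export/isos tank/isos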
Hmmm. Having /boot on ext2 would partly defeat the benefit of having root on ZFS: you don't get to give the entire disk to ZFS (needed for zfsonlinux to disable barriers).
No, you just use a USB stick or something to boot.

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/

On Sun, Jul 8, 2012 at 1:27 AM, Russell Coker <russell@coker.com.au> wrote:
On Sun, 8 Jul 2012, Craig Sanders <cas@taz.net.au> wrote:
(Correct me if I'm wrong, but) grub support only matters if /boot is on ZFS. If /boot is (say) ext2 and / is ZFS, then it should be fine providing the ramdisk supports ZFS.
Quite likely. I haven't cared enough about root-on-ZFS to find out :)
Yes. The difficulty with ZFS is that you don't mount a block device on a mountpoint; you have the ZFS tools do the creation and mounting for you. So you create a pool named "tank"
Which brings up something that has made me curious. Why does everyone name their zfs pool (and sometimes filesystem too) 'tank'? Is it just because that's what all the examples do, and people cut and paste, or don't want to deviate? Or superstition? Or tradition?

Because when I gave ZFS a test, I refused, and named them something else, and not long after I had a hardware failure. Could this have been caused by an angry ZFS ghost who wasn't happy with my tankless system?

/ Brett

On Sun, Jul 08, 2012 at 08:54:40AM +1000, Brett Pemberton wrote:
Why does everyone name their zfs pool (and sometimes filesystem too) 'tank'?
Is it just because that's what all the examples do, and people cut and paste, or don't want to deviate?
Most likely that. It's what I did with my first experiments with ZFS. The name doesn't matter, and it's less hassle working through examples if you don't have to translate the name all the time.

When I'd learnt what I needed to know, I blew it away and created 'export' and 'backup' pools: /export for tradition, because parts of it are exported by NFS, and mostly because it's a usefully generic name (not having to modify lots of scripts to s/export/tank/g was influential too); /backup for functionally-descriptive naming.
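Incidentally, a pool can also be renamed without destroying it, by exporting and re-importing it under a new name. A brief sketch (pool names are placeholders):

    # rename pool 'tank' to 'export' in place
    zpool export tank
    zpool import tank export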
Because when I gave ZFS a test, I refused, and named them something else, and not long after I had a hardware failure.
Could this have been caused by an angry ZFS ghost who wasn't happy with my tankless system?
Or maybe it was a warning from the SCSI gods - not enough blood sacrifice these days, they're getting pissed off.

craig

-- 
craig sanders <cas@taz.net.au>

On 08/07/2012, at 11:48, Craig Sanders <cas@taz.net.au> wrote:
Or maybe it was a warning from the SCSI gods - not enough blood sacrifice these days, they're getting pissed off.
I, for one, welcome the demise of our SCSI overlords. Once, only after days of troubleshooting, I found out that an array of 12 disks I was numbering with jumpers had jumpers on both the front and back of each disk. Only one set was meant to be used, but having to count binary for 4 bits per jumper block, by 2 blocks per disk, by 12 disks, took ages. Ugh.

On Sun, Jul 08, 2012 at 01:03:50PM +1000, hannah commodore wrote:
Or maybe it was a warning from the SCSI gods - not enough blood sacrifice these days, they're getting pissed off.
I, for one, welcome the demise of our SCSI overlords. Once, only after days of troubleshooting, I found out that an array of 12 disks I was numbering with jumpers had jumpers on both the front and back of each disk. Only one set was meant to be used, but having to count binary for 4 bits per jumper block, by 2 blocks per disk, by 12 disks, took ages. Ugh.
That calls for goats at least, not mere chickens. Ya gotta get the rituals just right.

And lots of people think SCSI "terminators" were just resistor packs you stuck on the end of the chain. Naive fools. Dangerous fools. What part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" do they not understand?

craig

-- 
craig sanders <cas@taz.net.au>

BOFH excuse #96: Vendor no longer supports the product

On Sun, 8 Jul 2012, Brett Pemberton <brett.pemberton@gmail.com> wrote:
Which brings up something that has made me curious. Why does everyone name their zfs pool (and sometimes filesystem too) 'tank'?
Is it just because that's what all the examples do, and people cut and paste, or don't want to deviate?
Using the same name as the examples makes it easier, and it also makes it easier for other people to work out what's happening (any time you see a bunch of big filesystems mounted under /tank you know it's ZFS). Also, tank is known to work; we can predict that some names like "usr" may give bad results, and it's difficult to predict some other possible implications.

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/

On Sunday 08 July 2012 08:54:40 Brett Pemberton wrote:
Which brings up something that has made me curious. Why does everyone name their zfs pool (and sometimes filesystem too) 'tank'?
Mine's called "ZFS", just to be different... :-)

-- 
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

This email may come with a PGP signature as a file. Do not panic.
For more info see: http://en.wikipedia.org/wiki/OpenPGP

On Sun, 8 Jul 2012, Brett Pemberton wrote:
Which brings up something that has made me curious. Why does everyone name their zfs pool (and sometimes filesystem too) 'tank'?
Is it just because that's what all the examples do, and people cut and paste, or don't want to deviate? Or superstition? Or tradition?
Why not? It's not a bad name. Maybe my choice of tank2 and tank3 leaves something to be desired though. If only zfs could deal with different-sized devices like btrfs can, I wouldn't need all those extra tanks.
Because when I gave ZFS a test, I refused, and named them something else, and not long after I had a hardware failure. Could this have been caused by an angry ZFS ghost who wasn't happy with my tankless system?
Yes. I would have thought the battle tank ammunition used against it would have been a dead giveaway that the tanks were offended.

-- 
Tim Connors

Brett Pemberton wrote:
Which brings up something that has made me curious. Why does everyone name their zfs pool (and sometimes filesystem too) 'tank'?
You think that's bad? I was trying AFS once, and I can't remember the details, but everything was under something like /freddy. I asked their IRC channel "why freddy?" and they said "well, that was the first host that ran AFS, and now it is hard-coded throughout the entire codebase, so everyone must refer to it FOR EVER".
participants (8)

- Brett Pemberton
- Chris Samuel
- Craig Sanders
- hannah commodore
- Russell Coker
- Tim Connors
- Toby Corkindale
- Trent W. Buck