From: "Noah O'Donoghue" <noah.odonoghue(a)gmail.com>
> On reading more about ZFS, is it true the latest source code isn't
> available for ZFS? So Sun is withholding new features, fixes etc from the
> codebase?
AFAIK, the open source ZFS "lives" on illumos, FreeBSD and zfsonlinux,
coordinated via OpenZFS.
The development seems to be independent of Sun/Oracle these days. I am
not aware of active contributions from Oracle, but I am not 100% sure.
E.g. the newest open source ZFS versions have "feature flags" instead of the
version numbers used by Oracle.
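The practical difference, as I understand it: an implementation no longer
needs to claim support for "everything up to pool version N", only for the
features actually active on a pool. A rough Python sketch of the idea (purely
illustrative, not actual ZFS code; the feature names are just examples):

    # Illustrative only: old linear version check vs. a feature-flag style
    # check based on the set of features actually in use on the pool.
    SUPPORTED_FEATURES = {"async_destroy", "lz4_compress", "spacemap_histogram"}

    def can_import_by_version(pool_version, supported_version=28):
        # Old style: anything newer than the supported version is opaque.
        return pool_version <= supported_version

    def can_import_by_features(active_features):
        # Feature flags: readable as long as every feature in use on the
        # pool is understood by this implementation.
        return set(active_features) <= SUPPORTED_FEATURES

    print(can_import_by_version(5000))                      # False
    print(can_import_by_features({"lz4_compress"}))         # True
    print(can_import_by_features({"future_raid_feature"}))  # False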
For Russell: Have you seen this?
https://wiki.freebsd.org/ZFS
The first TODO entry is about file(1) and magic.
Regards
Peter
I have been reading the "btrfs/ZFS, sans raid and bitrot" thread and a
number of thoughts and questions spring to mind.
I get the impression that some are looking for a single reliable storage
solution to avoid having to do backups.
Surely this is impossible; I certainly would ______NEVER______ (excuse the
shouting) trust my life to a single system if at all possible. When
you are doing instrument flying training as a pilot you are constantly told
never to rely on a single instrument but to scan all of them and come up with
an overall coherent picture. If you rely in such circumstances on a
single point of failure you __will__ kill yourself.
One is told RAID or any such thing is a reliability strategy __not__ a
backup strategy.
I personally keep all data I consider important on four separate
systems/devices, with one device (in fact duplicate items on differing
technologies) being kept off site. Maintaining this is a bit of a pain but
there is no other way as far as I can see.
The reason for the number of separate backups is that in one instance, in
a large commercial situation, we managed to destroy two backups trying to
restore a system. We only succeeded in the end because I had independently
duplicated one of the backups on another system.
Lindsay
Hi All,
After reading about bitrot and feeling guilty for storing my most valuable
data on cheap drives (although with backups!) I've been thinking about
moving to something more resilient.
My current setup is an Ubuntu laptop with 2 external drives:
1X 2TB ext4 for data storage
1X 3TB ext4 for backup (using Crashplan commercial backup software).
My question is: if I change the first drive to btrfs or ZFS, will I gain
resilience against bitrot?
My understanding is that I need at least 2 drives in RAID 1 to get automatic
healing from bitrot, but if I at least use a filesystem with checksumming
support then I will be able to restore affected files from my Crashplan
backups (which are compressed, checksummed and regularly checked for errors
automatically), and I won't have the risk of my main drive corrupting my
backups, because a read will FAIL if it doesn't pass the checksum.
Is my understanding correct?
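To make the behaviour I'm hoping for concrete, here is a toy Python sketch
of checksum-on-read (purely conceptual, not how ZFS or btrfs are actually
implemented):

    import hashlib

    class ChecksummedStore:
        """Toy model: keep a checksum with each block and refuse to return
        data that no longer matches it, instead of silently handing rotten
        bytes to the backup program."""

        def __init__(self):
            self.blocks = {}  # name -> (sha256 hex digest, data)

        def write(self, name, data):
            self.blocks[name] = (hashlib.sha256(data).hexdigest(), data)

        def read(self, name):
            digest, data = self.blocks[name]
            if hashlib.sha256(data).hexdigest() != digest:
                # With a single copy the damage can only be detected, not
                # repaired; with RAID 1 a good copy could be returned here.
                raise IOError("checksum mismatch reading " + name)
            return data

    store = ChecksummedStore()
    store.write("photo.jpg", b"original bytes")
    # Simulate bitrot flipping the data behind the filesystem's back:
    store.blocks["photo.jpg"] = (store.blocks["photo.jpg"][0], b"r0tten bytes!")
    try:
        store.read("photo.jpg")
    except IOError as e:
        print("read failed rather than returning bad data:", e)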
Cheers,
Noah
On Tue, 1 Jul 2014 10:03:44 Rohan McLeod wrote:
> Noah O'Donoghue wrote:
> > Hi All,
> >
> > After reading about bitrot and feeling guilty for storing my most
> > valuable data on cheap drives (although with backups!) I've been
> > thinking about moving to something more resilient.
>
> Out of curiosity I googled "bitrot" and whilst there seems to be some
> usage of "bitrot" in relation to RAM;
> mostly it seems to be in the context of storage media.
> As a novice I found:
>
> http://arstechnica.com/information-technology/2014/01/bitrot-and-atomic-cows-inside-next-gen-filesystems/
http://en.wikipedia.org/wiki/Write_Anywhere_File_Layout
That article claims that ZFS is the oldest of the "next generation
filesystems". WAFL did it first and NetApp (the developer of WAFL) sued Sun
alleging patent violation in ZFS.
> informative; but apart from a suggestion that it might be related to
> 'cosmic rays' and thermal magnetic effects;
> couldn't seem to find
> (a) a definition which is a measure of bitrot and
> (b) actual measures of this phenomenon in various media and differing
> conditions.
>
> Presumably, as a probabilistic phenomenon, bitrot might be defined in
> terms of the half-life of the data?
http://research.cs.wisc.edu/adsl/Publications/corruption-fast08.html
The above paper is the best reference I've seen. Half-life isn't a good
measure because corruption tends to come in bursts rather than as independent
bit decay; you can expect to lose ~50 sectors at a time on a TB+ disk.
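For a sense of scale (back-of-the-envelope arithmetic of my own, assuming
512-byte sectors; not figures from the paper):

    # Rough numbers only; ignores spare/reserved areas of the disk.
    disk_bytes = 10**12                      # a 1TB disk
    sector_size = 512
    sectors = disk_bytes // sector_size      # ~1.95 billion sectors
    burst = 50                               # sectors lost in one event
    print(sectors)                           # 1953125000
    print(burst * sector_size)               # 25600 bytes gone in one hit
    print(burst / sectors)                   # ~2.6e-08 of the disk, all at once

A per-bit half-life model would predict scattered independent flips; a 25KB
run of bad sectors that can land entirely inside one file is a different
failure mode, which I think is why half-life isn't a useful measure here.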
On Tue, 1 Jul 2014 12:29:39 Peter Ross wrote:
> For Russell: Have you seen this?
>
> https://wiki.freebsd.org/ZFS
>
> The first TODO entry is about file(1) and magic.
That is about ZFS dump files (the output of "zfs send"), not the block devices.
As I have never run zfs send and don't have any immediate plans to do so, this
hasn't been a concern for me. Thanks for the suggestion though, I've attached
it to the Debian bug report.
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/