
I'd like to host a Linux LAN party. I have a CBD location we can use on a weekend; it's good for public transport access, and if we have it on a Sunday there is some free parking in the area. There is free coffee and hot chocolate, and a fridge for anyone who wants to bring soda. There is free WiFi, and I can set up NAT on my laptop to provide LAN access. Ordering pizza delivery should be possible. Contact me off-list if you are interested.

the announcement is at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/htm...

On Wednesday, 16 August 2017 3:13:18 PM AEST Steve Roylance via luv-main wrote:
the announcement is at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.4_Release_Notes/chap-Red_Hat_Enterprise_Linux-7.4_Release_Notes-Deprecated_Functionality.html
No big deal; just use Debian or Ubuntu if you want to do serious storage on Linux.

Quoting Steve Roylance (roylance@corplink.com.au):
the announcement is at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/htm...
Maybe Stratis after interim use of XFS.
https://www.phoronix.com/scan.php?page=news_item&px=Stratis-Red-Hat-Project
https://stratis-storage.github.io/StratisSoftwareDesign.pdf

It's funny seeing XFS make a resurgence. I used it on Debian back before ext3 had become mainstream. At the time, it seemed solid technology but its Linux future was (then) in doubt because it was a huge patchset.

On Wednesday, 16 August 2017 11:12:06 AM AEST Rick Moen via luv-main wrote:
Maybe Stratis after interim use of XFS. https://www.phoronix.com/scan.php?page=news_item&px=Stratis-Red-Hat-Project https://stratis-storage.github.io/StratisSoftwareDesign.pdf
It's funny seeing XFS make a resurgence. I used it on Debian back before ext3 had become mainstream. At the time, it seemed solid technology but its Linux future was (then) in doubt because it was a huge patchset.
Last time I checked, XFS had no support for reducing the size of a filesystem. ZFS also has no support for that, so it's not necessarily a huge problem. But if Stratis is going to use multiple XFS filesystems to compete with the multiple ZFS mount points or BTRFS subvols, then it will be a massive problem.

Stratis is aiming for a version 1.0 release next year, and version 3.0 is aimed at having ZFS feature parity. That's not good for all the people who need ZFS features today!

XFS has no support for checksums that compare to those of ZFS and BTRFS. To do it properly you need to do it in the filesystem. I guess that Stratis could use DM to have a checksum layer, but there would be some overheads in trying to do it that way if you also want to deal with missing writes. Another option is to change XFS to have checksums, but that would be a huge change (fixing whatever problems they apparently have with BTRFS would be easier).

On 17 Aug. 2017 13:22, "Russell Coker via luv-main" <luv-main@luv.asn.au> wrote:
On Wednesday, 16 August 2017 11:12:06 AM AEST Rick Moen via luv-main wrote:
Maybe Stratis after interim use of XFS. https://www.phoronix.com/scan.php?page=news_item&px=Stratis-Red-Hat-Project https://stratis-storage.github.io/StratisSoftwareDesign.pdf
It's funny seeing XFS make a resurgence. I used it on Debian back before ext3 had become mainstream. At the time, it seemed solid technology but its Linux future was (then) in doubt because it was a huge patchset.
Both XFS and btrfs enthusiastically like to silently throw any data written in the past 5 days on the floor when there's a power failure/kernel panic, so there's that commonality.

On Wednesday, 16 August 2017 11:12:06 AM AEST Rick Moen via luv-main wrote:
It's funny seeing XFS make a resurgence. I used it on Debian back before ext3 had become mainstream. At the time, it seemed solid technology but its Linux future was (then) in doubt because it was a huge patchset.
Me too. I went from ext2 to XFS, then briefly to btrfs, and then ZFS. At the time, XFS was **THE** rock-solid, reliable journalling filesystem for Linux, with features unmatched by any of the mainline filesystems.

Nowadays I use a mixture of ext4 and ZFS - ext4 for /boot partitions, ZFS for everything else (even for the rootfs on most systems nowadays). On some systems I still use XFS for the rootfs, but only because I haven't bothered or have no compelling reason to convert them to root on ZFS.

The only reason I use ext4 for /boot is that it will make disaster recovery easier if that's ever needed - e.g. so that I can have grub entries (memdisk) to load a rescue ISO with ZFS support. In other words, the usual reason for having a separate /boot partition. There's no reason to have /boot on XFS - it's too small to benefit from any of the features of XFS that make it superior to ext4. There's no particular reason not to use XFS for /boot, either (except maybe greater overhead - I'd have to check to be sure, and it's not worth the time it would take me to google it).

On Thu, Aug 17, 2017 at 01:37:24PM +1000, Tim Connors wrote:
Both XFS and btrfs enthusiastically like to silently throw any data written in the past 5 days on the floor when there's a power failure/kernel panic, so there's that commonality.
That's always been a false claim about XFS. The truth is far simpler, and far more reasonable.

If there's a power failure or similar crash *while there is unsynced data in the write cache*, then after a reboot, if the crash circumstances were just right (or maybe "just wrong"), XFS can return a block of NUL bytes rather than whatever random garbage might have been in that unwritten block at the time. This confuses people because they see all those ugly NULs (e.g. embedded in their log file) and wonder WTF they're there.

This is no worse, and arguably better (because silent corruption is worse than visible corruption), than what some other filesystems do (which is return the random garbage that happened to be in the sector) - but in both cases, the data that you *wanted* written to the disk is gone, because it hadn't actually been written yet at the time of the crash.

This, BTW, is a race condition that's unavoidable by software alone (a very fast non-volatile write cache mitigates a LOT), with varying results depending on the order of operations - e.g. whether metadata is written before data or vice-versa, and whether the journal/metadata is synced before or after data. Whichever way that is done, there's always going to be some risk that either or both will be lost in a power failure or similar catastrophic failure.
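To make the write-cache window concrete, here is a minimal sketch in Python of the durable-write discipline being discussed; the function name and the example path are illustrative, not from any of the programs mentioned above:

    import os

    def durable_write(path, data):
        # A plain write() only puts data into the kernel's write cache;
        # a crash before writeback loses it (or, on XFS, may leave NULs
        # where it would have gone). fsync() blocks until the data and
        # metadata have reached stable storage, closing that window.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        try:
            os.write(fd, data)
            os.fsync(fd)
        finally:
            os.close(fd)

    durable_write("/tmp/example.log", b"this record is on disk once we return\n")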

On Thursday, 17 August 2017 3:16:49 PM AEST Craig Sanders via luv-main wrote:
On Thu, Aug 17, 2017 at 01:37:24PM +1000, Tim Connors wrote:
Both XFS and btrfs enthusiastically like to silently throw any data written in the past 5 days on the floor when there's a power failure/kernel panic, so there's that commonality.
That's always been a false claim about XFS.
That's assuming he really means 5 seconds, not 5 days. If he really meant 5 days, then I've never seen evidence to support such a claim.
If there's a power failure or similar crash *while there is unsynced data in the write cache*, then after a reboot, if the crash circumstances were just right (or maybe "just wrong") then XFS can return a block of NUL bytes rather than whatever random garbage might have been in that unwritten block at the time.
This confuses people because they see all those ugly NULs (e.g. embedded in their log file) and wonder WTF they're there.
Of course, the real issue if they have such problems is that an application either didn't call fsync() or fdatasync() when it should have, or the application isn't designed for data to be synchronised. For some tasks, such as compiling source code, you don't want the overhead of fdatasync(), and you can just run "make clean ; make all" if you had a power failure - with the recent work on reproducible builds you should even get a binary-identical result. MTAs are pretty good about calling the sync family of syscalls, and you shouldn't expect problems there. I've seen MySQL have problems on all filesystems, but if you use MySQL you really should have good backups.

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=430958
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=588254
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=578635
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=577756

In 2007 I reported a bug against dpkg because it wasn't calling fsync() or fdatasync() and was subject to data loss (I had proven that it had the same bug that was in rpm and had caused data loss on SLES clusters). The links above are to some of the related bug reports. I think that fixing #430958 has involved more developer time and user complaints than any other bug I ever filed in any software. It has also probably involved more user problems while it wasn't fixed than any other bug I reported - but this isn't obvious, as the result is just applications crashing for no good reason.

When an application writes a new file and doesn't call sync, a crash after the write can in practice give the same result as a crash immediately before the write - you lose the data. When an application appends to a file just before a crash, you can end up with the file longer but with zeros at the end on some filesystems (I think it's just Ext* and XFS, not BTRFS), but again, having a few NULs in your log file isn't a major problem, and you just lose the last write. The real problem is where an application overwrites an existing file before a crash and you end up with a merge of the data from the 2 versions of the file - and if that's a compressed file (like a LibreOffice file) it means you won't get much back.

I would hope that applications like LibreOffice would write a new temporary file, call fdatasync(), and then rename the temporary file over the old file. I'm sure that lots of programs don't do that, and I could probably file a dozen bug reports in a day if I wanted to test things out.

The real benefit of BTRFS in this regard is that it allows easy snapshotting. If LibreOffice does the wrong thing in this regard and one of my workstations crashes at an inconvenient time, then I can get the old version of the file from a snapshot that cron made. ZFS also allows the same snapshot functionality, but it's a bit harder to manage IMHO.
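The save pattern Russell describes (write a temporary file, fdatasync() it, rename it over the old file) might look like the following sketch in Python; the function name and temp-file suffix are illustrative, and real code should pick a unique temporary name:

    import os

    def atomic_save(path, data):
        # rename() is atomic on POSIX filesystems, so a crash leaves
        # either the complete old file or the complete new one - never
        # a merge of the two versions.
        tmp = path + ".tmp"
        fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        try:
            os.write(fd, data)
            os.fdatasync(fd)  # the new data must be on disk before the rename
        finally:
            os.close(fd)
        os.rename(tmp, path)
        # Sync the directory as well, so the rename itself survives a crash.
        dfd = os.open(os.path.dirname(path) or ".", os.O_DIRECTORY)
        try:
            os.fsync(dfd)
        finally:
            os.close(dfd)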

On 17 Aug. 2017 16:02, "Russell Coker via luv-main" <luv-main@luv.asn.au> wrote:
On Thursday, 17 August 2017 3:16:49 PM AEST Craig Sanders via luv-main wrote:
On Thu, Aug 17, 2017 at 01:37:24PM +1000, Tim Connors wrote:
Both XFS and btrfs enthusiastically like to silently throw any data written in the past 5 days on the floor when there's a power failure/kernel panic, so there's that commonality.
That's always been a false claim about XFS
Ah, my experience deceived me.

That's assuming he really means 5 seconds, not 5 days. If he really meant 5 days, then I've never seen evidence to support such a claim.

Correct: I edit a file in an editor; 5 days later the power fails; 3 days later I come back and my file is one big fat 0-byte file. fdatasync() regardless, that's an awfully long time to forget to flush the write cache. Why do I bring no evidence of this? Well, it's awfully hard to reproduce random data loss. And after the first few times it happens, you realise you're dealing with a basket case, reinstall on ext4, and move on with your life (my fileserver is ZFS, actually - you'll be amused by the number of silent data corruption bugs it's had despite its reputation, such as incorrect sparse hole calculation on ZFS send/recv).

On 17 August 2017 1:37:24 pm AEST, Tim Connors via luv-main <luv-main@luv.asn.au> wrote:
Both XFS and btrfs enthusiastically like to silently throw any data written in the past 5 days on the floor when there's a power failure/kernel panic, so there's that commonality.
See O_PONIES. Applications not using the API correctly to safely store data to disk are the problem. There's been mitigation in XFS for a number of years now.

On Fri, 18 Aug 2017, Stewart Smith wrote:
On 17 August 2017 1:37:24 pm AEST, Tim Connors via luv-main <luv-main@luv.asn.au> wrote:
Both XFS and btrfs enthusiastically like to silently throw any data written in the past 5 days on the floor when there's a power failure/kernel panic, so there's that commonality.
See O_PONIES. Applications not using the API correctly to safely store data to disk are the problem.
There's been mitigation in XFS for a number of years now.
Yes, I know all about libeatmydata, but you'll agree the race condition doesn't extend to 5 days after close(), right?

On Fri, Aug 18, 2017, at 05:58 PM, Tim Connors wrote:
On 17 August 2017 1:37:24 pm AEST, Tim Connors via luv-main <luv-main@luv.asn.au> wrote:
Both XFS and btrfs enthusiastically like to silently throw any data written in the past 5 days on the floor when there's a power failure/kernel panic, so there's that commonality.
See O_PONIES. Applications not using the API correctly to safely store data to disk are the problem.
There's been mitigation in XFS for a number of years now.
Yes, I know all about libeatmydata, but you'll agree the race condition doesn't extend to 5 days after close(), right?
It could; it entirely depends on what you have configured... you can make the kernel *very* unwilling to write things back to disk. 5 days sounds long, and I haven't heard of anything *that* long, but I wouldn't be surprised if you could configure the kernel and run workloads in a way that it could happen.
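The knobs in question are the vm.dirty_* sysctls. A small sketch in Python that prints the current writeback tunables on a Linux machine:

    # How long dirty pages may sit in the page cache is governed by
    # these tunables. Raising dirty_expire_centisecs (the age at which
    # dirty data becomes eligible for writeback) or the dirty ratios
    # widens the window in which a crash loses unsynced data.
    for knob in ("dirty_expire_centisecs", "dirty_writeback_centisecs",
                 "dirty_ratio", "dirty_background_ratio"):
        with open("/proc/sys/vm/" + knob) as f:
            print(knob, "=", f.read().strip())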

Quoting russell@coker.com.au (russell@coker.com.au):
Last time I checked, XFS had no support for reducing the size of a filesystem.
Correct.
But if Stratis is going to use multiple XFS filesystems to compete with the multiple ZFS mount points or BTRFS subvols, then it will be a massive problem.
The design paper claims that the ability to grow filesystems online adequately meets their needs. I've only just now skim-read that paper, so I cannot comment.
Stratis is aiming for a version 1.0 release next year, and version 3.0 is aimed at having ZFS feature parity. That's not good for all the people who need ZFS features today!
Welcome to the real world of software development, eh? RH aren't going to ship ZFS unless Oracle Corp. issue a licence exception (alongside the CDDL terms), and that's not going to happen. (Well, RH could ship FUSE_ZFS, but they aren't going to do that either, for reasons of performance.)
XFS has no support for checksums that compare to those of ZFS and BTRFS. To do it properly you need to do it in the filesystem.
Whitepaper section 10.2.2 et seq. talks about their plans in this area.

On Thursday, 17 August 2017 4:47:16 AM AEST Rick Moen via luv-main wrote:
aimed at having ZFS feature parity. That's not good for all the people who need ZFS features today!
Welcome to the real world of software development, eh?
RH aren't going to ship ZFS unless Oracle Corp. issue a licence exception (alongside the CDDL terms), and that's not going to happen.
(Well, RH could ship FUSE_ZFS, but they aren't going to do that either, for reasons of performance.)
Well, in the real world, ZFS on Ubuntu is working well. I prefer Debian, but Ubuntu has better support for ZFS due to different legal advice.

Quoting russell@coker.com.au (russell@coker.com.au):
Well, in the real world, ZFS on Ubuntu is working well. I prefer Debian, but Ubuntu has better support for ZFS due to different legal advice.
Different and quite clearly wrong, in my view. (No, I'm not an attorney, but I have been a major participant in OSI's licence evaluations for some decades, and have made a particular study of such things.)

In my view, Canonical are willful copyright violators, and are staking a great deal on a guess that kernel stakeholders are not going to haul them into court, where they would very likely lose in a major way, be ordered to pay a great deal in monetary damages, and be enjoined against further violation.

If such an action were brought under USA copyright law and Canonical were to lose, the basic level of statutory damages would be between US $750 and US $30,000, depending on the facts of the case and the court's mood (and the seriousness of the infringing act, and the infringer's financial net worth); but if willfulness were proved, then the plaintiff would also be awarded up to US $150,000 in additional statutory damages. That's aside from, and in addition to, any compensatory (aka 'actual') damages, plus profits made by the infringer.

On Thursday, 17 August 2017 9:49:31 PM AEST Rick Moen via luv-main wrote:
In my view, Canonical are willful copyright violators and are staking a great deal on a guess that kernel stakeholders are not going to haul them into court, where they would very likely lose in a major way, be ordered to pay a great deal in monetary damages, and be enjoined against further violation.
Oracle has not chosen to pursue any action against Canonical. I presume that a Canonical lawyer would have sent a letter to Oracle saying "we believe this to be legitimate; if you believe otherwise, please contact us before we release", which would limit Oracle's ability to sue them. In any case it's not a problem for people who use Ubuntu.

Quoting russell@coker.com.au (russell@coker.com.au):
Oracle has not chosen to pursue any action against Canonical.
More to the point, so far, neither have the Linux kernel coders. But the copyright violation is real, and Canonical risk either set of stakeholders filing suit at any time and getting them severely sanctioned and that practice terminated.
In any case it's not a problem for people who use Ubuntu.
I personally don't like dealing with companies that indulge in shady business practices that violate the ethics of the open source community, and this isn't the first time Canonical have done so. Moreover, I personally eschew dealings with firms inclined to make potentially fatal legal mistakes.

As to customer legal liability, probably as you say, but I wouldn't rule that out. https://www.law.cornell.edu/wex/contributory_infringement

On Friday, 18 August 2017 4:30:05 AM AEST Rick Moen via luv-main wrote:
Quoting russell@coker.com.au (russell@coker.com.au):
Oracle has not chosen to pursue any action against Canonical.
More to the point, so far, neither have the Linux kernel coders. But the copyright violation is real, and Canonical risk either set of stakeholders filing suit at any time and getting them severely sanctioned and that practice terminated.
What rights do the Linux kernel coders have in this regard?
In any case it's not a problem for people who use Ubuntu.
I personally don't like dealing with companies that indulge in shady business practices that violate the ethics of the open source community, and this isn't the first time Canonical have done so. Moreover, I personally eschew dealings with firms inclined to make potentially fatal legal mistakes.
As to customer legal liability, probably as you say, but I wouldn't rule that out. https://www.law.cornell.edu/wex/contributory_infringement
If you knowingly infringe then that's the case. If you believe that Canonical and Oracle have sorted things out then you are clear.

Quoting russell@coker.com.au (russell@coker.com.au):
What rights do the Linux kernel coders have in this regard?
Copyright title conferring ownership of the abstract right of distribution of derivative works. If you're going to go around alleging that in-kernel filesystems are not derivative works of the Linux kernel, good luck with that. To quote a saying from Damon Runyon, riffing off Ecclesiastes 9:11, 'The race is not to the swift, nor the battle to the strong..., but that's the way to bet.'
If you knowingly infringe then that's the case. If you believe that Canonical and Oracle have sorted things out then you are clear.
In one of the two USA copyright cases commonly cited for contributory copyright infringement, Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913 (2005), respondent Grokster was found to have actual knowledge of infringement. However, in the other case commonly cited, Sony Corp. v. Universal City Studios, Inc., 464 U.S. 417 (1984), the court found Sony to have had 'constructive knowledge', which is to say not actual knowledge but circumstances where Sony should have known. So, no.

I wrote:
Quoting russell@coker.com.au (russell@coker.com.au):
What rights do the Linux kernel coders have in this regard?
Copyright title conferring ownership of the abstract right of distribution of derivative works.
If you're going to go around alleging that in-kernel filesystems are not derivative works of the Linux kernel, good luck with that. To quote a saying from Damon Runyon, riffing off Ecclesiastes 9:11, 'The race is not to the swift, nor the battle to the strong..., but that's the way to bet.'
If you knowingly infringe then that's the case. If you believe that Canonical and Oracle have sorted things out then you are clear.
In one of the two USA copyright cases commonly cited for contributory copyright infringement, Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913 (2005), respondent Grokster was found to have actual knowledge of infringement. However, in the other case commonly cited, Sony Corp. v. Universal City Studios, Inc., 464 U.S. 417 (1984), the court found Sony to have had 'constructive knowledge', which is to say not actual knowledge but circumstances where Sony should have known.
And, yet again, for reasons that passeth understanding, you chose to speak as if violating the GPLv2 licence terms of the Linux kernel doesn't matter, or as if the authors of the Linux kernel don't own copyright title. (Or as if in-kernel filesystem drivers aren't derivative works of the kernel - and again, _good luck_ with that argument.)

To be clear, I very much doubt that the mainline kernel authors are likely to go around suing small businesses and individual users for contributory (or other) copyright violation, especially if they don't -distribute- the infringing derivative work, which is the problem with Ubuntu's copyright violation. But users who participate in this infringement of the kernel coders' licensing terms are free to feel a bit sleazy, and IMO ought to.

On 17 August 2017 1:22:09 pm AEST, Russell Coker via luv-main <luv-main@luv.asn.au> wrote:
XFS has no support for checksums that compare to those of ZFS and BTRFS. To do
XFS currently does metadata checksums. There's work to get data checksums, but that's a larger on-disk format change.

On Friday, 18 August 2017 2:52:20 PM AEST Stewart Smith wrote:
On 17 August 2017 1:22:09 pm AEST, Russell Coker via luv-main <luv-main@luv.asn.au> wrote:
XFS has no support for checksums that compare to those of ZFS and BTRFS. To do
XFS currently does metadata checksums.
There's work to get data checksums, but that's a larger on-disk format change.
My understanding is that they just do checksums on blocks of metadata, so if a block is corrupted that will be noticed. If a write to a block is missed and an old version of the block remains, will that be noticed? If 2 blocks are written such that block 1 needed to be written first for correct behavior, but after a power failure block 2 was committed and block 1 wasn't, will that be noticed?

Writing a new copy of all the metadata up to the root of the filesystem (as ZFS and BTRFS do) is the obvious way of solving this. But solving it by journaling everything and having checksums is a viable option too.

BTRFS has support for "dup" metadata, so if one copy is corrupt the other can be used. ZFS has a "copies=" option for filesystems to allow multiple copies of data; however many copies you have of data, there will be one more copy of metadata, in addition to whatever RAID options you might be using. A filesystem that has only a single copy of metadata will lose data if there is corruption. It's good to flag errors, but if you can't fix them the benefits are small.
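A toy illustration of the distinction Russell is drawing: a checksum stored inside the block it covers cannot catch a missed write, because a stale block is still internally consistent, whereas a checksum stored one level up in the parent block pointer (the ZFS/BTRFS approach) catches it. The names here are illustrative; no real on-disk format is being modelled:

    import hashlib

    def csum(data):
        return hashlib.sha256(data).digest()

    # The device still holds an old block with its inline checksum.
    old_block = b"old metadata"
    stored = (old_block, csum(old_block))

    # The filesystem writes a new version, but the drive drops the
    # write; only the checksum recorded in the parent block made it.
    new_block = b"new metadata"
    parent_csum = csum(new_block)

    data, inline_csum = stored  # what a later read actually returns

    # Inline check passes: the stale block is self-consistent, so the
    # missed write goes unnoticed.
    print("inline checksum ok:", csum(data) == inline_csum)   # True

    # Parent check fails: the checksum one level up describes the new
    # block, so reading the stale one is detected.
    print("parent checksum ok:", csum(data) == parent_csum)   # False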
participants (7):

- Andrew Spiers
- Craig Sanders
- Rick Moen
- Russell Coker
- Steve Roylance
- Stewart Smith
- Tim Connors