background fsck service - any tools?

Is there any tool that will unmount a filesystem and fsck it, say once every 2-3 months, to avoid the massive slow fsck on a reboot every 6-12 months? If not, I am thinking of a cron job that tries to umount the filesystem in the middle of the night, runs a basic fsck, and if all goes well remounts it, mailing the results to root.

I have modest root/usr partitions, but the larger partitions used only for recording TV take 30+ minutes for the occasional fsck, and that is always demanded on a reboot, sometimes very inconveniently.

Andrew
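[A minimal sketch of such a cron job, assuming a hypothetical /dev/sdb1 mounted at /srv/tv; fsck -p only preens (fixes trivial problems non-interactively), and anything a cron job prints is mailed to root anyway:]

    #!/bin/sh
    # sketch only: nightly fsck of an idle filesystem,
    # assuming /dev/sdb1 mounted on /srv/tv (both hypothetical)
    DEV=/dev/sdb1
    MNT=/srv/tv
    if umount "$MNT" 2>/dev/null; then
        # -p: preen, i.e. fix trivial problems non-interactively
        out=$(fsck -p "$DEV" 2>&1); status=$?
        mount "$MNT"
        echo "$out" | mail -s "fsck $DEV exited $status" root
    else
        echo "$MNT is busy, skipped fsck" | mail -s "fsck $DEV skipped" root
    fi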

Andrew Worsley <amworsley@gmail.com> wrote:
Is there any tool that will unmount a filesystem and fsck it, say once every 2-3 months, to avoid the massive slow fsck on a reboot every 6-12 months?
I avoid the problem altogether by not using ext2/3/4 as my file system. The cron job might be your easiest solution.

On Tue, Nov 06, 2012 at 02:05:25PM +1100, Jason White wrote:
Andrew Worsley <amworsley@gmail.com> wrote:
Is there any tool that will unmount a filesystem and fsck it, say once every 2-3 months, to avoid the massive slow fsck on a reboot every 6-12 months?
I avoid the problem altogether by not using ext2/3/4 as my file system.
yep, avoiding that idiocy is one of the minor benefits of using a different filesystem, like xfs (or btrfs or zfs, although zfs is probably far too resource-hungry for a little media server).

however, if you want to stick with ext2 or 3 or 4, you can use tune2fs to tell ext2/3/4 not to do the stupid mount-count and/or interval based fscks. e.g. to disable both:

    tune2fs -i 0 -c 0 /dev/xxxx

where /dev/xxxx is the device node for the filesystem you're tuning.

i tend to disable both when i use ext2/3/4 because I reboot so infrequently that they cause a lengthy fsck on every reboot...which is tedious and annoying and really shouldn't be necessary (if it is necessary, then the filesystem is too broken to be worth using)
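[For reference, you can inspect the current triggers before zeroing them (same placeholder device node):]

    # show when/why the next boot-time fsck would trigger
    tune2fs -l /dev/xxxx | grep -Ei 'mount count|check interval|last checked'
    # then disable both triggers, as above
    tune2fs -c 0 -i 0 /dev/xxxx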
The cron job might be your easiest solution.
maybe. depends on what files might be opened on that filesystem at the time the cron job runs. or what processes might try to open a file while the fs is unmounted.

also, unless you use tune2fs to set '-c 0', every unmount and mount will increase the mount count...which will also trigger an automatic fsck at boot.

craig

ps: i converted my mythtv box over to zfs a few months back and it is working well...but it has a 6-core AMD Phenom-II 1090T with 8GB RAM and 4 x 2 TB drives. it doesn't do much else except record, transcode, and play DVB recordings.

i mainly converted because i was tired of having myth1, myth2, myth3, and myth4 directories and having to manually shuffle files around if anything went wrong. i've effectively lost the capacity of one of the drives, but raid-z makes it easy to replace a drive if one of them dies, and will make it easier to upgrade to 3 or 4TB drives when they become cheap enough to be worth while (which, with luck, will be before the drives get old enough to need replacing anyway :)

-- craig sanders <cas@taz.net.au>

Craig Sanders <cas@taz.net.au> writes:
On Tue, Nov 06, 2012 at 02:05:25PM +1100, Jason White wrote:
Andrew Worsley <amworsley@gmail.com> wrote:
Is there any tool that will unmount a filesystem and fsck it, say once every 2-3 months, to avoid the massive slow fsck on a reboot every 6-12 months?
Use a journalled filesystem.
I avoid the problem altogether by not using ext2/3/4 as my file system.
That's an integrity check. You can skip it if you really want to:
tune2fs -i 0 -c 0 /dev/xxxx
The cron job might be your easiest solution.
I tried this, but it whinges constantly about leaking fds, because lvm is stupid. I haven't gotten around to cleaning it up.

$ cat /etc/cron.weekly/avoid-fsck
#!/bin/bash
## e2fsck usually triggers after enough mounts, or enough time
## without a fsck.  For always-on servers, the latter usually means
## that after an outage, you have to wait a few hours for the fscks to
## finish before normal operation resumes.  This is especially
## annoying as it cannot be skipped on Ubuntu 10.04 servers.
##
## Instead we make an LVM snapshot of each ext LV, and if *that* fscks
## OK, we conclude that there were no errors and we set the snapshot's
## origin to indicate that a fsck has taken place.

# Boilerplate prelude ################################################
set -eEu
set -o pipefail
trap 'echo >&2 "$0: unknown error"' ERR

while getopts d opt
do
    case "$opt" in
        (d) set -x;;
        ('?') exit 1;;
    esac
done
shift $((${OPTIND:-1}-1))

# Begin code #########################################################

# Silently succeed if the necessary tools aren't available, as that
# strongly indicates this script is not needed on this host.
{ which lvs && which tune2fs; } &>/dev/null || exit 0

# Output of e2fsck is desirable iff e2fsck had a genuine issue.  It
# may be arbitrarily long, so a temporary file is more appropriate
# than a simple bash variable.
f="`mktemp -t avoid-fsck.XXXXXX`"
trap 'rm -f "$f"' EXIT

#lvs | sed 1d | grep '[-o]wi-ao' |  # list of LVs (FIXME: dodgy regex).
lvs --noheadings --separator , --options lv_name,vg_name,origin |
while IFS="$IFS," read lv vg origin
do
    # Skip snapshots.
    test -z "$origin" || continue

    # Skip non-ext LVs.
    tune2fs -l "/dev/$vg/$lv" &>/dev/null || continue

    # Cleanup any mess left over from a previous run.
    test ! -e "/dev/$vg/fsck_$lv" ||
        >/dev/null lvremove -f "/dev/$vg/fsck_$lv"

    # NB: 4G should be plenty of space for the COW, since we are only
    # going to keep it around long enough to do a fsck.
    >/dev/null lvcreate --snapshot "/dev/$vg/$lv" --name "fsck_$lv" --size 4G

    if nice ionice -c3 e2fsck -p "/dev/$vg/fsck_$lv" >"$f" ||
        test $? -eq 1 -o $? -eq 4 -o $? -eq 5  # Ignore "safe" statuses.
    then
        >/dev/null tune2fs -C0 -Tnow "/dev/$vg/$lv"
    else
        >&2 echo "e2fsck -p /dev/$vg/fsck_$lv failed"
        >&2 cat "$f"
    fi

    >/dev/null lvremove -f "/dev/$vg/fsck_$lv"
done

On 09/11/12 11:16, Trent W. Buck wrote:
Use a journalled filesystem.
As long as it's not ext4 with journal checksums.. :-)

https://lwn.net/Articles/521803/

Now fixed in mainline and backported to 3.4.18 & 3.6.6.

cheers, Chris
-- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

On Tue, 6 Nov 2012, Andrew Worsley <amworsley@gmail.com> wrote:
I have modest root/usr partitions, but the larger partitions used only for recording TV take 30+ minutes for the occasional fsck, and that is always demanded on a reboot, sometimes very inconveniently.
If you have an Ext* filesystem that is used for storing large files then you should make it Ext4 (which has some optimisations for fsck on unallocated space) and you should have a number of inodes that is a good match for the number of files.

But there are other filesystems that offer advantages. XFS has traditionally been one of the best options for large files and generally needs no fsck. BTRFS is designed to never need a fsck and in terms of practically never needing a fsck should be better than XFS. ZFS is good, but maybe not for a little PVR.

Finally there's nothing stopping you from just increasing the amount of time between fsck runs. If a system is running 24*7 then it probably won't need a fsck every year.

-- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/
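[A sketch of both suggestions, with a hypothetical /dev/sdb1; -T largefile4 uses the largefile4 usage type from /etc/mke2fs.conf (roughly one inode per 4MiB, suiting a few large recordings), and is of course destructive:]

    # recreate the recordings fs with far fewer inodes (DESTROYS DATA)
    mkfs.ext4 -T largefile4 /dev/sdb1
    # or simply stretch the existing time-based check to ~6 months
    tune2fs -i 180d /dev/sdb1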

On Tue, 6 Nov 2012, Andrew Worsley <amworsley@gmail.com> wrote:
I have modest root/usr partitions, but the larger partitions used only for recording TV take 30+ minutes for the occasional fsck, and that is always demanded on a reboot, sometimes very inconveniently.
If you have an Ext* filesystem that is used for storing large files then you should make it Ext4 (which has some optimisations for fsck on unallocated space) and you should have a number of inodes that is a good match for the number of files.
But there are other filesystems that offer advantages. XFS has traditionally been one of the best options for large files and generally needs no fsck. BTRFS is designed to never need a fsck and in terms of practically never needing a fsck should be better than XFS. ZFS is good, but maybe not for a little PVR.
Finally there's nothing stopping you from just increasing the amount of time between fsck runs. If a system is running 24*7 then it probably won't need a fsck every year.
Is a routine fsck on ext* filesystems still recommended, or just done because "that's the way we've always done it"? James

On Tue, 6 Nov 2012, James Harper <james.harper@bendigoit.com.au> wrote:
Is a routine fsck on ext* filesystems still recommended, or just done because "that's the way we've always done it"?
It's been a while since I've seen anything noteworthy happen on such a fsck. But I guess it depends on how paranoid you are.

On Tue, 6 Nov 2012, James Harper <james.harper@bendigoit.com.au> wrote:
I wonder if the following would be valid, assuming you are using LVM:
1. take an LVM snapshot
2. fsck the LVM snapshot
3. if the fsck of the snapshot is good, reset the mount count and/or time last checked interval of the origin fs. If the fsck was bad, do the unmount and fsck (or mark the fs as requiring an fsck next boot if the fs cannot be unmounted)
4. email the results
Yes, that's a good option. -- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/

On Tue, 6 Nov 2012, James Harper wrote:
On Tue, 6 Nov 2012, Andrew Worsley <amworsley@gmail.com> wrote:
I have modest root/usr partitions [...]

Finally there's nothing stopping you from just increasing the amount of time between fsck runs. If a system is running 24*7 then it probably won't need a fsck every year.
Is a routine fsck on ext* filesystems still recommended, or just done because "that's the way we've always done it"?
James
I have been using Linux now for nearly 20 years; for most of that time I have used (and still use) ext2 or ext3. During the twenty years I have never seen one of these "scheduled" fsck's produce any errors caused by a failure in the drive or the file system. Errors have been produced a small number of times (around 2 or 3 times in the 20 year period) but they have been caused by external influences, for instance a failing 12V line on the power supply. This has happened to me twice; both times the problem was high internal resistance in the 12V line's filter condensers. Such a failure will of course affect all drives no matter what the file system.

So in the end one could probably turn it off, as has already been suggested, and lose no sleep over it. I have in fact set the number of mounts between checks to values between 70 and 140; as each system is not used daily (I have three) the fsck's come around about once every 6 to 12 months. As the data stored is mostly hand-entered historical engineering data plus some music (oh, also my Debian repositories) not much storage space is used, so these fscks are not too bad.

Oh, by the way... my 20th anniversary of using Linux will come up around June/July next year. How many people remember SLS and Yggdrasil?

Lindsay

On 06/11/12 18:09, Lindsay Sprinter wrote:
My 20th anniversary of using Linux will come up around June/July next year. How many people remember SLS and Yggdrasil?
Yup, and the MCC Interim distro.. ;-) -- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

Oh, by the way...
My 20th anniversary of using Linux will come up around June/July next year. How many people remember SLS and Yggdrasil?
Yes, although my first distro was Slackware (I think) in early '94. I downloaded it onto a bunch of 5.25" 1.2MB disks formatted as 1.44MB the year I started uni :)

James

On 06.11.12 18:09, Lindsay Sprinter wrote:
I have been using Linux now for nearly 20 years; for most of that time I have used (and still use) ext2 or ext3. During the twenty years I have never seen one of these "scheduled" fsck's produce any errors caused by a failure in the drive or the file system.
+1

Running ext3: I turn my computer off at night, so 30 boots per month == one fsck per month. (I do other things while it's coming up. It makes a noise when it's ready.) The scheduled fscks haven't found anything in a decade. Even when the motherboard died, I replaced it, and resumed operations with the same drives. (Chucked the power supply - the electrolytics in them don't improve with time.)

And when power in the whole street goes down, it is an _unscheduled_ fsck, triggered by detection of "unclean shutdown", which performs a more worthwhile check, anyway. And yes, it's then been necessary to fsck with a backup superblock, to get going again. But nothing's been lost so far, and lost+found is still empty, so there were not even stray inodes.

Erik

-- At the moment Greenland is rising in some places by three centimeters per year, but the speed is accelerating. If the whole ice layer disappears, Greenland will rise by about one kilometer. - My translation of a paragraph in http://jyllands-posten.dk/nyviden/article4775333.ece Noticed a few earthquakes in recent years?

On Tue, 6 Nov 2012, Lindsay Sprinter wrote:
On Tue, 6 Nov 2012, James Harper wrote:
Is a routine fsck on ext* filesystems still recommended, or just done because "that's the way we've always done it"?
James
I have been using Linux now for nearly 20 years; for most of that time I have used (and still use) ext2 or ext3. During the twenty years I have never seen one of these "scheduled" fsck's produce any errors caused by a failure in the drive or the file system.
I chose ext2/3 as the most stable Linux filesystem for many years, based on my experience and other reports (e.g. the XFS-related threads on this mailing list and others did not give me too much confidence).

At least 2007-2010 (maybe starting earlier) "tune2fs -i 0 -c 0 <fs>" was part of the build because I did not see a benefit in running regular fscks.
My 20th anniversary of using Linux will come up around June/July next year. How many people remember SLS and Yggdrasil?
I bought my first PC at home in the Northern summer of 1993, an AMD 386 40 MHz system with 4 MB RAM (and a black&white 14" monitor). Finally "Unix" at home :-) I downloaded a bunch of 3 1/2" floppies from the FTP server at Uni Rostock.

I am not sure anymore whether it was SLS or the first Slackware release. It came with "b[1-?]" (base system) disks, "x[1-?]" (X11) disks, and I believe there were two or three more series (with different letters) but I don't remember which. One was an "n" (network) series, supplying TCP/IP utilities, I think.

Cheers Peter

Quoting Peter Ross (Peter.Ross@bogen.in-berlin.de):
On Tue, 6 Nov 2012, Lindsay Sprinter wrote:
On Tue, 6 Nov 2012, James Harper wrote:
Is a routine fsck on ext* filesystems still recommended, or just done because "that's the way we've always done it"?
James
I have been using Linux now for nearly 20 years; for most of that time I have used (and still use) ext2 or ext3. During the twenty years I have never seen one of these "scheduled" fsck's produce any errors caused by a failure in the drive or the file system.
I chose ext2/3 as the most stable Linux filesystem for many years, based on my experience and other reports (e.g. the XFS-related threads on this mailing list and others did not give me too much confidence)
At least 2007-2010 (maybe starting earlier) "tune2fs -i 0 -c 0 <fs>" was part of the build because I did not see a benefit in running regular fscks.
My 20th anniversary of using Linux will come up around June/July next year. How many people remember SLS and Yggdrasil?
I bought my first PC at home in the Northern summer of 1993, an AMD 386 40 MHz system with 4 MB RAM (and a black&white 14" monitor).
Finally "Unix" at home:-) I downloaded a bunch of 3 1/2" floppies from ther FTP server at Uni Rostock.
I am not sure anymore whether it was SLS or the first Slackware release. It came with "b[1-?]" (base system) disks, "x[1-?]" (X11) disks, and I believe there were two or three more series (with different letters) but I don't remember which. One was an "n" (network) series, supplying TCP/IP utilities, I think.
That's Slackware. When I first built a Linux system in '93, I wasn't yet aware of Slackware or SLS, so a friend and I (who had both been trying various *ixes: AT&T System V release 3.22, Novell UnixWare 2.0, 386BSD 0.1) downloaded H.J. Lu's three-disk Linux Base System floppy images from tsx-11.mit.edu or sunsite.unc.edu (I forget which) and used those to build up systems from source tarballs.

Here's a copy of the docs: http://www.ibiblio.org/pub/historic-linux/ftp-archives/tsx-11.mit.edu/Oct-07...

And hey! Here are the MINIX-formatted floppy images: http://www.ibiblio.org/pub/historic-linux/ftp-archives/sunsite.unc.edu/Nov-0...

On Wed, Nov 07, 2012 at 10:33:33AM +1100, Peter Ross wrote:
I bought my first PC at home in the Northern summer of 1993, an AMD 386 40 MHz system with 4 MB RAM (and a black&white 14" monitor).
The first "PC" I ever bought was an XT clone in 1982. IIRC it cost about $1500 for 640K RAM and dual 360K floppies, with a hercules graphics card and an amber monitor (the herc card didn't do colour, but the text quality was vastly superior to what a CGA card was capable of). The 20MB hard disk I bought a few months later cost another $1000. enormous, 20MB was more than 55 floppies worth of data. (prior to that, i had TRS-80s). My first linux system had an 80386 CPU (an Intel CPU originally, later upgraded to AMD 386-40) with 4MB like yours, with an EGA card and monitor and, IIRC, a 2nd-hand SCSI 320MB hard disk and controller card. Even with 0.x kernels, compiling the kernel was an overnight job. I remember getting the 4MB RAM for that motherboard at the bargain price of $1000 (and that WAS a good price at the time). 4GB, or 1024 times as much RAM, costs $18 today. I think I bought the parts for and built the machine in 1990. Ran ms-dos and desqview on it at first (popular choice at the time for fidonet), then tried OS/2, and installed linux in '91 (partly because, unlike OS/2, it supported serial terminals and uucp worked properly, and i was in the process of switching from fidonet to APANA). I first installed linux on a 50MB partition to try it out...and two weeks later completely reformatted the disk, converted entirely to linux, and never looked back.
Finally "Unix" at home:-) I downloaded a bunch of 3 1/2" floppies from ther FTP server at Uni Rostock.
I am not sure anymore whether it was SLS or the first Slackware release. It came with "b[1-?]" (base system) disks, "x[1-?]" (X11) disks, and I believe there were two or three more series (with different letters) but I don't remember which. One was an "n" (network) series, supplying TCP/IP utilities, I think.
sounds like early-90s slackware. i vaguely recall downloading and installing those floppies.

I started with MCC and then SLS (or maybe the other way around, can't remember exactly), switched to slackware, then later switched to debian in '94...have used that by preference ever since. i've used other distros when i had to (e.g. RH or SuSE at work) but if i have the choice, i'll always use debian.

hmmmm....that's right. it must have been MCC -> SLS -> Slackware, because the reason i switched to slackware was that it was just like SLS but with lots of bugs fixed.

craig

-- craig sanders <cas@taz.net.au>

On Wed, Nov 07, 2012 at 02:50:01PM +1100, Craig Sanders wrote:
I think I bought the parts for and built the machine in 1990. Ran ms-dos and desqview on it at first (popular choice at the time for fidonet), then tried OS/2, and installed linux in '91 [...]
that can't be right. it must have been 1991 I built the machine and '92 i installed linux. i was definitely running MCC Linux before Oct '92 (when my daughter was born) craig -- craig sanders <cas@taz.net.au> BOFH excuse #163: no "any" key on keyboard

On Wed, 7 Nov 2012, Craig Sanders wrote:
On Wed, Nov 07, 2012 at 10:33:33AM +1100, Peter Ross wrote:

The first "PC" I ever bought was an XT clone in 1982. IIRC it cost about $1500 for 640K RAM and dual 360K floppies, with a hercules graphics card and an amber monitor (the herc card didn't do colour, but the text quality was vastly superior to what a CGA card was capable of).
The price does not sound right: I found a quote from the 17th April 1986:

1 M24-20M-640 for $5500
    Olivetti M24 Computer, Bus Converter (?), 640K RAM, 20MB HDD, 1x320K FDD, IBM Keyboard, Colour Screen, 8087 Maths Co-Processor, MS-DOS Operating System

1 M24-20M-640 SP for $7000
    Olivetti M24 Computer, Bus Converter (?), 640K RAM, 20MB HDD, 1x320K FDD, Olivetti Keyboard, Colour Screen, 8087 Maths Co-Processor, MS-DOS Operating System

I cannot spot the difference.. I don't know why one is an "SP" version and $1500 more expensive. I don't think the keyboard was worth that much ;-)

Anyway, with 1982 you would have been ahead of the time: according to Wikipedia the "IBM PC/XT (model 5160)" was released March 29 1983 (http://en.wikipedia.org/wiki/IBM_Personal_Computer_XT)

The first IBM/PC came with a choice of 3 OSes (PC-DOS, CP/M-86 and UCSD p-System)! I actually did not know that.. In 1987 I learnt Intel assembler using CP/M-86 (on an East German PC clone, an A7100, with a Soviet copy of an Intel 8086 in it, the K 1810 WM 86, which we called the "washing machine" CPU ;-) Actually, when we opened a few, we always found a Siemens 8086 clone.. so I am not sure whether the K 1810 WM 86 really existed (or was just made up to render the COCOM restrictions useless). The "COCOM" list prevented the export of computer technology into the Communist countries. But still we had a VAX 11-780 at the uni, somehow smuggled from the West. That beast did not fit easily into a car..

I had a book with U880 (East German Z80 clone) and U8000 (Z8000) assembler, and UCSD in it. I learnt Assembler and Pascal with it but never used UCSD, which was a compiler system producing machine-independent bytecode (a "Pascal VM"). I used Turbo Pascal 3.0 on CP/M instead, since 1985, on an East German SCP 1715 (CP/M on Z80 [U880]).

We also had an East German PDP-11 clone in 1985, the K1630. It was running an RSX-11 clone, but a colleague installed Unix on it: MUTOS 1600. But I left for uni just then.

The "other" world I touched then were the ESER mainframes copying IBM mainframes. When I started, we had just got a first terminal to make programming on the mainframe more interactive - we used punch cards before that. The terminal was from Romania and came with a Romanian manual.. the production of devices came from various East European states.

At uni we had the VAXes I mentioned, again a K1630, and the electrotechnics department had a UDOS system on a U8000 [Zilog 8000] machine, a Unix System 7 clone. I just got access to it when the wall came down - and a year later I had DECs and Suns and HP/UX machines :-)

So the East was pretty much busy re-enacting the IT progress in the West, a process of re-engineering and re-inventing and sometimes plain stealing. Real inventions were pretty rare in that area, I think..

Successfully sidetracked ;-)

Cheers Peter

On Wed, Nov 07, 2012 at 03:40:16PM +1100, Peter Ross wrote:
On Wed, 7 Nov 2012, Craig Sanders wrote:
On Wed, Nov 07, 2012 at 10:33:33AM +1100, Peter Ross wrote:

The first "PC" I ever bought was an XT clone in 1982. IIRC it cost about $1500 for 640K RAM and dual 360K floppies, with a hercules graphics card and an amber monitor (the herc card didn't do colour, but the text quality was vastly superior to what a CGA card was capable of).
The price does not sound right: I found a quote from the 17th April 1986:
1 M24-20M-640 for $5500
Olivetti were a name-brand PC, and an expensive one, and not-entirely-compatible (i worked in a few places that had some; lots of stuff didn't run on them). no-name clones were much cheaper; the price difference was far greater than it is between noname clones and name-brand PCs these days.
I cannot spot the difference.. I don't know why one is an "SP" version and $1500 more expensive. I don't think the keyboard was worth that much ;-)
likely no-one at the time could either. but people (well, businesses) happily bought them, probably for similar reasons to why people happily pay Apple $100 for extra RAM worth maybe $10-$15.
Anyway, with 1982 you would have been ahead of the time: according to Wikipedia the "IBM PC/XT (model 5160)" was released March 29 1983 (http://en.wikipedia.org/wiki/IBM_Personal_Computer_XT)
might have been '83 or '84 rather than '82. I remember taking that PC with me when i moved to Sydney in '85, and i'd had it for quite a while (before moving out of home, and then to at least two share-houses before i moved to sydney).
The first IBM/PC came with a choice of 3 OSes (PC-DOS, CP/M-86 and UCSD p-System)! I actually did not know that..
i had PC-DOS and CP/M-86.

craig
-- craig sanders <cas@taz.net.au> BOFH excuse #333: A plumber is needed, the network drain is clogged

On Wed, Nov 7, 2012 at 2:50 PM, Craig Sanders <cas@taz.net.au> wrote:
On Wed, Nov 07, 2012 at 10:33:33AM +1100, Peter Ross wrote:
I bought my first PC at home in the Northern summer of 1993, an AMD 386 40 MHz system with 4 MB RAM (and a black&white 14" monitor).
The first "PC" I ever bought was an XT clone in 1982. IIRC it cost about $1500 for 640K RAM and dual 360K floppies, with a hercules graphics card and an amber monitor (the herc card didn't do colour, but the text quality was vastly superior to what a CGA card was capable of).
The 20MB hard disk I bought a few months later cost another $1000. It seemed enormous; 20MB was more than 55 floppies worth of data.
I had an XT with a blue monitor that cost me around a grand when I got it. I remember going to a "clearance" sale that a supplier in Perth (where I lived) had every year to clear out old and damaged stock. I got a 40MB HDD that had been returned under warranty and was only reading one side of the platter, making it a 20MB drive. I picked that up for $40; from memory it was an MFM drive.

Craig Sanders <cas@taz.net.au> wrote:
I think I bought the parts for and built the machine in 1990. Ran ms-dos and desqview on it at first (popular choice at the time for fidonet), then tried OS/2, and installed linux in '91 (partly because, unlike OS/2, it supported serial terminals and uucp worked properly, and i was in the process of switching from fidonet to APANA). I first installed linux on a 50MB partition to try it out...and two weeks later completely reformatted the disk, converted entirely to linux, and never looked back.
I was a comparative late-comer: my first installation was in 1998 on a new i586 laptop, but by then I already knew what I wanted, having used SunOS to access the Internet as an undergraduate student. It was much better to have a Unix-like system running on my own machine than to connect every day to a remote system via telnet or via a modem.

I was fortunate to have the opportunity to learn Unix as a user before embarking on system administration, which at that point was relatively easy as I was comfortable with the shell environment and text editing. People whose first experience of a Unix-like system is on their own machine following a Linux installation have to learn the shell and utilities, text editing and system administration all at once (unless they remain perpetual beginners in a GUI environment, with skills at the elementary level; but I'm interested here in those who learn and how they do so, rather than those who don't).

It might be better to start as I did with a user account, then to learn the shell, Unix utilities and text editing thoroughly, and then to move on to basic administrative tasks, instead of trying to absorb everything in a hurry.

Lindsay Sprinter <zlinw@mcmedia.com.au> writes:
During the twenty years I have never seen one of these "scheduled" fsck's produce any errors caused by a failure in the drive or the file system. Errors have been produced a small number of times but they have been caused by external influences [...] So in the end one could probably turn it off as has already been suggested and lose no sleep over it.
IIRC I've seen it due to dying HDDs, so I would add an erratum that if you turn off periodic fscks of ext, make sure to turn on periodic SMART checks.

Lindsay Sprinter <zlinw@mcmedia.com.au> writes:
During the twenty years I have never seen one of these "scheduled" fsck's produce any errors caused by a failure in the drive or the file system. Errors have been produced a small number of times but they have been caused by external influences [...] So in the end one could probably turn it off as has already been suggested and lose no sleep over it.
IIRC I've seen it due to dying HDDs, so I would add an erratum that if you turn off periodic fscks of ext, make sure to turn on periodic SMART checks.
Even if you leave periodic fscks on, still do SMART checks. The two solve fairly different problems. James
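[A sketch with smartmontools, device name hypothetical; the smartd.conf line schedules a long self-test every Sunday at 2am:]

    # one-off: enable SMART and kick off a long self-test
    smartctl -s on /dev/sda
    smartctl -t long /dev/sda
    # later, read the result
    smartctl -l selftest /dev/sda

    # recurring: a line like this in /etc/smartd.conf
    /dev/sda -a -s L/../../7/02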

I remember this. I got introduced to Linux by a study friend while at Monash in '94; it was Slackware (thank you Andrea!). I remember that we tried various other distros, and I moved camp to Redhat-based distros in '96 (3?).

BUT! I kicked the Windows (NT) based fileserver out of our company in '98 using Redhat 5.2 and Samba 1.X (can't remember exactly the version number) ... I have never ever looked back. Still using Redhat and CentOS (5.9 and 6.3) and Samba (samba3x).

Jobst

On Tue, Nov 06, 2012 at 06:09:51PM +1100, Lindsay Sprinter (zlinw@mcmedia.com.au) wrote:
On Tue, 6 Nov 2012, James Harper wrote:
On Tue, 6 Nov 2012, Andrew Worsley <amworsley@gmail.com> wrote:
I have modest root/usr partitions [...]

Finally there's nothing stopping you from just increasing the amount of time between fsck runs. If a system is running 24*7 then it probably won't need a fsck every year.
Is a routine fsck on ext* filesystems still recommended, or just done because "that's the way we've always done it"?
James
I have been using Linux now for nearly 20 years; for most of that time I have used (and still use) ext2 or ext3. During the twenty years I have never seen one of these "scheduled" fsck's produce any errors caused by a failure in the drive or the file system. Errors have been produced a small number of times (around 2 or 3 times in the 20 year period) but they have been caused by external influences, for instance a failing 12V line on the power supply. This has happened to me twice; both times the problem was high internal resistance in the 12V line's filter condensers. Such a failure will of course affect all drives no matter what the file system.
So in the end one could probably turn it off, as has already been suggested, and lose no sleep over it. I have in fact set the number of mounts between checks to values between 70 and 140; as each system is not used daily (I have three) the fsck's come around about once every 6 to 12 months. As the data stored is mostly hand-entered historical engineering data plus some music (oh, also my Debian repositories) not much storage space is used, so these fscks are not too bad.
Oh, by the way...
My 20th anniversary of using Linux will come up around June/July next year. How many people remember SLS and Yggdrasil?
Lindsay
-- If builders built buildings the way Microsoft wrote programs, then the first woodpecker that came along would destroy civilization. | |0| | Jobst Schmalenbach, jobst@barrett.com.au, General Manager | | |0| Barrett Consulting Group P/L & The Meditation Room P/L |0|0|0| +61 3 9532 7677, POBox 277, Caulfield South, 3162, Australia

On Thu, 15 Nov 2012, Jobst Schmalenbach wrote:
Still using Redhat and CentOS (5.9 and 6.3) and Samba (samba3x).
Did you try Samba 4? There is RC5 out (RC6 and the final release following this year, according to plan :-)

I am very keen to use it but am a bit shy to try because of the ongoing changes. It looks as if the end result may be a mix of "old" samba3 and "new" samba4 binaries, and it may not even be clear yet what to choose in the end.

The information about support for external LDAP is conflicting. A (probably outdated?) wiki page states that external LDAP will not be supported - while people on the technical mailing list seem to run it and contribute. I also cannot find out whether I can extend the schema of the internal server to suit other needs (e.g. a mail server).

Using FreeBSD and ZFS adds to the complexity from my side.. I may give it a try next week if I find the time. I am keen to replace the running samba3 server and establish a new samba4 server running as an AD server as well.

Regards Peter
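[For what it's worth, provisioning the AD role in the Samba 4 release candidates is a single samba-tool invocation; a sketch only, with made-up realm and domain, and using the internal LDAP/DNS path discussed above:]

    # sketch: provision a new AD domain controller (hypothetical realm)
    samba-tool domain provision --realm=EXAMPLE.ORG --domain=EXAMPLE \
        --server-role=dc --dns-backend=SAMBA_INTERNAL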

No, I have not tried. We are running a rather big project here (plus all the other ongoing stuff) so I cannot spend any time on this, even if I wanted to.

The Samba4 progress that would be of most interest to me is the Group Policy implementation and the full NTFS semantics ... that would make admin so much easier, especially when migrating to Win7.

As for the LDAP server ... there are very conflicting messages, indeed. The samba team would need to provide a means to turn the "external" LDAP port off, wouldn't they? I have an OpenLDAP server running on the same machine as my current samba3x installation ... but yes, it would be cool if one could use the LDAP backend of the Samba4 server to hook mail/external authentication to it.

J

On Thu, Nov 15, 2012 at 10:13:34AM +1100, Peter Ross (Peter.Ross@bogen.in-berlin.de) wrote:
On Thu, 15 Nov 2012, Jobst Schmalenbach wrote:
Still using Redhat and CentOS (5.9 and 6.3) and Samba (samba3x).
Did you try Samba 4?
There is RC5 out (RC6 and the final release following this year, according to plan :-)
I am very keen to use it but am a bit shy to try because of the ongoing changes. It looks as if the end result may be a mix of "old" samba3 and "new" samba4 binaries, and it may not even be clear yet what to choose in the end.
The information about support for external LDAP is conflicting. A (probably outdated?) wiki page states that external LDAP will not be supported - while people on the technical mailing list seem to run it and contribute.
I also cannot find out whether I can extend the schema of the internal server to suit other needs (e.g. a mail server).
Using FreeBSD and ZFS adds to the complexity from my side.. I may give it a try next week if I find the time.
I am keen to replace the running samba3 server and establish a new samba4 server running as an AD server as well.
Regards Peter
-- The future isn't what it used to be (it never was). | |0| | Jobst Schmalenbach, jobst@barrett.com.au, General Manager | | |0| Barrett Consulting Group P/L & The Meditation Room P/L |0|0|0| +61 3 9532 7677, POBox 277, Caulfield South, 3162, Australia

Is there any tool that will unmount a filesystem and fsck it, say once every 2-3 months, to avoid the massive slow fsck on a reboot every 6-12 months?
If not, I am thinking of a cron job that tries to umount the filesystem in the middle of the night, runs a basic fsck, and if all goes well remounts it, mailing the results to root.
I have modest root/usr partitions, but the larger partitions used only for recording TV take 30+ minutes for the occasional fsck, and that is always demanded on a reboot, sometimes very inconveniently.
I wonder if the following would be valid, assuming you are using LVM:

1. take an LVM snapshot
2. fsck the LVM snapshot
3. if the fsck of the snapshot is good, reset the mount count and/or time last checked interval of the origin fs. If the fsck was bad, do the unmount and fsck (or mark the fs as requiring an fsck next boot if the fs cannot be unmounted)
4. email the results

This would allow you to do it at any time of the day or night, even if you were recording the late late movie. Being a journaling filesystem, you shouldn't get any fsck inconsistencies introduced by the "crash consistent" snapshot process.

I do this sort of check under Windows, which has its own built-in snapshotting, so if the C: drive is currently in use (as it always is) chkdsk c: will automatically take a snapshot and do a read-only chkdsk against the snapshot. If any problems are found then a reboot will be arranged, although chkdsk errors on a Windows system typically indicate a major hardware problem, so further investigation is required.

James
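[Reduced to commands, the four steps might look like this; VG/LV names are hypothetical, it is essentially what the avoid-fsck script earlier in the thread automates, and under cron any output is mailed to root:]

    # 1. snapshot the origin LV (names hypothetical)
    lvcreate --snapshot --size 4G --name tv_check /dev/vg0/tv
    # 2. force a read-only check of the snapshot (-f force, -n no changes)
    if e2fsck -fn /dev/vg0/tv_check; then
        # 3. good: reset the origin's mount count and last-checked time
        tune2fs -C 0 -T now /dev/vg0/tv
    fi
    # 4. drop the snapshot
    lvremove -f /dev/vg0/tv_check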

Andrew Worsley wrote:
Is there any tool that will unmount a filesystem and fsck it, say once every 2-3 months, to avoid the massive slow fsck on a reboot every 6-12 months?
Doing that only shifts the costly (time-wise) fsck to a time of your choosing; running it more often achieves no benefit at all. Ext3/4 filesystems do not need defragmenting like FAT32 and friends; the fsck is only a consistency check, as incorrect shutdown procedures can cause filesystem inconsistencies.
If not, I am thinking of a cron job that tries to umount the filesystem in the middle of the night, runs a basic fsck, and if all goes well remounts it, mailing the results to root.
I have modest root/usr partitions, but the larger partitions used only for recording TV take 30+ minutes for the occasional fsck, and that is always demanded on a reboot, sometimes very inconveniently.
cheers Robert

Robert Moonen <n0b0dy@bigpond.net.au> writes:
Ext3/4 filesystems do not need defragmenting like FAT32 and friends
Wrong. Under pathological cases, ext will get fragmented quickly: specifically, if you repeatedly fill the filesystem as root, or as any user with the reserved space set to 0. After a couple of dozen times, fsck will report a fragmentation level of 30% or so.

The fix for this is in userspace: free up some space, then for each file on the filesystem, read it out and write it back. Something like (untested):

    find /foo -xdev -type f -exec sh -c 'cp "$0" "$0~" && mv "$0~" "$0"' {} \;
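[To see whether a given file actually needs this treatment, e2fsprogs ships filefrag; the file name here is hypothetical:]

    # extent count before and after the rewrite; 1-2 extents is healthy
    filefrag /srv/tv/recording.mpg
    # -v additionally lists each extent's physical location
    filefrag -v /srv/tv/recording.mpg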

Trent W. Buck <trentbuck@gmail.com> wrote:
Wrong. Under pathological cases, ext will get fragmented quickly: specifically, if you repeatedly fill the filesystem as root, or as any user with the reserved space set to 0. After a couple of dozen times, fsck will report a fragmentation level of 30% or so.
What's the final story about Btrfs fragmentation? There was controversy surrounding it a few years ago, but I don't know what the conclusions were in the end.

btrfs has background fragmentation correction as an option, apparently.

On Fri, 9 Nov 2012, Jason White <jason@jasonjgw.net> wrote:
What's the final story about Btrfs fragmentation? There was controversy surrounding it a few years ago, but I don't know what the conclusions were in the end.
The important issue is to determine what your applications are doing with files.

The most heavily loaded systems I run at the moment are mail servers. They typically have writes outnumbering reads by a factor of about 9:1. The majority of email is never read from disk; it's either read by the user and deleted while it's still in the cache, copied to an offline-IMAP system while it's still in the cache, or automatically deleted from the spam folder before the user notices that it's there.

For such use the way BTRFS defragments writes and fragments reads is a feature, not a bug!

-- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/

On Fri, 9 Nov 2012, Jason White <jason@jasonjgw.net> wrote:
What's the final story about Btrfs fragmentation? There was controversy surrounding it a few years ago, but I don't know what the conclusions were in the end.
The important issue is to determine what your applications are doing with files.
The most heavily loaded systems I run at the moment are mail servers. They typically have writes outnumbering reads by a factor of about 9:1. The majority of email is never read from disk; it's either read by the user and deleted while it's still in the cache, copied to an offline-IMAP system while it's still in the cache, or automatically deleted from the spam folder before the user notices that it's there.
For such use the way BTRFS defragments writes and fragments reads is a feature, not a bug!
I keep most of my email, and it's all highly indexed and very quickly searchable. James

James Harper <james.harper@bendigoit.com.au> wrote:
I keep most of my email, and it's all highly indexed and very quickly searchable.
I keep mine too; I maintain maildir-format folders (probably not the best option with XFS as the underlying file system, which is better for large files). Maildir folders are less amenable to corruption and avoid locking issues if accessed remotely via, for example, NFS. I eventually save the maildir folders into a compressed tar file, which is then kept for my own archival purposes.

On 09/11/12 11:28, Trent W. Buck wrote:
The fix for this is in userspace: free up some space, then for each file on the filesystem, read it out and write it back.
Or, if you are using ext4 with extents enabled, use the e4defrag program in e2fsprogs.

DESCRIPTION
    e4defrag reduces fragmentation of extent based files. The file targeted by e4defrag is created on an ext4 filesystem made with the "-O extent" option (see mke2fs(8)). The targeted file gets more contiguous blocks and improves the file access speed.

    target is a regular file, a directory, or a device that is mounted as an ext4 filesystem. If target is a directory, e4defrag reduces fragmentation of all files in it. If target is a device, e4defrag gets the mount point of it and reduces fragmentation of all files in this mount point.

cheers, Chris
-- Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
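[Typical invocations per that man page, with a hypothetical mount point; -c only reports a fragmentation score without changing anything:]

    # report the current fragmentation score, make no changes
    e4defrag -c /srv/tv
    # actually defragment every file under the mount point
    e4defrag /srv/tv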
participants (14): Andrew Worsley, Chris Samuel, Craig Sanders, Erik Christiansen, Hiddensoul (Mark Clohesy), James Harper, Jason White, Jobst Schmalenbach, Lindsay Sprinter, Peter Ross, Rick Moen, Robert Moonen, Russell Coker, trentbuck@gmail.com