
On 20/02/2019 4:00 PM, Andrew Greig via luv-main wrote:
On 16/2/19 2:44 pm, Craig Sanders via luv-main wrote:
On Sat, Feb 16, 2019 at 01:02:44PM +1100, Andrew Greig wrote:
I have had some disks "ready to go" for a couple of months, meaning all that was required was to plug the SATA cables into the MB. I plugged them in today and booted the machine, except that it did not boot up. Ubuntu 18.04: it stopped at the Ubuntu burgundy screen, then went black and got nowhere from that state.
I shut it down and removed the 2 SATA cables from the MB and booted up - successfully.
It is apparent that I lack understanding, hoping for enlightenment.

Is your /etc/fstab configured to mount the root fs (and any other filesystems) by device node (e.g. /dev/sda1), or by UUID or LABEL?
If you're using device node names, then you've run into the well-known fact that linux does not guarantee that device names will remain the same across reboots. This is why you should always either use the filesystems' UUIDs or create labels on the filesystems and use those.
The device node may change because the hardware has changed - e.g. you've added or removed drive(s) from the system (this is likely to be the case for your system). They may also change because the load order of driver modules has changed, or because of timing issues in exactly when a particular drive is detected by linux. They may also change after a kernel upgrade. Or they may change for no reason at all. They are explicitly not guaranteed to be consistent across reboots.
For over a decade now, the advice from linux kernel devs and pretty much everyone else has been:
DEVICE NODES CAN AND WILL CHANGE WITHOUT WARNING. NEVER USE THE DEVICE NODE IN /etc/fstab. ALWAYS USE UUID OR LABEL.
BTW, if you want to read up on what a UUID is, start here:
https://en.wikipedia.org/wiki/Universally_unique_identifier
Note: it's not uncommon for device node names to remain the same for months or years, even with drives being added to or removed from the system. That's nice, but it doesn't matter - think of it as a happy coincidence, certainly not as something that can be relied upon.
To fix, you'll need to boot a "Live" CD or USB stick (the gparted and clonezilla ISOs make good rescue systems), mount your system's root fs somewhere (e.g. as "/target"), and edit "/target/etc/fstab" so that it refers to all filesystems and swap partitions by UUID or LABEL.
If you don't have a live CD (and can't get one because you can't boot your system), you should be able to do the same from the initrd bash shell, or by adding "init=/bin/bash" to the kernel command line from the grub menu. You'd need to run "mount -o rw,remount /" to remount the root fs as RW before you can edit /etc/fstab. Any method which gets you your system's root fs mounted RW will work.
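Putting those steps together, here is a minimal sketch of such a rescue session. The device name /dev/sdX3 is a placeholder for whatever blkid/lsblk shows your root partition to be, and nano is just an example editor - use whatever is available:

# From a live CD/USB shell, as root:
blkid                          # identify your root partition and note its UUID
mkdir -p /target
mount /dev/sdX3 /target        # placeholder: your root partition
nano /target/etc/fstab         # replace device nodes with UUID= or LABEL= entries
sync
umount /target

# Or, from an "init=/bin/bash" shell on the system itself:
mount -o rw,remount /
nano /etc/fstab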
To find the UUID or LABEL for a filesystem, run "blkid". It will produce output like this:
# blkid
/dev/sde1: LABEL="i_boot" UUID="69b22c56-2f10-45e8-ad0e-46a7c7dd1b43" TYPE="ext4" PARTUUID="1dbd3d85-01"
/dev/sde2: LABEL="i_swap" UUID="a765866d-3444-48a1-a598-b8875d508c7d" TYPE="swap" PARTUUID="1dbd3d85-02"
/dev/sde3: LABEL="i_root" UUID="198c2087-85bb-439c-9d97-012a87b95f0c" TYPE="ext4" PARTUUID="1dbd3d85-03"
If blkid isn't available, try 'lsblk -f'. Both blkid and lsblk will be on a system rescue disk, but may not be available from an initrd shell. If udev has already run, you can find symlinks linking the UUID to the device name in /dev/disk/by-uuid.
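For instance, on the same example system as the blkid output above, those udev symlinks would look something like this (listing format illustrative only; the dates and link targets will differ on your machine):

# ls -l /dev/disk/by-uuid
lrwxrwxrwx 1 root root 10 Feb 16 13:02 198c2087-85bb-439c-9d97-012a87b95f0c -> ../../sde3
lrwxrwxrwx 1 root root 10 Feb 16 13:02 69b22c56-2f10-45e8-ad0e-46a7c7dd1b43 -> ../../sde1
lrwxrwxrwx 1 root root 10 Feb 16 13:02 a765866d-3444-48a1-a598-b8875d508c7d -> ../../sde2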
NOTE: UUIDs will *always* exist for a filesystem; they are created automatically when the fs is created. Labels will only exist if you've created them (the exact method varies according to the filesystem - e.g. for ext4, by using the "-L" option when you create a fs with mkfs.ext4, or by using "tune2fs" any time after the fs has been created).
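For example, with ext4 (the label "i_data" and device /dev/sdX1 here are just placeholders):

mkfs.ext4 -L i_data /dev/sdX1    # set a label when creating the filesystem (destroys any existing data)
tune2fs -L i_data /dev/sdX1      # or add/change a label on an existing ext4 filesystem
e2label /dev/sdX1                # print the current label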
Using the above as an example, if your fstab wanted to mount /dev/sde3 as /, change /dev/sde3 to UUID=198c2087-85bb-439c-9d97-012a87b95f0c - e.g.
UUID=198c2087-85bb-439c-9d97-012a87b95f0c / ext4 defaults,relatime,nodiratime 0 1
alternatively, if you've created labels for the filesystems, you could use something like:
LABEL=i_root / ext4 defaults,relatime,nodiratime 0 1
Do this for **ALL** filesystems and swap devices listed in /etc/fstab.
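If you'd rather script the substitution than edit by hand, something like the following works per entry. This is a rough sketch only, using the example root partition and UUID from the blkid output above, and assuming the filesystem is mounted at /target; keep a backup and check the result before rebooting:

cp /target/etc/fstab /target/etc/fstab.bak
sed -i 's|^/dev/sde3[[:space:]]|UUID=198c2087-85bb-439c-9d97-012a87b95f0c |' /target/etc/fstab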
Save the edited fstab, run "sync", and then unmount the filesystem. You should then be able to boot into your system.
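Before unmounting, a quick sanity check of the edited fstab doesn't hurt. For instance (findmnt is part of util-linux on any recent Ubuntu, though it may be missing from a minimal initrd):

grep -v '^#' /target/etc/fstab                   # eyeball the edited entries
findmnt --verify --tab-file /target/etc/fstab    # check that each UUID/LABEL actually resolves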
craig
--
craig sanders <cas@taz.net.au>
Hi Craig,
I have been on this list since around 2004 but mainly as a user of Linux, not a sys admin of Linux.
My understanding is limited because I have found little which needed solving, until now.
I don't understand why /etc/fstab contains so little info - is this an Ubuntu feature?
As for editing the fstab to include the UUIDs of the disks: having read the info on UUIDs at the link you provided, I believe that these are meant to be created on my machine. If so, at what stage and with what process?
I was not particularly convinced of the gparted outcome. I have 14GB of RAW files on my drive, and 380GB of photo image files in the cloud. So really I need to back up around 500GB of data to an external HDD, and then I could do a re-install of everything.
Also this entry in dmesg has me wondering if I might get better performance if I changed the setting in the BIOS:
[ 63.344355] EDAC amd64: Node 0: DRAM ECC disabled.
[ 63.344357] EDAC amd64: ECC disabled in the BIOS or no ECC capability, module will not load.
             Either enable ECC checking or force module loading by setting 'ecc_enable_override'.
             (Note that use of the override may cause unknown side effects.)
At the moment the machine is very slow to get up to speed, currently running 8GB of RAM.
Thanks
Andrew
Hi Andrew.

The ECC warnings just mean that either your motherboard doesn't support ECC error correcting RAM, or that you don't have ECC RAM installed. Typically you will only find ECC support on server motherboards, not consumer-level motherboards.

With systemd, lots of things get started up in parallel, so you can find yourself at the desktop with a number of background services and programs still going through the motions of startup. Once they have done so, you may perceive a performance improvement, as things are now cached in memory, or have been swapped out of the way having done their tasks.

Now back to the drives issue. Think of /etc/fstab as a list of filesystems that are mounted every time you fire up your system. When you connect a drive via USB adapter or the like, the drive will be dynamically mounted, but not be mentioned in /etc/fstab. As you have not yet formatted the new drives, there are no partitions to add to /etc/fstab yet. So at this stage, all you have in /etc/fstab are the drives you initially partitioned when you installed Ubuntu. By contrast, /etc/mtab holds the list of currently mounted devices.

As Craig already pointed out, the command blkid will produce a list giving the UUID, LABEL (if it exists), partition type etc. Here's the blkid output of a Kubuntu machine I have:

/dev/sda1: UUID="cb55d4b4-1c43-443d-ac17-612869e6350a" TYPE="ext4" PARTUUID="50e62e81-dd61-4808-916c-66d794f5b5c2"
/dev/sda2: UUID="d1da8a46-5a43-4e70-bb36-dacb87afed41" TYPE="ext4" PARTUUID="41c57251-30be-4afc-bdec-89e2001fc026"
/dev/sda3: UUID="efda79a0-4991-4531-913b-75715aecb98c" TYPE="swap" PARTUUID="4a235714-2ef9-4b13-9eb2-b7d58bac1613"
/dev/sdb1: UUID="70D2-7F29" TYPE="vfat" PARTUUID="a6e905c0-7d01-4ef1-84c6-d0283475ab09"
/dev/sdb2: UUID="292fad11-cea5-40c3-ae75-b69f06f6b089" TYPE="ext4" PARTUUID="27bdf27e-b0c6-4259-bb7d-55c8a28fb886"
/dev/sdb3: UUID="dddeae68-dfdd-4e5f-9498-f4113beb43e4" TYPE="ext4" PARTUUID="909a3697-32d9-4921-b6b6-c8a7c6831abc"

And here's the fstab file contents:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system>                            <mount point>  <type>  <options>          <dump>  <pass>
# / was on /dev/sdb3 during installation
UUID=dddeae68-dfdd-4e5f-9498-f4113beb43e4  /              ext4    errors=remount-ro  0       1
# /boot was on /dev/sdb2 during installation
UUID=292fad11-cea5-40c3-ae75-b69f06f6b089  /boot          ext4    defaults           0       2
# /boot/efi was on /dev/sdb1 during installation
UUID=70D2-7F29                             /boot/efi      vfat    umask=0077         0       1
# /home was on /dev/sda2 during installation
UUID=d1da8a46-5a43-4e70-bb36-dacb87afed41  /home          ext4    defaults           0       2
# /var was on /dev/sda1 during installation
UUID=cb55d4b4-1c43-443d-ac17-612869e6350a  /var           ext4    defaults           0       2
# swap was on /dev/sda3 during installation
UUID=efda79a0-4991-4531-913b-75715aecb98c  none           swap    sw                 0       0

You can match each UUID in /etc/fstab against the corresponding entry in the blkid output. (As the PC in question uses UEFI, there is a small vfat partition reserved for the UEFI data.)

The device entries /dev/sda, /dev/sdb etc. are set in the order that the kernel discovers the devices, either from the motherboard BIOS discovery order, or as a result of the order various kernel modules are loaded. By using the UUID identifiers instead, what matters is the UUID stored on each filesystem, not the SATA slot into which the drive is plugged.
So using the UUID form, I could happily remove the drives from one PC, plug them into an entirely different PC in a random order, and /etc/fstab will still match the correct drives and partitions with the correct mount points. Using the /dev/sda format comes with no such guarantee:

/dev/sda   is the first disk found.
/dev/sda1  is the first partition of the first disk found. (I want this one mounted as /)
/dev/sdb   is the second disk found.
/dev/sdb1  is the first partition of the second disk found. (And this one mounted as /home)

And the list goes on. But swap the SATA leads and suddenly you will have:

/dev/sda   is the first disk found.
/dev/sda1  is the first partition of the first disk found. (The /home partition is here.)
/dev/sdb   is the second disk found.
/dev/sdb1  is the first partition of the second disk found. (The / partition is here.)

Now /etc/fstab, being on what is now /dev/sdb1, will not be found on /dev/sda1 where it is expected to be. Interesting times will ensue.

Using the UUID format, the boot loader looks for the device containing the UUID for the root partition and mounts it as /. Then it does the same for the /home partition, and so on.

When creating a partition using parted or another utility, you will have the option to set a LABEL as well. The LABEL can be used in place of a UUID, but you can't guarantee that fitting a second-hand drive with a pre-existing partition table won't leave you looking at two drives with identical LABEL entries. UUID strings are typically system generated, and name clashes, while possible, are fairly rare. That's why the UUID method is the preferred option these days.

Using any of the graphical front ends to parted will usually allow you to define the mount point you wish to use, and to mark the drive to mount automatically at boot. This has the side effect of plugging the information into /etc/fstab on your behalf.

Hope this helps.

Regards,
Morrie.
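To complement Morrie's last point, here is a minimal command-line sketch of what those graphical tools do for a brand-new, blank drive: partition it, create a filesystem, then add an /etc/fstab entry by UUID. The device name /dev/sdX, the label "photos" and the mount point /data/photos are placeholders, not anything from the thread; substitute your own, and copy the real UUID from the blkid step.

# Run as root. Triple-check the device name with lsblk first - mkfs destroys any existing data.
parted /dev/sdX mklabel gpt
parted -a optimal /dev/sdX mkpart primary ext4 0% 100%
mkfs.ext4 -L photos /dev/sdX1        # "photos" is just an example label
blkid /dev/sdX1                      # note the UUID this prints
mkdir -p /data/photos
echo 'UUID=<uuid-from-blkid> /data/photos ext4 defaults 0 2' >> /etc/fstab
mount -a                             # mounts everything in fstab; fix any errors before rebooting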