
Hello Andrew, Perhaps configure them in /etc/fstab? ben -- bnis@fastmail.fm On Sat, Feb 16, 2019, at 1:02 PM, Andrew Greig via luv-main wrote:
Hi All,
I have had some disks "ready to go" for a couple of months, meaning all that was required was to plug the SATA cables into the MB. I plugged them in today and booted the machine, except that it did not boot up. Ubuntu 18.04, it stopped at the Ubuntu burgundy screen and then went black and nowhere from that state.
I shut it down and removed the 2 SATA cables from the MB and booted up - successfully.
It is apparent that I lack understanding, hoping for enlightenment
Gratefully
Andrew Greig

Hi Andrew.

Craig beat me to it, and in far better detail than I was typing.

It's also going to be simpler to make the changes to UUID or LABEL format with the two additional drives unplugged, as that will give you your full-fat operating system and its tools to make the changes.

The one extra step you might need to add to the end of Craig's list would be to force a rebuild of your bootloader configuration (probably Grub) so that the fstab UUID / LABEL changes get propagated into grub's config files.

Regards,
Morrie.

On 16/02/2019 1:15 pm, Andrew Greig via luv-main wrote:
On 16/2/19 1:09 pm, Ben Nisenbaum via luv-main wrote:
Hello Andrew,
Perhaps configure them in /etc/fstab?
ben
Hi Ben,
I am not sure that the /etc/fstab file is even being addressed. Normally when I boot, an early event is the recognition of my SCSI scanner, but the process is not getting that far.
What if I made a "hot" connection (similar to plugging in a USB drive or an SD card)? Would the hardware detection pick them up and write to the fstab? What if I connected them one at a time?
Thanks for your quick reply.
Andrew

On Sat, Feb 16, 2019 at 03:00:08PM +1100, Morrie Wyatt wrote:
The one extra step you might need to add to the end of Craig's list would be to force a rebuild of your bootloader configuration (probably Grub) so that the fstab UUID / LABEL changes get propagated into grub's config files.
It certainly can't hurt to do that but it shouldn't be necessary. Grub uses UUIDs by default unless you tell it not to. There's a commented out option in /etc/default/grub on debian/ubuntu systems to disable grub's use of UUID:

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

There's almost no reason why anyone would need to uncomment that.

It might, however, be worthwhile running 'grub-install' on ALL of the drives *after* the system has successfully booted with the new drives installed. That way the grub first stage boot loader will be available no matter which drive the BIOS tries to boot from (this is assuming that it's an old-style BIOS boot, rather than UEFI. UEFI is different, it loads grub directly from a smallish FAT-32 EFI partition).

craig

--
craig sanders <cas@taz.net.au>
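(A minimal sketch of the grub-install step described above, assuming an old-style BIOS boot and that the installed drives are /dev/sda and /dev/sdb; substitute your own device names.)

    sudo grub-install /dev/sda   # put the first-stage boot loader on each drive
    sudo grub-install /dev/sdb
    sudo update-grub             # regenerate /boot/grub/grub.cfg from /etc/default/grub and the fstab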

On 16/2/19 1:02 pm, Andrew Greig via luv-main wrote:
Hi All,
I have had some disks "ready to go" for a couple of months, meaning all that was required was to plug the SATA cables into the MB. I plugged them in today and booted the machine, except that it did not boot up. Ubuntu 18.04, it stopped at the Ubuntu burgundy screen and then went black and nowhere from that state.
I shut it down and removed the 2 SATA cables from the MB and booted up - successfully.
Hi Andrew,

It may be as simple as a change to the boot order. From the BIOS, set the 1st boot disk to the disk you have been booting from before you added the extra disks.

Cheers
Nic
It is apparent that I lack understanding, hoping for enlightenment
Gratefully
Andrew Greig

On Sat, Feb 16, 2019 at 01:02:44PM +1100, Andrew Greig wrote:
I have had some disks "ready to go" for a couple of months, meaning all that was required was to plug the SATA cables into the MB. I plugged them in today and booted the machine, except that it did not boot up. Ubuntu 18.04, it stopped at the Ubuntu burgundy screen and then went black and nowhere from that state.
I shut it down and removed the 2 SATA cables from the MB and booted up - successfully.
It is apparent that I lack understanding, hoping for enlightenment
Is your /etc/fstab configured to mount the root fs (and any other filesystems) by device node (e.g. /dev/sda1), or by the UUID or LABEL?

If you're using device node names, then you've run into the well-known fact that linux does not guarantee that device names will remain the same across reboots. This is why you should always either use the filesystems' UUIDs or create labels on the filesystems and use those.

The device node may change because the hardware has changed - e.g. you've added or removed drive(s) from the system (this is likely to be the case for your system). They may also change because the load order of driver modules has changed, or because of timing issues in exactly when a particular drive is detected by linux. They may also change after a kernel upgrade. Or they may change for no reason at all. They are explicitly not guaranteed to be consistent across reboots.

For over a decade now, the advice from linux kernel devs and pretty much everyone else has been:

DEVICE NODES CAN AND WILL CHANGE WITHOUT WARNING. NEVER USE THE DEVICE NODE IN /etc/fstab. ALWAYS USE UUID OR LABEL.

BTW, if you want to read up on what a UUID is, start here:

https://en.wikipedia.org/wiki/Universally_unique_identifier

Note: it's not uncommon for device node names to remain the same for months or years, even with drives being added to or removed from the system. That's nice, but it doesn't matter - think of it as a happy coincidence, certainly not as something that can be relied upon.

To fix, you'll need to boot a "Live" CD or USB stick (the gparted and clonezilla ISOs make good rescue systems), mount your system's root fs somewhere (e.g. as "/target"), and edit "/target/etc/fstab" so that it refers to all filesystems and swap partitions by UUID or LABEL.

If you don't have a live CD (and can't get one because you can't boot your system), you should be able to do the same from the initrd bash shell, or by adding "init=/bin/bash" to the kernel command line from the grub menu. You'd need to run "mount -o rw,remount /" to remount the root fs as RW before you can edit /etc/fstab. Any method which gets you your system's root fs mounted RW will work.

To find the UUID or LABEL for a filesystem, run "blkid". It will produce output like this:

# blkid
/dev/sde1: LABEL="i_boot" UUID="69b22c56-2f10-45e8-ad0e-46a7c7dd1b43" TYPE="ext4" PARTUUID="1dbd3d85-01"
/dev/sde2: LABEL="i_swap" UUID="a765866d-3444-48a1-a598-b8875d508c7d" TYPE="swap" PARTUUID="1dbd3d85-02"
/dev/sde3: LABEL="i_root" UUID="198c2087-85bb-439c-9d97-012a87b95f0c" TYPE="ext4" PARTUUID="1dbd3d85-03"

If blkid isn't available, try 'lsblk -f'. Both blkid and lsblk will be on a system rescue disk, but may not be available from an initrd shell. If udev has already run, you can find symlinks linking the UUID to the device name in /dev/disk/by-uuid.

NOTE: UUIDs will *always* exist for a filesystem, they are created automatically when the fs is created. Labels will only exist if you've created them (the exact method varies according to the filesystem - e.g. for ext4, by using the "-L" option when you create a fs with mkfs.ext4, or by using "tune2fs" any time after the fs has been created).

Using the above as an example, if your fstab wanted to mount /dev/sde3 as /, change /dev/sde3 to UUID=198c2087-85bb-439c-9d97-012a87b95f0c - e.g.
UUID=198c2087-85bb-439c-9d97-012a87b95f0c  /  ext4  defaults,relatime,nodiratime  0  1

alternatively, if you've created labels for the filesystems, you could use something like:

LABEL=i_root  /  ext4  defaults,relatime,nodiratime  0  1

Do this for **ALL** filesystems and swap devices listed in /etc/fstab.

Save the edited fstab, run "sync", and then unmount the filesystem. You should then be able to boot into your system.

craig

--
craig sanders <cas@taz.net.au>
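(A minimal sketch of the live-CD repair sequence described above, using the /dev/sde3 from the example output; substitute your own device and UUID, and note that a root fs on LVM would use a /dev/mapper/... path instead.)

    # from a live CD / rescue shell, as root
    mkdir -p /target
    mount /dev/sde3 /target        # mount the system's root fs
    nano /target/etc/fstab         # (or vi) replace device nodes with UUID= or LABEL= entries
    sync
    umount /target
    reboot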

Firstly, can you please configure your thunderbird mail client to NOT send HTML mail? Or at least send both HTML and plain text? HTML mail really screws up the quoting, making it very hard to tell what's quoted and what's new.

Also, don't bottom-post. Bottom posting is evil. And please trim your quotes to the bare minimum required to provide context for your response - no-one wants to read the same quoted messages over and over again just because you couldn't be bothered editing your messages properly. It tells the reader "I don't care about wasting YOUR time, as long as I save myself a few precious seconds".

On Sun, Feb 17, 2019 at 02:08:13AM +1100, Andrew Greig via luv-main wrote:
This my /etc/fstab
andrew@andrew-desktop:~$ sudo cat /etc/fstab
You don't need sudo to read /etc/fstab, only to edit it. It's RW by root, RO by everyone else.
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system>                 <mount point>  <type>  <options>          <dump>  <pass>
/dev/mapper/ubuntu--vg-root     /              ext4    errors=remount-ro  0       1
/dev/mapper/ubuntu--vg-swap_1   none           swap    sw                 0       0
andrew@andrew-desktop:~$ blkid
/dev/sda1: UUID="sI0LJX-JSme-W2Yt-rFiZ-bQcV-lwFN-tSetH5" TYPE="LVM2_member" PARTUUID="92e664e1-01"
/dev/mapper/ubuntu--vg-root: UUID="b0738928-9c7a-4127-9f79-99f61a77f515" TYPE="ext4"
If you're running LVM then you don't need to (and shouldn't, see below) use UUIDs to mount your filesystem. The device mapper entries provide the same kind of consistency and uniqueness as a LABEL. You shouldn't use UUIDs when mounting LVM volumes because any snapshots of that fs will have the same UUID unless you change the snapshot's UUID with something like 'tune2fs -U random' (ext4) or 'xfs_admin -U generate' (xfs).
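(A minimal sketch of the snapshot case mentioned above, assuming an ext4 logical volume named "root" in the "ubuntu-vg" volume group; the snapshot name and size are made up for illustration.)

    # take a snapshot of the root LV, then give the snapshot's ext4 fs a new UUID
    sudo lvcreate -s -n root-snap -L 5G /dev/ubuntu-vg/root
    sudo tune2fs -U random /dev/ubuntu-vg/root-snap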
after hot plugging the two drives (I chose to try this to see if they would be picked up and configured in the same way as a USB key is detected). It seems that sdb and sdc have been detected.
dmesg gives this:
[  279.911371] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[  279.912343] ata5.00: ATA-9: ST2000DM006-2DM164, CC26, max UDMA/133
[  279.912349] ata5.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
[  279.913799] scsi 4:0:0:0: Direct-Access     ATA      ST2000DM006-2DM1 CC26 PQ: 0 ANSI: 5
...
[  331.750805] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[  331.751777] ata4.00: ATA-9: ST2000DM006-2DM164, CC26, max UDMA/133
[  331.751784] ata4.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
[  331.753212] scsi 3:0:0:0: Direct-Access     ATA      ST2000DM006-2DM1 CC26 PQ: 0 ANSI: 5
Since the drives have not been partitioned or formatted, should I just download the latest Ubuntu and install as a server, with the two drives set up in a RAID config?
Or could I just run gparted and partition and format those disks alone?
I don't see any reason why you'd want to re-install the OS just to add some drives.

how you partition and format them depends on what you want to do with them. your two main options are to:

1. Add them as new physical volumes to your existing LVM volume group. This would allow you to expand any existing filesystems and/or create new logical volumes to format and mount (e.g. you could create a new lv, format it with xfs or ext4, and mount it as /media to store video & music files)

2. Partition, format, and mount as completely separate filesystem(s). e.g. if you just want somewhere to store video or music files. This could be done using any filesystem, with or without RAID (either via mdadm, or by creating a new LVM volume group, or even with btrfs or zfs)

I'd guess that the only reason you're using LVM is because that was the default option when you first installed Ubuntu. It doesn't seem like you're familiar enough with it to have chosen it deliberately.

IMO unless you know LVM well, you're generally better off with btrfs - like ZFS, it's a filesystem that has the features of software RAID and volume-management built in, and is much easier to use than dealing with mdadm + lvm2 + filesystem utilities separately.

BTW, you may be tempted to use some variant of RAID-0 (linear append or striped) to combine the two 3TB drives into one 6TB filesystem. Don't do that unless you're willing to risk that a single drive failure will lose *everything* stored on that 6TB. RAID-0 is NOT safe to use for any data of any importance. The only reason to use it is if you need a large amount of fast storage for temporary files....and an SSD will be much faster than that anyway.

(NOTE: data stored on striped raid-0 is effectively unrecoverable in case of a single drive failure. With linear append, recovery of most of the data stored on the non-failed drive is a PITA but possible. Striped gives a performance boost as reads and writes are spread across both drives so is roughly twice as fast as a single drive. Linear does not, as data is written first to one drive and then onto the second drive when the first fills up)

So, either use RAID-1 (giving you a total of 3TB of usable space, with everything mirrored on both drives for redundancy/safety) or two separate filesystems of 3TB each.
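(A minimal sketch of the btrfs RAID-1 option described above, assuming the two new drives really are /dev/sdb and /dev/sdc and hold nothing you want to keep; the label and mount point are only examples.)

    # mirrored btrfs filesystem across both new drives (-f overwrites any old signatures)
    sudo mkfs.btrfs -f -L data -d raid1 -m raid1 /dev/sdb /dev/sdc
    sudo mkdir -p /data
    sudo mount /dev/sdb /data                       # mounting either member device mounts the array
    echo 'LABEL=data  /data  btrfs  defaults  0  0' | sudo tee -a /etc/fstab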
I am puzzled by the almost empty fstab - when I was running OpenSuse the fstab was quite large.
It's not something to worry about. The size of /etc/fstab depends on how many filesystems and swap-devices need to be auto-mounted at boot.

Your previous suse system was probably partitioned to have separate filesystems for /, /home, /usr, /var, /tmp and/or other common mountpoints. This was common practice back when drives were small (filesystems were often actually on separate drives, not just partitions), but is uncommon and not recommended these days. The hassles involved in having multiple small partitions (largely the risk of running out of space on one partition while still having plenty free on other partitions) tend to greatly outweigh the minor benefits.

craig

--
craig sanders <cas@taz.net.au>

On Wed, Feb 20, 2019 at 10:25:13PM +1100, Andrew Greig wrote:
I apologise for my carelessness. In the days when I needed frequent help (2000 - 2007) bottom posting was preferred, and so I defaulted to that position. It was not laziness, just a lack of awareness that I included too much of the thread. Most of my early requests for assistance were fixed within one or two posts.
Bottom posting has NEVER been preferred. it has always been reviled, especially in tech forums. Top posting is worse in some ways (in that it screws up the chronological order of quotes), but at least the reader doesn't have to scroll past hundreds of lines of repeated text. Edited quotes with interleaved replies is the only good way to do quoting.
That this has dragged on so long is a frustration for me. I made a mistake when I first loaded Ubuntu in that I did not have the other two drives available; then, by installing a user system instead of a server system, I precluded setting up the two drives in RAID or btrfs.
I'm sure that your current situation is fixable, but it requires a fair bit of knowledge and experience about drives and partitions and filesystems, and the boot process. It also requires a detailed log of the boot process (which, as I mentioned in my last message, is hidden by the useless ubuntu boot logo. because branding is more important than technical info).
And this is what has led me to believe that my easy way out of this is to do a clean install with all my drives connected and choose "server" and hopefully the bouncing ball will get me to a cheerful conclusion.
That may be the easiest solution. remember to back up your data first :)

Also, as I said in my last message:

1. upgrade your RAM. 16GB minimum if you're running gimp and darktable and a browser and who knows what else.

2. consider getting an SSD (or a pair of them in RAID-1) for the boot/OS drive. The cheapest SSDs start at around $30 for 128GB these days and will be MUCH faster than any mechanical drive. 128GB is enough for the kernel and the root fs, use the new 3TB drives as /home.

A 256GB Crucial MX500 is about $75, with performance of about 560 MB/s read and 510 MB/s write (approx 4 or 5 times faster than any mechanical drive). Having two drives in RAID-1 not only adds redundancy to the storage, it will generally double the read speed (but not the write). IMO if money is tight, having two 128GB drives in RAID-1 is better than one 256GB drive...but note that most 128GB SSDs are older, last-gen technology.

If your motherboard has NVME slots, then it's worth paying the extra $45 for something like the 250GB Samsung 970 EVO PLUS (~ $120) - around 3500 MB/s read and 3300 MB/s write. About six times faster than a SATA3 SSD. I still recommend buying a pair so you can have RAID-1, which doubles the price to $240 (but note that the 500GB model is $169, or $338 for a pair, so is much better value for money).

BTW, if your m/b doesn't have nvme slots you can get PCI-e cards that have 1, 2, or 4 nvme slots on them....but it's worth doing research before buying because some can boot off the nvme and some can't. here's something not too old as a starting point:

https://forums.anandtech.com/threads/what-pcie-add-in-cards-can-boot-a-nvme-...
So I will read your response in the morning when I am fresh, and I am grateful for your continued assistance. I thought that by using a RAID system or btrfs I might have some security for my data, but maybe I should just use the now substantial amount of storage I have and just buy more cloud space when I need it.
RAID (in any form, including mdadm, lvm, btrfs or zfs) is good. It greatly reduces the number of times you NEED to restore from backup (and filesystem snapshots as provided by btrfs and zfs do too)....but remember that backups will still always be necessary. RAID IS NOT A SUBSTITUTE FOR BACKUP.

(also, RAID-0 is not really RAID and provides NO redundancy. It actually increases your risk of catastrophic data loss).

craig

--
craig sanders <cas@taz.net.au>

Hi Craig,

I tried to follow the UUID process and I think it worked OK.

andrew@andrew-desktop:~$ blkid /dev/sdb1: UUID=
andrew@andrew-desktop:~$ blkid
/dev/sda1: UUID="sI0LJX-JSme-W2Yt-rFiZ-bQcV-lwFN-tSetH5" TYPE="LVM2_member" PARTUUID="92e664e1-01"
/dev/mapper/ubuntu--vg-root: UUID="b0738928-9c7a-4127-9f79-99f61a77f515" TYPE="ext4"
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
/dev/loop4: TYPE="squashfs"
/dev/loop5: TYPE="squashfs"
/dev/loop6: TYPE="squashfs"
/dev/loop7: TYPE="squashfs"
/dev/loop8: TYPE="squashfs"
/dev/loop9: TYPE="squashfs"
/dev/loop10: TYPE="squashfs"
/dev/loop11: TYPE="squashfs"
/dev/loop12: TYPE="squashfs"
/dev/loop13: TYPE="squashfs"
/dev/loop14: TYPE="squashfs"
/dev/loop15: TYPE="squashfs"
/dev/loop16: TYPE="squashfs"
/dev/loop17: TYPE="squashfs"
/dev/loop18: TYPE="squashfs"
/dev/loop19: TYPE="squashfs"
/dev/loop20: TYPE="squashfs"
/dev/loop21: TYPE="squashfs"
/dev/loop22: TYPE="squashfs"
/dev/loop23: TYPE="squashfs"
/dev/loop24: TYPE="squashfs"
/dev/loop25: TYPE="squashfs"
/dev/loop26: TYPE="squashfs"
/dev/loop27: TYPE="squashfs"
/dev/loop28: TYPE="squashfs"
/dev/loop29: TYPE="squashfs"
/dev/loop30: TYPE="squashfs"
/dev/loop31: TYPE="squashfs"
/dev/loop32: TYPE="squashfs"
/dev/loop33: TYPE="squashfs"
/dev/loop34: TYPE="squashfs"
/dev/loop35: TYPE="squashfs"
/dev/mapper/ubuntu--vg-swap_1: UUID="2f34e0cb-eb8f-498a-ada4-7e786b7b9f2b" TYPE="swap"
/dev/sdb1: UUID="9HV3H6-JIYu-IdaS-2CGr-lkZQ-9xcB-RVu9Ks" TYPE="LVM2_member" PARTUUID="c3e8f29f-01"
/dev/sdc1: UUID="mqbYsB-xpm2-7c11-RLN5-q47a-A0bB-wcefad" TYPE="LVM2_member" PARTUUID="7325946b-01"
/dev/sdd1: LABEL="EOS_DIGITAL" UUID="130D-103C" TYPE="vfat"
andrew@andrew-desktop:~$

I haven't deliberately formatted the new disks yet, so can I choose btrfs for the two new 2Tb disks? And what will I use to format them, and having done that will the fstab get written to automatically?

Thanks
Andrew

On 20/2/19 10:05 pm, Craig Sanders via luv-main wrote:
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system>                 <mount point>  <type>  <options>          <dump>  <pass>
/dev/mapper/ubuntu--vg-root     /              ext4    errors=remount-ro  0       1
/dev/mapper/ubuntu--vg-swap_1   none           swap    sw                 0       0

On 20/02/2019 4:00 PM, Andrew Greig via luv-main wrote:
On 16/2/19 2:44 pm, Craig Sanders via luv-main wrote:
On Sat, Feb 16, 2019 at 01:02:44PM +1100, Andrew Greig wrote:
I have had some disks "ready to go" for a couple of months, meaning all that was required was to plug the SATA cables into the MB. I plugged them in today and booted the machine, except that it did not boot up. Ubuntu 18.04, it stopped at the Ubuntu burgundy screen and then went black and nowhere from that state.
I shut it down and removed the 2 SATA cables from the MB and booted up - successfully.
It is apparent that I lack understanding, hoping for enlightenment
Is your /etc/fstab configured to mount the root fs (and any other filesystems) by device node (e.g. /dev/sda1), or by the UUID or LABEL?
If you're using device node names, then you've run into the well-known fact that linux does not guarantee that device names will remain the same across reboots. This is why you should always either use the filesystems' UUIDs or create labels on the filesystems and use those.
The device node may change because the hardware has changed - e.g. you've added or removed drive(s) from the systems (this is likely to be the case for your system). They may also change because the load order of driver modules has changed, or because of timing issues in exactly when a particular drive is detected by linux. They may also change after a kernel upgrade. Or they may change for no reason at all. They are explicitly not guaranteed to be consistent across reboots.
For over a decade now, the advice from linux kernel devs and pretty much everyone else has been:
DEVICE NODES CAN AND WILL CHANGE WITHOUT WARNING. NEVER USE THE DEVICE NODE IN /etc/fstab. ALWAYS USE UUID OR LABEL.
BTW, if you want to read up on what a UUID is, start here:
https://en.wikipedia.org/wiki/Universally_unique_identifier
Note: it's not uncommon for device node names to remain the same for months or years, even with drives being added to or removed from the system. That's nice, but it doesn't matter - think of it as a happy coincidence, certainly not as something that can be relied upon.
To fix, you'll need to boot a "Live" CD or USB stick (the gparted and clonezilla ISOs make good rescue systems), mount your system's root fs somewhere (e.g. as "/target"), and edit "/target/etc/fstab" so that it refers to all filesystems and swap partitions by UUID or LABEL.
If you don't have a live CD (and can't get one because you can't boot your system), you should be able to do the same from the initrd bash shell, or by adding "init=/bin/bash" to the kernel command line from the grub menu. You'd need to run "mount -o rw,remount /" to remount the root fs as RW before you can edit /etc/fstab. Any method which gets you your system's root fs mounted RW will work.
To find the UUID or LABEL for a filesystem, run "blkid". It will produce output like this:
# blkid
/dev/sde1: LABEL="i_boot" UUID="69b22c56-2f10-45e8-ad0e-46a7c7dd1b43" TYPE="ext4" PARTUUID="1dbd3d85-01"
/dev/sde2: LABEL="i_swap" UUID="a765866d-3444-48a1-a598-b8875d508c7d" TYPE="swap" PARTUUID="1dbd3d85-02"
/dev/sde3: LABEL="i_root" UUID="198c2087-85bb-439c-9d97-012a87b95f0c" TYPE="ext4" PARTUUID="1dbd3d85-03"
If blkid isn't available, try 'lsblk -f'. Both blkid and lsblk will be on a system rescue disk, but may not be available from an initrd shell. If udev has already run, you can find symlinks linking the UUID to the device name in /dev/disk/by-uuid.
NOTE: UUIDs will *always* exist for a filesystem, they are created automatically when the fs is created. Labels will only exist if you've created them (the exact method varies according to the filesystem - e.g. for ext4, by using the "-L" option when you create a fs with mkfs.ext4, or by using "tune2fs" any time after the fs has been created).
Using the above as an example, if your fstab wanted to mount /dev/sde3 as /, change /dev/sde3 to UUID=198c2087-85bb-439c-9d97-012a87b95f0c - e.g.
UUID=198c2087-85bb-439c-9d97-012a87b95f0c / ext4 defaults,relatime,nodiratime 0 1
alternatively, if you've created labels for the filesystems, you could use something like:
LABEL=i_root / ext4 defaults,relatime,nodiratime 0 1
Do this for **ALL** filesystems and swap devices listed in /etc/fstab.
Save the edited fstab, run "sync", and then unmount the filesystem. You should then be able to boot into your system.
craig
-- craig sanders <cas@taz.net.au>
Hi Craig,
I have been on this list since around 2004 but mainly as a user of Linux, not a sys admin of Linux.
My understanding is limited because I have found little which needed solving, until now.
I don't understand why /etc/fstab contains so little info - is this an Ubuntu feature?
As for editing the fstab to include the UUIDs of the disks, having read the info on UUIDs, the link for which you provided, I believe that these are meant to be created in my machine. If so, at what stage and with what process?
I was not particularly convinced of the gparted outcome. I have 14Gb of RAW files on my drive, and 380Gb of photo image files in the cloud. So really I need to back up around 500Gb of data to an external HDD and then I could do a re-install of everything.
Also this entry in dmesg has me wondering if I might get better performance if I changed the setting in the BIOS:
[   63.344355] EDAC amd64: Node 0: DRAM ECC disabled.
[   63.344357] EDAC amd64: ECC disabled in the BIOS or no ECC capability, module will not load.
               Either enable ECC checking or force module loading by setting 'ecc_enable_override'.
               (Note that use of the override may cause unknown side effects.)
At the moment the machine is very slow to get up to speed, currently running 8GB RAM.
Thanks
Andrew
Hi Andrew.

The ECC warnings just mean that either your motherboard doesn't support ECC error correcting RAM, or that you don't have ECC RAM installed. Typically you will only find ECC support on server motherboards, not consumer level motherboards.

With Systemd, lots of things get started up in parallel, so you can find yourself at the desktop with a number of background services and programs still going through the motions of startup. Once they have done so, you may perceive a performance improvement, as things are now cached in memory, or have been swapped out of the way having done their tasks.

Now back to the drives issue.

Think of /etc/fstab as a list of filesystems that are mounted every time you fire up your system. When you connect a drive via USB adapter or the like, the drive will be dynamically mounted, but not be mentioned in /etc/fstab. As you have not yet formatted the new drives, there are no partitions to add to /etc/fstab yet. So at this stage, all you have in /etc/fstab are the drives you initially partitioned when you installed Ubuntu. By contrast, /etc/mtab holds the list of currently mounted devices.

As Craig already pointed out, the command blkid will produce a list giving the UUID, LABEL (if it exists), partition type etc.

Here's the blkid output of a Kubuntu machine I have:

/dev/sda1: UUID="cb55d4b4-1c43-443d-ac17-612869e6350a" TYPE="ext4" PARTUUID="50e62e81-dd61-4808-916c-66d794f5b5c2"
/dev/sda2: UUID="d1da8a46-5a43-4e70-bb36-dacb87afed41" TYPE="ext4" PARTUUID="41c57251-30be-4afc-bdec-89e2001fc026"
/dev/sda3: UUID="efda79a0-4991-4531-913b-75715aecb98c" TYPE="swap" PARTUUID="4a235714-2ef9-4b13-9eb2-b7d58bac1613"
/dev/sdb1: UUID="70D2-7F29" TYPE="vfat" PARTUUID="a6e905c0-7d01-4ef1-84c6-d0283475ab09"
/dev/sdb2: UUID="292fad11-cea5-40c3-ae75-b69f06f6b089" TYPE="ext4" PARTUUID="27bdf27e-b0c6-4259-bb7d-55c8a28fb886"
/dev/sdb3: UUID="dddeae68-dfdd-4e5f-9498-f4113beb43e4" TYPE="ext4" PARTUUID="909a3697-32d9-4921-b6b6-c8a7c6831abc"

And here's the fstab file contents:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sdb3 during installation
UUID=dddeae68-dfdd-4e5f-9498-f4113beb43e4  /          ext4  errors=remount-ro  0  1
# /boot was on /dev/sdb2 during installation
UUID=292fad11-cea5-40c3-ae75-b69f06f6b089  /boot      ext4  defaults           0  2
# /boot/efi was on /dev/sdb1 during installation
UUID=70D2-7F29                             /boot/efi  vfat  umask=0077         0  1
# /home was on /dev/sda2 during installation
UUID=d1da8a46-5a43-4e70-bb36-dacb87afed41  /home      ext4  defaults           0  2
# /var was on /dev/sda1 during installation
UUID=cb55d4b4-1c43-443d-ac17-612869e6350a  /var       ext4  defaults           0  2
# swap was on /dev/sda3 during installation
UUID=efda79a0-4991-4531-913b-75715aecb98c  none       swap  sw                 0  0

I've highlighted the matching UUID in /etc/fstab and blkid outputs. (As the PC in question uses UEFI there is a small vfat partition reserved for the UEFI data.)

The device entries /dev/sda, /dev/sdb etc are set in the order that the kernel discovers the devices, either from the motherboard BIOS discovery order, or as a result of the order various kernel modules are loaded. By using the UUID device identifiers instead, the UUID data on the device partition table for each device is what is important, not the SATA slot into which they are plugged.
So using the UUID form, I could happily remove the drives from one PC, plug them into an entirely different PC in a random order, and /etc/fstab will still match the correct drives and partitions with the correct mount points.

Using the /dev/sda format comes with no such guarantee.

/dev/sda is the first disk found.
/dev/sda1 is the first partition of the first disk found. (I want this one mounted as /)
/dev/sdb is the second disk found.
/dev/sdb1 is the first partition of the second disk found. (And this one mounted as /home)
And the list goes on.

But swap the SATA leads and suddenly you will have:

/dev/sda is the first disk found.
/dev/sda1 is the first partition of the first disk found. (The /home partition is here.)
/dev/sdb is the second disk found.
/dev/sdb1 is the first partition of the second disk found. (The / partition is here.)

Now /etc/fstab, being on what is now /dev/sdb1, will not be found on /dev/sda1 where it is expected to be. Interesting times will ensue.

Using the UUID format, the boot loader looks for the device containing the UUID for the root partition and mounts it as /. Then it does the same process for the /home partition.

When creating a partition using Parted or other utility, you will have the option to set a LABEL as well. The LABEL can be used in place of a UUID, but you can't guarantee that fitting a second-hand drive with a pre-existing partition table will not find you looking at two drives with identical LABEL entries. UUID strings are typically system generated, and name clashes, while possible, are fairly rare. That's why the UUID method is the preferred option these days.

Using any of the graphical front ends to parted will usually allow you to define the mount point you wish to use, and to mark the drive to mount automatically at boot. This has the side effect of plugging the information into /etc/fstab on your behalf.

Hope this helps.

Regards,
Morrie.
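(A minimal sketch of setting a filesystem LABEL from the command line, for the ext4 case; the device name and label text are only examples, and other filesystems have their own equivalents.)

    sudo mkfs.ext4 -L photos /dev/sdb1   # set the label when creating the filesystem
    sudo e2label /dev/sdb1 photos        # or add/change it later on an existing ext4 filesystem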

On Wed, Feb 20, 2019 at 08:18:59PM +1100, Morrie Wyatt via luv-main wrote:
The ECC warnings just mean that either your motherboard doesn't support ECC error correcting RAM, or that you don't have ECC RAM installed.
AFAIK, you see it when the motherboard supports ECC RAM but you only have non-ECC RAM installed - the kernel doesn't even try to load the ECC module unless it detects that the hardware is capable of ECC. I see this warning all the time on my machines (all with AMD CPUs - currently a Phenom II 1090T, an FX-8150 and an FX-8320, and a Threadripper 1950x). As you say, it's not something to worry about unless, of course, you KNOW you paid extra for ECC RAM and it SHOULD be detected :) ECC RAM typically costs at least 30% more than non-ECC RAM and it's typically not available in stock in most whitebox computer shops, it's a special request you have to go out of your way to ask for or find - so, unless you've re-purposed an old server machine, it's not likely to be something that someone has and doesn't know about.
Typically you will only find ECC support on server motherboards, not comsumer level motherboards.
Most AMD motherboards supporting Phenom II, FX or newer CPUs support both ECC and non-ECC RAM. i.e. since at least 2008 or so. Intel motherboards and CPUs typically don't support ECC unless you've bought a "server" motherboard and CPU. Intel likes to engage in artificial market segmentation to prevent customers from using cheaper CPUs and motherboards for what they consider to be high-end server tasks. because near-monopoly allows them to get away with shit like that. craig -- craig sanders <cas@taz.net.au>

On Wed, Feb 20, 2019 at 09:33:21PM +1100, Andrew Greig wrote:
I have peace of mind about the ECC or not issue. I have a machine which boots slowly compared with even ten years ago. One needs to boot it up and then log in, go and make a cup of coffee, have a chat with a friend over the phone, and then it may be ready to perform.
What are you running on that machine? and how much RAM does it have? And does the motherboard have any RAM slots free? if so, upgrading RAM is still the single best and cheapest way to improve performance on most machines. IIRC your machine is fairly old, so it takes DDR-3 RAM rather than the newer DDR-4. A 4GB DDR-3 stick is about $36 these days. An 8GB DDR-3 stick is about $65. You should upgrade RAM in pairs, so either 2x4GB or 2x8GB.

Are you running systemd? if so, have you tried running 'systemd-analyze blame' to see where the boot delays are occurring?

Are you running something that scans the entire drive on every boot? something like the ancient gnome beagle or kde's nepomuk or baloo?
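(A minimal sketch of the boot-time check mentioned above; these commands are part of systemd on Ubuntu 18.04.)

    systemd-analyze                  # total time spent in firmware, loader, kernel and userspace
    systemd-analyze blame            # units ordered by how long each took to start
    systemd-analyze critical-chain   # the chain of units that delayed boot the most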
but it still takes a minute to load the first web page over a high speed cable connection.
Are you running firefox or chromium? both of them are RAM hogs, but chromium is much worse - it uses several times more RAM than firefox to display the same or fewer number of tabs.

does your internet connection get started automatically on boot, or only on demand when something (like your web browser) tries to connect to somewhere on the internet?

are you running a local DNS cache? or a web proxy cache? A local DNS caching resolver is **definitely** worth having. A web proxy may be worth having if you visit the same sites repeatedly or if there are any other computers on the network visiting the same sites.
20 seconds to open the file manager, about the same to open Thunderbird. Boot times are becoming a bit like Microsoft's BSOD used to be, an unfortunate fact of life.
That sounds like a combination of insufficient RAM and slow swapping.
I am still unsure how to use gparted to get the disks recognised by the system. I can hot plug them and the system will not crash, but if I try to boot with them connected it will fail to boot.
Try it again without the "quiet splash" options in the grub boot entry.

Ubuntu adds these annoying options to hide the nasty horrible text that shows what is happening when the machine boots and replace it with a pretty but completely useless and uninformative graphical logo. Yay.

Most of the time, you don't need to see the kernel boot up messages...but when you DO need them, there is no substitute for them. IMO, it's criminal negligence to hide them away as if they're some dirty little secret rather than vital diagnostic information. Without this information, it's very hard to figure out what the problem is.

Anyway, ignoring my rant, instead of hitting enter or waiting for grub to time out, hit "e" to edit the grub entry. look for the line with "quiet splash" on it and remove those two options. Hit F10 or Ctrl-X to boot. This change is not permanent, it only affects the current boot.

Alternatively, choosing the "recovery mode" option from the grub menu may give you the same result. It should also give you a password prompt to get a root shell which you can use to investigate and fix the problem (you will need to run "mount -o remount,rw /" to be able to edit the root fs).

Another alternative:

0. make a backup copy of your grub default file. e.g.:

   sudo cp -a /etc/default/grub /etc/default/grub.20190220

1. sudo vi /etc/default/grub (or use nano or whatever your favourite editor is)

2. remove "quiet" and "splash" from wherever they occur (either or both of GRUB_CMDLINE_LINUX_DEFAULT and GRUB_CMDLINE_LINUX)

3. Uncomment the "#GRUB_TERMINAL=console" line by removing the # at the start.

4. save and exit

5. sudo update-grub

This will get rid of the "quiet splash" options permanently.
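(For illustration, roughly what the relevant lines of /etc/default/grub look like after those edits on a stock Ubuntu 18.04 install; your file may differ slightly.)

    GRUB_CMDLINE_LINUX_DEFAULT=""    # was "quiet splash"
    GRUB_CMDLINE_LINUX=""
    GRUB_TERMINAL=console            # uncommented so grub itself uses the text console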
My /etc/fstab file has little information to copy, which is why I feel that a new build may be the best way forward for me.
If you do that, then IMO you should seriously consider the following:

1. Upgrade the RAM in your machine to the maximum it will take. 16GB or more. In fact, you should do this anyway even if you don't rebuild the system.

2. Buying a smallish (128 to 256 GB) SSD for the boot drive and swap space. Optionally buy a second identical one so you can have RAID-1. Use the two 3TB drives in RAID-1 (with mdadm or btrfs) for bulk storage (the old 1TB drive is ancient and should probably be retired. or reformatted and used only for temporary scratch space after you've copied your old files from it)

If you're going to upgrade your RAM, it may also be worth upgrading the motherboard and CPU to something that can take DDR-4 RAM (a 16GB kit of 2x8GB DDR-4 starts from around $160, and because DDR-4 is readily available in much larger sizes than DDR-3, it is easily upgraded all the way to 64GB or more). A new CPU will be faster than what you currently have and will have lots of potential for future upgrades. e.g. a Ryzen 5 2600X (~ $330) or Ryzen 7 2700X (~ $515) CPU, and a motherboard to match (X370 motherboards start from around $130, X470 from around $200).

Doing this would cost at least $500 or more on top of the new/extra RAM - but buying DDR-3 RAM is kind of throwing money away on obsolete technology that is on the verge of disappearing from the market entirely, while DDR-4 will still be in active use for at least another 5 or 10 years. (the cost of replacing the DDR-3 RAM in most of my machines with DDR-4 is the main reason I haven't upgraded all my Phenom and FX CPUs to Ryzen....if i had to upgrade the RAM anyway, I'd upgrade them to Ryzen at the same time).

Modern motherboards also have NVME slots for extremely fast SSDs. SATA SSDs max out at around 550 MB/s, limited by the SATA bus. NVME SSDs run at 4x PCI-e speed and can get up to around 3200 MB/s. They cost about the same as SATA SSDs.
I have 65Gb of space left on my 1TB drive and with several photo shoots on the books for the next two weeks it will fill and grind to a halt, so I need to apply myself to get the outcome.
Most filesystems slow down when they get over about 90% full, but what you're reporting seems excessive even for that. It's certainly contributing to the performance problems.

Maybe try moving a couple of hundred GB to another drive. One of your new 3TB drives will do. Or use 1 or 2 USB flash drives (128GB USB flash drives are under $50 these days. 256GB are under $90).

once you've freed up some space, you'll probably want to defrag the filesystem. here's some useful info about defragging ext4:

https://askubuntu.com/questions/221079/how-to-defrag-an-ext4-filesystem

defragging ext4 usually isn't necessary, but once it gets that close to full, it WILL be horribly fragmented.

craig

--
craig sanders <cas@taz.net.au>
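(A minimal sketch of checking and fixing fragmentation on an ext4 filesystem with e4defrag from e2fsprogs; the path is only an example, and this assumes you've already freed up some space.)

    sudo e4defrag -c /home    # report the current fragmentation score without changing anything
    sudo e4defrag /home       # defragment the files under /home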

Looking at the disks in gparted I have:

/dev/sda1  File system: lvm2 pv  Label:  UUID: sI0LJX-JSme-W2Yt-rFiZ-bQcV-lwFN-tSetH5  Volume Group: ubuntu-vg  Members: /dev/sda1 /dev/sdb1  Partition: /dev/sda1  Name:  Flags: boot/lvm

/dev/sdb1  File system: lvm2 pv  Label:  UUID: 9HV3H6-JIYu-IdaS-2CGr-lkZQ-9xcB-RVu9Ks  Status: Active  Volume group: /dev/sda1 /dev/sdb1  Logical Volumes: root swap-1  Partition Path: /dev/sdb1  Name:  Flags: lvm

/dev/sdc1  File system: lvm2 pv  Label:  UUID: mqbYsB-xpm2-7c11-RLN5-q47a-A0bB-wcefad  Status: Not active (not a member of any volume group)  Volume Group Members:  Logical Volumes:  Partition Path: /dev/sdc1  Name:  Flags: lvm

My current fstab is this:

andrew@andrew-desktop:~$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system>                 <mount point>  <type>  <options>          <dump>  <pass>
/dev/mapper/ubuntu--vg-root     /              ext4    errors=remount-ro  0       1
/dev/mapper/ubuntu--vg-swap_1   none           swap    sw                 0       0
andrew@andrew-desktop:~$

So /dev/sdb1 is part of a lvm group but /dev/sdc1 is not.

What command do I use to get these added to the fstab? I haven't consciously formatted either of the two new drives, is there a step I have missed?

I haven't got the dollars for a M/B upgrade so I will purchase some more DDR3 RAM to get me to the limit of the motherboard, and I will purchase an SSD as recommended. It would be nice to get these disks running so that I can dump my data on to them and then add the SSD and do a fresh install using btrfs, which, I believe, will give me an effective RAID 1 config.

Many thanks
Andrew

On Thu, Feb 21, 2019 at 11:14:13PM +1100, Andrew Greig wrote:
Looking at the disks in gparted I have:
/dev/sda1 File system lvn2 pv Label UUID sI0LJX-JSme-W2Yt-rFiZ-bQcV-lwFN-tSetH5 Volume Group ubuntu-vg Members /dev/sda1 /dev/sdb1 Partition /dev/sda1 Name Flags boot/lvm
/dev/sdb1 File system lvm2 pv Label UUID 9HV3H6-JIYu-IdaS-2CGr-lkZQ-9xcB-RVu9Ks Status Active Volume group /dev/sda1 /dev/sdb1 Logical Volumes root swap-1 Partition Path /dev/sdb1 Name Flags lvm
/dev/sdc1 File system lvm2 pv Label UUID mqbYsB-xpm2-7c11-RLN5-q47a-A0bB-wcefad Status Not active(not a member of any volume group)Volume Group Members Logical Volumes Partition Path /dev/sdc1 Name Flags lvm
It looks like you've added one of the two new 3TB drives to the same volume group as your root fs and swap partition. The other 3TB drive has been turned into an unrelated volume group. Why?

Which drive is the old 1TB drive? and which are the new 3TB drives?

My *guess* is that sdb1 is the old 1TB drive (because that's the only one where the root and swap-1 LVs are mentioned). If that's the case, then I'll also guess that the 1TB drive is plugged into the second SATA port....so when you plugged the new drives in, you plugged one of them into the first SATA port. Try swapping the cables for those two drives around so that the 1TB drive is in the first port.

try running 'fdisk -l'. That will show each disk and all partitions on it, including the brand, model, and size of the drive. knowing the logical identifiers is only half the story, you also need to know which physical drive corresponds to those identifiers.

Once you have this information, I strongly recommend writing it down or printing it so you always have it available when planning what to do.
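(A minimal sketch of matching device names to physical drives, to run alongside the 'fdisk -l' suggested above; the column list is just one useful combination that lsblk on Ubuntu 18.04 supports.)

    sudo lsblk -o NAME,SIZE,MODEL,SERIAL,MOUNTPOINT   # one line per disk/partition, with drive model and serial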
My current fstab is this

andrew@andrew-desktop:~$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system>                 <mount point>  <type>  <options>          <dump>  <pass>
/dev/mapper/ubuntu--vg-root     /              ext4    errors=remount-ro  0       1
/dev/mapper/ubuntu--vg-swap_1   none           swap    sw                 0       0
andrew@andrew-desktop:~$
So /dev/sdb1 is part of a lvm group but /dev/sdc1 is not
What command do I use to get these added to the fstab? I haven't consciously formatted either of the two new drives,is there a step I have missed?
dunno, there isn't enough info to safely give any direct instructions. the best I can give is generic advice that you'll have to adapt to your hardware and circumstances.

But the first thing you need to do is undo the existing mess - why did you add one of the new drives to the existing volume group (VG)? and, since you added the new drive, why didn't you just create a new logical volume (LV), format it, and start using it?

You'll need to check that it isn't being actively used in the VG, and then remove that drive from the VG before you do anything else.
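(A minimal sketch of checking and then removing an accidentally-added drive from the volume group, assuming it is /dev/sdb1 and that no logical volume data actually lives on it; do NOT run vgreduce/pvremove on the PV that holds the root and swap LVs, and if 'pvs' shows used extents on it, run 'pvmove' first.)

    sudo pvs -o pv_name,vg_name,pv_size,pv_used   # confirm which PV is in which VG and whether it holds data
    sudo pvmove /dev/sdb1                         # only needed if the PV has used extents; migrates them to other PVs
    sudo vgreduce ubuntu-vg /dev/sdb1             # detach the PV from the volume group
    sudo pvremove /dev/sdb1                       # wipe the LVM metadata so the partition is blank again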
I haven't got the dollars for a M/B upgrade so I will purchase some more DDR3 Ram to get me to the limit of the motherboard, and I will purchase a SDD as recommended. It wouldf be nice to get thses disks running so that I can dump my data on to them and then add the SDD and do a fresh install using btrfs, which, I believe will give me an effective RAID 1 config.
The SSD or SSDs should be used for grub, the root fs /, the EFI partition (if any), /boot (if it's a separate partition and not just part of /), and swap space. the 3TB drives are for your home directory and data.

You don't want to mix the SSD(s) and the hard drives into the same btrfs array. You can, however, have two btrfs arrays: one for the boot+OS SSD(s), the other for your bulk data (the 3TB drives). If all your data is going to be under your home directory then mount the latter as /home. If you're going to use it for other stuff too, mount it as /data or something and symlink into it (e.g. while booted in recovery mode, or logged in as root with nothing running as your non-root user: "mv /home /data/; ln -sf /data/home/ /")

BTW, if you only get one SSD but plan to get another one later, btrfs allows you to convert it to RAID-1 at any time. So does ZFS, you can always add a mirror to a single drive. To do the same with mdadm, you have to plan ahead and create an mdadm degraded raid-1 array (i.e. with a missing drive) when you partition and format the drive.

Probably the easiest way to do this is to remove ALL drives from the system, install the SSD(s) into the first (and second) SATA ports on the motherboard, and the two 3TB drives into the third and fourth SATA ports. Examine the motherboard carefully and check the m/b's manual when choosing which port to plug each drive into - the first port will probably be labelled SATA_0 or similar.

boot up with the installer USB or DVD and tell it to format the SSD(s) as the root fs with btrfs, and the two 3TB drives with btrfs (to be mounted as /home or /data as mentioned above). MAKE SURE YOU DELETE ANY EXISTING PARTITION TABLES AND CREATE NEW EMPTY PARTITION TABLES ON ALL DRIVES. The partition tables on the SSDs should be identical with each other (you'll need a small FAT-32 partition for EFI, a swap partition - 4GB should be enough, and the remainder of the disk as a partition for the root fs); and the partition tables on the 3TB drives should be identical with each other (you probably only need one big partition on these).

When the system is installed and boots up successfully, power down, plug in the old 1TB drive, reboot, mount it somewhere convenient (e.g. mkdir /old, and mount the old root fs as /old), and then copy your data from it.

If you're going to copy your entire home directory (i.e. to keep your old config files as well as your data) from the old drive to the new 3TB btrfs array then you should do it while logged in as root with no processes running as your non-root user. IIRC, the ubuntu installer doesn't normally prompt you to create a password for root so you'll need to do that yourself (e.g. by running "su" or "sudo -i" and then running "passwd root"). Don't log in as root with X, switch to the virtual terminal with Ctrl-Alt-F1 and login on the text console.

Once you've copied the data from it, you should probably retire that old 1TB drive. Unplug it and put it away somewhere safe. Write the date on it. It's effectively a backup of your data as at that date.

Speaking of backups, you should backup your data regularly. Get a USB drive box and another drive at least 3TB in size (e.g. a 4TB drive with a 1TB partition and a 3TB partition will allow you to have multiple backups of / and at least one backup of /home). Use btrfs snapshots and 'btrfs send' to backup to the USB drive (see the sketch at the end of this message). IMO you're better off getting a generic USB drive box that allows you to easily swap the drive in it than to get an "external" drive.
That will allow you to have multiple backup drives, so you can store one off site (also, "external drive" products sold as a single self-contained unit tend to have proprietary garbage firmware that lies to the OS and holds your data to ransom - e.g. if the external drive box dies you can't just pull out the drive and put it into another drive box and use it).

craig

--
craig sanders <cas@taz.net.au>
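(A minimal sketch of the snapshot-and-send backup described above, assuming the data array is mounted at /data and the USB backup drive's btrfs filesystem at /mnt/backup; the names and paths are only examples.)

    # take a read-only snapshot of the data filesystem
    sudo btrfs subvolume snapshot -r /data /data/.snap-20190222
    # copy that snapshot to the (btrfs-formatted) USB backup drive
    sudo btrfs send /data/.snap-20190222 | sudo btrfs receive /mnt/backup/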

Hi Craig,

I am unsure how much to clip as your response is comprehensive. But to start, /dev/sda1 is my 1 TB drive and it is showing as having boot and lvm flags; I do not know how root and swap were assigned to sdb1.

The extra drives are 2 TB drives.

The problem with /dev/sdc1 not being part of the group is beyond me. I must have done something in gparted to get sdb1 recognised as part of the group, and not been able to do the same with /dev/sdc1.

The fact that /dev/sdb1 is showing as active suggests it may have been formatted, but I do not know how.

On 22/2/19 12:35 pm, Craig Sanders via luv-main wrote:
On Thu, Feb 21, 2019 at 11:14:13PM +1100, Andrew Greig wrote:
Looking at the disks in gparted I have:
/dev/sda1 File system lvn2 pv Label UUID sI0LJX-JSme-W2Yt-rFiZ-bQcV-lwFN-tSetH5 Volume Group ubuntu-vg Members /dev/sda1 /dev/sdb1 Partition /dev/sda1 Name Flags boot/lvm
/dev/sdb1 File system lvm2 pv Label UUID 9HV3H6-JIYu-IdaS-2CGr-lkZQ-9xcB-RVu9Ks Status Active Volume group /dev/sda1 /dev/sdb1 Logical Volumes root swap-1 Partition Path /dev/sdb1 Name Flags lvm
/dev/sdc1 File system lvm2 pv Label UUID mqbYsB-xpm2-7c11-RLN5-q47a-A0bB-wcefad Status Not active(not a member of any volume group)Volume Group Members Logical Volumes Partition Path /dev/sdc1 Name Flags lvm
It looks like you've added one of the two new 3TB drives to the same volume group as your root fs and swap partition. The other 3TB drive has been turned into an unrelated volume group. Why?

No idea.
Which drive is the old 1TB drive? and which are the new 3TB drives?
/dev/sda1 is the old drive and /dev/sdb1 and /dev/sdc1 are the new drives
My *guess* is that sdb1 is the old 1TB drive (because that's the only one where the root and swap-1 LVs are mentioned). If that's the case,
then I'll
also guess that the 1TB drive is plugged into the second SATA port....so when you plugged the new drives in, you plugged one of them into the first SATA port. Try swapping the cables for those two drives around so that the 1TB drive is in the first port.
try running 'fdisk -l'. That will show each disk and all partitions on it, including the brand, model, and size of the drive. knowing the logical identifiers is only half the story, you also need to know which physical drive corresponds to those identifiers.

andrew@andrew-desktop:~$ sudo fdisk -l

Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1  *     2048 1953523711 1953521664 931.5G 8e Linux LVM
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xc3e8f29f

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdb1        2048 3907028991 3907026944  1.8T 8e Linux LVM

Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x7325946b

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdc1        2048 3907028991 3907026944  1.8T 8e Linux LVM
Once you have this information, i strongly recommend writing it down or printing it so you always have it available when planning what to do.
My current fstab is this

andrew@andrew-desktop:~$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system>                 <mount point>  <type>  <options>          <dump>  <pass>
/dev/mapper/ubuntu--vg-root     /              ext4    errors=remount-ro  0       1
/dev/mapper/ubuntu--vg-swap_1   none           swap    sw                 0       0
andrew@andrew-desktop:~$
So /dev/sdb1 is part of a lvm group but /dev/sdc1 is not
What command do I use to get these added to the fstab? I haven't consciously formatted either of the two new drives,is there a step I have missed?
dunno, there isn't enough info to safely give any direct instructions. the best I can give is generic advice that you'll have to adapt to your hardware and circumstances.
But the first thing you need to do is undo the existing mess - why did you add one of the new drives to the existing volume group (VG)? and, since you added the new drive, why didn't you just create a new logical volume (LV), format it, and start using it?

This is my problem: my understanding of lvm is minimal - it allows the partitions to grow or shrink to best use the disk space. That is it. lvm was set up as a default in the Ubuntu install.
You'll need to check that it isn't being actively used in the VG, and
then
remove that drive from the VG before you do anything else.
I am heading down to MSY soon to get a new SSD. They do not have a 500GB Crucial MX500 (out of stock); they do have a Crucial 1TB SSD however.

I used to use a hotswap box in the days of IDE ribbon cables - is that the sort of thing you are suggesting I use for the old 1TB SATA drive?

Thanks
Andrew

Hi All,

I have purchased a new 1TB SSD and I have two unused SATA 2TB drives, and currently 8GB RAM (max capacity 32GB DDR3 1866); I will settle for 24GB soon. I have two optical drives; I will settle for one.

MB = ASRock 890 GM Pro3, 5 SATA slots. Currently the optical drive is in slot one. So I will order my installation for the SSD in slot one and the optical in slot 5; slots 2 and 3 are for the 2 x SATA 2TB drives, and slot 4 will be for my old and now full SATA 1TB drive.

I will back up my Thunderbird profile(s) to a USB stick for easy restore when the new system is running.

Question - Should I choose Ubuntu 18.04 LTS or install 18.10 which will need an upgrade at the end of July?

Question: To set up my SSD for root and the other two drives as RAID 1 mounted as /home, is it simply a matter of choosing btrfs? I haven't built a Ubuntu Server before. Any "gotcha's"?

Andrew Greig

and thanks very much to Craig Sanders for his consistent support. I think the "scorched earth" approach may fix all of my earlier confusion. If I get stuck in the install I will write from my notebook.

Hello Andrew, On 2/22/19, Andrew Greig via luv-main <luv-main@luv.asn.au> wrote:
Hi All,
I have purchased a new 1Tb SSD and I have two unused SATA 2Tb drives, and currently 8Gb RAM (max capacity 32Gb DDR3 1866) I will settle for 24Gb soon.
I have two optical drives, I will settle for one.
MB = ASRock 890 GM Pro3 5 sata slots
Currently the optical drive is in slot one
So I will order my installation for the SSD in slot one and the optical in slot 5 slots 2 and 3 are for the 2 x SATA 2Tb drives, slot 4 will be for my old and now full Sata 1 Tb drive.l
I will back-up my Thunderbird profile(s) to a USB stick for easy restore when the new system is running.
Copy/back up all your home directory structure to something big enough, preferably with more than one copy. Consider a USB hard drive. USB memory sticks are good, but my experience is that they can fail unexpectedly; they make reasonable short term storage for non-critical material, and some last better than others. The other thing is to burn copies to optical media - carefully done, choosing the right disks, that can be archival storage.
Question - Should I choose Ubuntu 18.04 LTS or install 18.10 which will need an upgrade at the end of July?
Question: To set up my SSD for root and the other two drives as RAID 1 mounted as /home, is it simply a matter of choosing btrfs? I haven't built a Ubuntu Server before.
I have used Ubuntu, quite a while back. These days I use Debian, with an interest in the derivatives that move away from systemd. The Debian installer has made big strides of progress and is very effective. Whichever way you go, before you restore any of your backed-up data, look at how the install went and be prepared to redo it as a learning experience if it makes a mess. Minor problems are a different challenge: learning to rectify them helps build other skills for when you are up and running.
Any "gotcha's"?
From your discussion, learn and understand what options the BIOS/UEFI offers, and the implications for which drives it will look at for booting. Understand that any drive is just a bit bucket. It needs partitions as a logical data structure for dividing it up, although that requirement relaxes with btrfs. Then there are the higher-level data structures within the partition that allocate particular addresses to particular files. You do not need a detailed understanding such that you could read the raw data, but enough of the logical framing that the bits fit together in your mind.
A basic Debian install is very flexible, it can do server and/or desktop, depending on what you add on top. I am certain Craig and Russell can comment on exactly the sane choices to make, with small detail variations of preference between them. The differences can look bigger than they are, do some research and read up about what the various filesystems offer. Craig has made some very pertinent remarks about the implications.
Andrew Greig and thanks very much to Craig Sanders for his consistent support. I think the "scorched earth" approach may fix all of my earlier confusion. If I get stuck in the install I will write from my notebook.
Regards, Mark Trickett

Question - Should I choose Ubuntu 18.04 LTS or install 18.10 which will need an upgrade at the end of July?
Question: To set up my SSD for root and the other two drives as RAID 1 mounted as /home, is it simply a matter of choosing btrfs? I haven't built a Ubuntu Server before.
Hi Andrew, I have been using Kubuntu (not Ubuntu) 18.04 LTS every day, almost since it was released, and haven't had any issues. Running it on bare metal and inside VirtualBox on the same server. Been updating periodically and that has gone smoothly as well. Previously used 12.04LTS and 14.04LTS for many years. I always go for the LTS versions - fewer issues for more stability, and everything I need is available. HTH Andrew

If you have a RAID-1 of 2TB disks a single 1TB disk doesn't provide much value. I suggest using the port for a second SSD instead and have a RAID-1 on SSD for root and /home and 2*2TB RAID-1 for everything else. If a 2TB RAID-1 isn't enough for your big files then consider getting a couple of 6TB disks, they are cheap nowadays. -- Sent from my Huawei Mate 9 with K-9 Mail.

Hi Russell,
The 1Tb is an SSD for speed and I have another 2 x 2Tb drives for my data. After 3 years of photography and 13,000 images in raw, proofs and full size jpgs I have around 500Gb of data. This should meet my needs for 2 years at least, at which time I will build a bigger machine.
I am in the partitioner at present, manual chosen. I want root on the SSD. What I am presented with is:
LVM VG ubuntu-vg, LV root - 2.0 TB Linux device-mapper (linear)
so do I need to change root to home?
LVM VG ubuntu-vg, LV swap_1 - 1.0 GB Linux device-mapper (linear)
SCSI3 (0,0,0) (sda) - 1.0 TB ATA Samsung SSD 860
SCSI5 (0,0,0) (sdb) - 2.0 TB ATA ST2000DM006-2DM1
    #1 primary 2.0 TB K lvm
SCSI6 (0,0,0) (sdc) - 2.0 TB ATA ST2000DM006-2DM1
    #1 primary 2.0 TB K lvm
So how do I partition this so that root and boot are on the 1.0 TB SSD and so that /home is the RAID array of two disks of 2 TB each?
I am in Guided Partitioning at present; the next steps are Configure Software RAID, then Configure the Logical Volume Manager, then configure encrypted volumes, then configure iSCSI volumes.
I would appreciate some advice as I am in pretty deep; my eyes are above the water but I need to take a breath soon.
Gratefully
Andrew
On 22/2/19 7:04 pm, Russell Coker wrote:
If you have a RAID-1 of 2TB disks a single 1TB disk doesn't provide much value. I suggest using the port for a second SSD instead and have a RAID-1 on SSD for root and /home and 2*2TB RAID-1 for everything else.
If a 2TB RAID-1 isn't enough for your big files then consider getting a couple of 6TB disks, they are cheap nowadays.

On Fri, Feb 22, 2019 at 08:20:48PM +1100, Andrew Greig wrote:
The 1Tb is an SSD for speed and I have another 2 x 2Tb drives for my data. After 3 years of photography and 13,000 images in raw, proofs and full size jpgs I have around 500Gb of data. This should meet my needs for 2 years at least at which time I will build a bigger machine.
I am in the partitioner at present, manual chosen,
I want root on the SSD
LVM VG ubuntu-vg LV root - 2.0TB Linux device-mapper (linear) is what I am presented with
so do I need to change root to home?
LVM VG ubuntu-vg, LV swap_1 - 1.0 GB Linux device-mapper (linear)
You don't need LVM if you're using btrfs, it doesn't give you anything that btrfs doesn't - it'll just make your disk management more complicated. Delete the partition tables from all 3 drives and create them manually.
1. sda (1 TB SSD)
You'll need a partition for EFI (optional), a swap partition and a btrfs partition for the root fs. 4 or 8GB should be plenty for swap. the btrfs partition should be the remainder of the disk.
If your motherboard is old-style BIOS rather than UEFI, you don't need a FAT32 partition.
sda (if BIOS):
  4-8GB swap
  remainder for btrfs root fs
sda (if UEFI or if you think you might move this disk to a UEFI machine in future):
  512 MB EFI partition
  4-8GB swap
  remainder for btrfs root fs
Setting this up with btrfs now gives you the option of easily converting it to raid-1 later. just add an identical drive, partition it exactly the same, and tell btrfs to add the new partition to the existing one. btw, because the second drive has identical partitioning, you'll have another free partition the same size as your swap partition. you can use that for more swap, or format it and use it for /tmp or something. i'd just add it as more swap.
Using btrfs for the root fs also allows you to use btrfs snapshots, and btrfs send for backups.
2. both sdb and sdc (2 x 2TB HDD):
  1 big partition for btrfs /data.
the installer should ask you where you want to mount this (/data) when you set it up.
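If you ever end up doing this layout from a shell rather than from the installer, it might look roughly like the sketch below (the device names, the 8GB swap size and the use of parted are assumptions - double-check with lsblk before touching anything):

  # SSD: EFI (optional) + swap + btrfs root
  sudo parted -s /dev/sda mklabel gpt
  sudo parted -s /dev/sda mkpart ESP fat32 1MiB 513MiB
  sudo parted -s /dev/sda set 1 esp on
  sudo parted -s /dev/sda mkpart swap linux-swap 513MiB 8705MiB
  sudo parted -s /dev/sda mkpart root btrfs 8705MiB 100%
  sudo mkfs.fat -F32 /dev/sda1
  sudo mkswap /dev/sda2
  sudo mkfs.btrfs /dev/sda3

  # each 2TB HDD: one big btrfs partition
  sudo parted -s /dev/sdb mklabel gpt
  sudo parted -s /dev/sdb mkpart data btrfs 1MiB 100%
  sudo mkfs.btrfs /dev/sdb1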
So how do I partition this so that root and boot are on the 1.0TB SSD
You don't really need a separate partition for /boot, it works just fine as a subdirectory of /. Old-timers like me only do that out of habit from the days when it was useful to do so.
and so that /home is the RAID array of two disks of 2TB each?
I'd leave /home on the SSD - it's fast, and it's much bigger than you need for the OS. Having all your config files and browser cache and the data you're currently working with on the SSD will be a huge performance boost.
Use /home on your SSD as fast working space (editing your images and videos on the SSD will be MUCH faster than editing them on the HDD), and move the completed work to subdirectories under /data - i.e. use /data for long-term bulk storage.
So, as noted above, format the 2 x 2TB drives with btrfs and mount them as /data.
for convenience, you can make /data owned by your user and symlink it into your home directory (which will let you access it as /data and/or as /home/yourusername/data):
  sudo chown yourusername:yourgroupname /data
  sudo chmod 664 /data
  ln -s /data/ /home/yourusername/
When you restore your data from your old 1TB HDD, remember to copy it to subdirectories under /data, rather than under /home.
BTW, if there's any possibility that you might want to use some of the space on /data for something not directly related to or belonging to your user (e.g. if you have a second user on the machine, or want to use it for a squid cache or web site or to store VM images or whatever), then don't use the top level of /data directly. use a subdirectory with, e.g., the same name as your user name.
i.e. instead of the commands above, try something like this instead:
  sudo mkdir /data/yourusername
  sudo chown yourusername:yourgroupname /data/yourusername
  sudo chmod 664 /data/yourusername
  ln -s /data/yourusername /home/yourusername/data
I recommend doing this anyway even if you don't think you'll need it. It doesn't hurt to have it, and if you ever change your mind it's already set up to make it easy to use for other purposes.
I am in Guided Partitioning at present, next step is Configure Software RAID
Then Configure the Logical Volume Manager
Then configure encrypted volumes
Then configure iSCSI volumes
Ignore all that. You don't need LVM or iscsi, and I'm guessing you don't care about or want the complications of full disk encryption. Just set up 2 btrfs filesystems, one for the rootfs, the other for /home/ craig -- craig sanders <cas@taz.net.au>

Hi Craig,
Thanks for the advice. I cleared the partition tables and eventually worked out how to create new partitions and set the filesystems up. For me the partitioning tool in Ubuntu is a quantum leap behind the partition manager in Mandrake/Mandriva; it is a pity that that graphical partition manager was not used universally. I am grateful for the assistance because now I have a system running, but it is running from a tty. I ran the command startx to get a graphical screen up, the system suggested an install of xinit, and that is loading at present. Unfortunately I am stuck in tty1; I thought that the GUI was on tty7, but I have forgotten how to get there. I thought it used to be Ctrl+Alt+F7. I have a system now, but not usable by me at this stage.
Cheers
Andrew
On 22/2/19 11:38 pm, Craig Sanders via luv-main wrote:
On Fri, Feb 22, 2019 at 08:20:48PM +1100, Andrew Greig wrote:
The 1Tb is an SSD for speed and I have another 2 x 2Tb drives for my data. After 3 years of photography and 13,000 images in raw, proofs and full size jpgs I have around 500Gb of data. This should meet my needs for 2 years at least at which time I will build a bigger machine.
I am in the partitioner at present, manual chosen,
I want root on the SSD
LVM VG ubuntu-vg LV root - 2.0TB Linux device-mapper (linear) is what I am presented with
so do I need to change root to home?
LVM VG ubuntu-vg, LV swap_1 - 1.0 GB Linux device-mapper (linear)
You don't need LVM if you're using btrfs, it doesn't give you anything that btrfs doesn't - it'll just make your disk management more complicated. Delete the partition tables from all 3 drives and create them manually.
1. sda (1 TB SSD)
You'll need a partition for EFI (optional), a swap partition and a btrfs partition for the root fs. 4 or 8GB should be plenty for swap. the btrfs partition should be the remainder of the disk.
If your motherboard is old-style BIOS rather than UEFI, you don't need a FAT32 partition.
sda (if BIOS):
4-8GB swap
remainder for btrfs root fs
sda (if UEFI or if you think you might move this disk to a UEFI machine in future):
512 MB EFI partition
4-8GB swap
remainder for btrfs root fs
Setting this up with btrfs now gives you the option of easily converting it to raid-1 later. just add an identical drive, partition it exactly the same, and tell btrfs to add the new partition to the existing one. btw, because the second drive has identical partitioning, you'll have another free partition the same size as your swap partition. you can use that for more swap, or format it and use it for /tmp or something. i'd just add it as more swap.
Using btrfs for the root fs also allows you to use btrfs snapshots, and btrfs send for backups.
2. both sdb and sdc (2 x 2TB HDD):
1 big partition for btrfs /data.
the installer should ask you where you want to mount this (/data) when you set it up.
So how do I partition this so that root and boot are on the 1.0TB SSD
You don't really need a separate partition for /boot, it works just fine as a subdirectory of /. Old-timers like me only do that out of habit from the days when it was useful to do so.
and so that /home is the RAID array of two disks of 2TB each?
I'd leave /home on the SSD - it's fast, and it's much bigger than you need for the OS. Having all your config files and browser cache and the data you're currently working with on the SSD will be a huge performance boost.
Use /home on your SSD as fast working space (editing your images and videos on the SSD will be MUCH faster than editing them on the HDD), and move the completed work to subdirectories under /data - i.e. use /data for long-term bulk storage.
So, as noted above, format the 2 x 2TB drives with btrfs and mount them as /data.
for convenience, you can make /data owned by your user and symlink it into your home directory (which will let you access it as /data and/or as /home/yourusername/data):
sudo chown yourusername:yourgroupname /data
sudo chmod 664 /data
ln -s /data/ /home/yourusername/
When you restore your data from your old 1TB HDD, remember to copy it to subdirectories under /data, rather than under /home.
BTW, if there's any possibility that you might want to use some of the space on /data for something not directly related to or belonging to your user (e.g. if you have a second user on the machine, or want to use it for a squid cache or web site or to store VM images or whatever), then don't use the top level of /data directly. use a subdirectory with, e.g., the same name as your user name.
i.e. instead of the commands above, try something like this instead:
sudo mkdir /data/yourusername
sudo chown yourusername:yourgroupname /data/yourusername
sudo chmod 664 /data/yourusername
ln -s /data/yourusername /home/yourusername/data
I recommend doing this anyway even if you don't think you'll need it. It doesn't hurt to have it, and if you ever change your mind it's already set up to make it easy to use for other purposes.
I am in Guided Partitioning at present, next step is Configure Software RAID
Then Configure the Logical Volume Manager
Then configure encrypted volumes
Then configure iSCSI volumes
Ignore all that. You don't need LVM or iscsi, and I'm guessing you don't care about or want the complications of full disk encryption.
Just set up 2 btrfs filesystems, one for the rootfs, the other for /home/
craig
-- craig sanders <cas@taz.net.au>

On Sat, Feb 23, 2019 at 02:30:46PM +1100, Andrew Greig wrote:
Unfortunately I am stuck in tty1, I thought that the GUI was on tty7, but I have forgotten how to get there. I thought it used to be CTRL ALT F7
If you have a display manager (xdm, gdm, kdm, lightdm, etc) installed, it will start up automatically and give you a graphical login.
I have a system, now, but not usable by me at this stage.
Did you install gnome or kde (or xfce or whatever desktop environment you prefer)? And all the GUI apps you intend to use? This may be because you chose to do a "server" install. I have no idea what Ubuntu actually means by that, but I'd guess it doesn't include X or other GUI stuff because they're generally not needed on "servers". But it's not a big problem, nothing to worry about. You can always apt-get install whatever you need, whether you chose a "desktop" or a "server" install. It's the same OS, just with different programs installed by default. craig -- craig sanders <cas@taz.net.au>
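For example, a minimal way to turn a server install into a desktop (the package names are the usual Ubuntu metapackages; pick whichever desktop you prefer):

  sudo apt update
  sudo apt install ubuntu-desktop       # GNOME, the Ubuntu default
  # or: sudo apt install kubuntu-desktop / xubuntu-desktop for KDE / Xfce

Installing the metapackage pulls in a display manager as well, so a graphical login should appear on the next boot (or once the display manager service is started).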

Well, the good news is that I found a command on YouTube:
  $ sudo tasksel
I ran that with several choices and I am now sending this email from my GUI-based desktop. I copied the .Thunderbird hidden folder to my /home/andrew, and after 4.6Gb had transferred I started Thunderbird and all of my email addresses are exactly as they were yesterday. Overjoyed - the learning curve was steep, but worth it. Thank you very much for your assistance.
Now I need to plug in my old SATA drive and copy my data to one of my data drives. Small thing: when I was setting the partitions the system did not like /data on two separate drives, so for the moment one is /data0 and the other is /data1. If I load one of the drives with my data, as soon as RAID is set up will that data copy across to the other drive? And the next step is RAID.
Thanks again
Andrew
On 23/2/19 3:07 pm, Craig Sanders via luv-main wrote:
On Sat, Feb 23, 2019 at 02:30:46PM +1100, Andrew Greig wrote:
Unfortunately I am stuck in tty1, I thought that the GUI was on tty7, but I have forgotten how to get there. I thought it used to be CTRL ALT F7
If you have a display manager (xdm, gdm, kdm, lightdm, etc) installed, it will start up automatically and give you a graphical login.
I have a system, now, but not usable by me at this stage.
Did you install gnome or kde (or xfce or whatever desktop environment you prefer)? And all the GUI apps you intend to use?
This may be because you chose to do a "server" install. I have no idea what Ubuntu actually means by that, but I'd guess it doesn't include X or other GUI stuff because they're generally not needed on "servers".
But it's not a big problem, nothing to worry about. You can always apt-get install whatever you need, whether you chose a "desktop" or a "server" install. It's the same OS, just with different programs installed by default.
craig
-- craig sanders <cas@taz.net.au>

On Sat, Feb 23, 2019 at 03:42:57PM +1100, Andrew Greig wrote:
Now I need to plug in my old SATA drive and copy my data to one of my data drives.
Small thing, when I was setting the partitions the system did not like /data on two separate drives so for the moment one is /data0 and the othe is /data1. If I load one of the drives with my data, as soon as RAID is setup will that data copy across to the other drive?
And the next step is RAID
Well, kind of. Not if you're talking about using mdadm for RAID-1. btrfs does its own raid. and volume management. There's no need for mdadm or lvm or anything else. If you've used btrfs for those drives then what you need to do is:
1. unmount both of them
2. remount ONE of them (say, data0) as /data (and edit /etc/fstab so that it gets mounted as /data on every reboot. also delete the line in fstab that mounts data1).
3. destroy the partition table on the data1 drive, and recreate it (again, one big partition for the entire disk[1])
4. add that drive to the existing btrfs array on /data
e.g. *IF* /data1 was sdc1, you'd do something like:
  sudo btrfs device add -f /dev/sdc1 /data
  sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /data
The earlier you do this (i.e. the less data is already on it), the faster this conversion to raid1 will be. Nearly instant if there's little or no data. Much longer if there's a lot of data that needs to be synced to the other drive.
i.e. best to do it before copying the data from your old drive.
[1] technically, you don't need a partition, btrfs can use the entire disk. but IMO a partition table is useful for clearly identifying that a disk is in use and what it is being used for. It doesn't hurt in any way to have one and the space used by the partition table is trivial - at most, a sector for the partition table itself and another 2047 sectors[2] to ensure that the first sector of the first (and only) partition is aligned at a 4K sector boundary. i.e. 1MB out of your 2TB drive.
[2] it's not uncommon on disks with GPT partition tables (instead of the old style ms-dos partition tables) to create a tiny partition in that area with type EF02 for grub, especially if they're ever going to be used to boot grub.
craig
-- craig sanders <cas@taz.net.au>
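To check how the conversion went, a couple of read-only btrfs commands can be run at any time (assuming the filesystem is mounted at /data):

  sudo btrfs balance status /data      # shows progress while the balance is running
  sudo btrfs filesystem show /data     # should list both 2TB devices
  sudo btrfs filesystem df /data       # look for "Data, RAID1" and "Metadata, RAID1"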

On 23/2/19 5:16 pm, Craig Sanders via luv-main wrote:
On Sat, Feb 23, 2019 at 03:42:57PM +1100, Andrew Greig wrote:
Now I need to plug in my old SATA drive and copy my data to one of my data drives.
Small thing, when I was setting the partitions the system did not like /data on two separate drives so for the moment one is /data0 and the othe is /data1. If I load one of the drives with my data, as soon as RAID is setup will that data copy across to the other drive?
And the next step is RAID
Well, kind of. Not if you're talking about using mdadm for RAID-1. btrfs does its own raid. and volume management. There's no need for mdadm or lvm or anything else. If you've used btrfs for those drives then what you need to do is:
1. unmount both of them
$ sudo umount /dev/sdb1 && sudo umount /dev/sdc1 ?
2. remount ONE of them (say, data0) as /data (and edit /etc/fstab so that it gets mounted as /data on every reboot. also delete the line in fstab that mounts data1).
Here is my current fstab (please note, the partition manager took me an hour and a half to negotiate and I was unable to install swap on my SSD, so I put a swap partition on each of the two SATA drives so that they would be exactly the same size.)
andrew@andrew:~$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda2 during installation
UUID=d8e97417-7029-4f22-87ed-df529ac19614 /      btrfs defaults,subvol=@     0 1
# /data0 was on /dev/sdb2 during installation
UUID=0e8718c8-03bf-4f1a-915f-df03fe117dc0 /data0 btrfs defaults              0 2
# /data1 was on /dev/sdc2 during installation
UUID=5969127b-f5e0-40dc-98ba-ea7252c9ee41 /data1 btrfs defaults              0 2
# /efi was on /dev/sda1 during installation
UUID=b588608e-8cf7-43be-8a53-03dfde6f8f15 /efi   btrfs defaults              0 2
# /home was on /dev/sda2 during installation
UUID=d8e97417-7029-4f22-87ed-df529ac19614 /home  btrfs defaults,subvol=@home 0 2
# swap was on /dev/sdb1 during installation
UUID=ad17f0bf-978c-4905-b421-2113b7eb5ba9 none   swap  sw                    0 0
# swap was on /dev/sdc1 during installation
UUID=dba5db92-eee2-4633-ba6c-86b68bc2d957 none   swap  sw                    0 0
andrew@andrew:~$
3. destroy the partition table on the data1 drive, and recreate it (again, one big partition for the entire disk[1])
So by deleting the partition we eliminate the FS (btrfs), and in the addition step the FS is rebuilt - but specifically to control both disks?
Can /dev/sdc2 be deleted with gparted?
4. add that drive to the existing btrfs array on /data
e.g. *IF* /data1 was sdc1, you'd do something like:
sudo btrfs device add -f /dev/sdc1 /data
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /data
The earlier you do this (i.e. the less data is already on it), the faster this conversion to raid1 will be. Nearly instant if there's little or no data. Much longer if there's a lot of data that needs to be synced to the other drive.
i.e. best to do it before copying the data from your old drive.
I have about 4Gb only of data from this morning's photo shoot, I can move that back to /home/andrew easily enough. I just tried the Data drive to see how my CHOWN went. ( I cheat, I use mc)
[1] technically, you don't need a partition, btrfs can use the entire disk. but IMO a partition table is useful for clearly identifying that a disk is in use and what it is being used for. It doesn't hurt in any way to have one and the space used by the partition table is trivial - at most, a sector for the partition table itself and another 2047 sectors[2] to ensure that the first sector of the first (and only) partition is aligned at a 4K sector boundary. i.e. 1MB out of your 2TB drive.
[2] it's not uncommon on disks with GPT partition tables (instead of the old style ms-dos partition tables) to create a tiny partition in that area with type EF02 for grub, especially if they're ever going to be used to boot grub.
craig
-- craig sanders <cas@taz.net.au>

On Sat, Feb 23, 2019 at 06:32:43PM +1100, Andrew Greig wrote:
1. unmount both of them
$ sudo umount /dev/sdb1 && sudo umount /dev/sdc1 ?
or "sudo umount /data0 /data1" as long as no process has any file open under those directories (and that includes having a shell with its current working directory in either of them - you can't unmount a filesystem that is being actively used), both directories will be unmounted.
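If umount complains that the target is busy, something like this (mount points assumed from the fstab above) will show which processes still have files open there:

  sudo fuser -vm /data0
  sudo lsof /data0      # given a mount point, lists open files on that filesystem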
2. remount ONE of them (say, data0) as /data (and edit /etc/fstab so that it gets mounted as /data on every reboot. also delete the line in fstab that mounts data1).
Here is my current fstab (please note, partition manager took me an hour and a half to negotiate and I was unable to install swap on my SSD so I put a swap partition on each of the two SATA drives so that they would be exactly the same size. )
That's a shame, because swap on SSD is much faster than swap on HDD. Of course, once you upgrade your RAM it probably won't swap much.
Once you get your RAM upgrade installed, I strongly recommend that you install libvirt and virt-manager and create some VMs to play with. e.g. make a VM and give it three 5GB disk image files (i.e. similar to your current system with three drives). Then install ubuntu onto it. you can mess around with the partition manager (or even fdisk on the command line) until you understand how it works without risking anything on your real system. and try different variations on the build (e.g. install ubuntu onto one of the VM's virtual disks, boot it up, and then manually partition the other two virtual disks, format them with btrfs and add them to fstab. and experiment also with other filesystems and/or mdadm and/or lvm2 if you like).
That's one of the things VMs are good for: to experiment and test things and especially to learn. In fact, they're an excellent way to learn stuff. Things like partition management and formatting partitions are hard and a bit scary because they are things that are very rarely done by most people - only when building a new machine or adding new drives to a machine. Practice is the only thing that will make it familiar and comfortable. Do this every few months to keep the memory fresh so that you will know what to do and how to do it if/when you ever need to.
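As a rough example of creating such a practice VM from the command line (the ISO path and os-variant are placeholders - virt-manager can do the same thing through its GUI):

  sudo virt-install --name practice --memory 4096 --vcpus 2 \
      --disk size=5 --disk size=5 --disk size=5 \
      --cdrom ~/Downloads/ubuntu-18.04-desktop-amd64.iso \
      --os-variant ubuntu18.04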
# /data0 was on /dev/sdb2 during installation UUID=0e8718c8-03bf-4f1a-915f-df03fe117dc0 /data0 btrfs defaults 0 2
edit this line, change data0 to data.
# /data1 was on /dev/sdc2 during installation UUID=5969127b-f5e0-40dc-98ba-ea7252c9ee41 /data1 btrfs defaults 0 2
delete or comment out this line. then, save & exit, and run "sudo mount /data"
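After that edit, the /data line in /etc/fstab would look something like this (same UUID as your current /data0 line):

  # /data was on /dev/sdb2 during installation
  UUID=0e8718c8-03bf-4f1a-915f-df03fe117dc0 /data btrfs defaults 0 2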
# /efi was on /dev/sda1 during installation UUID=b588608e-8cf7-43be-8a53-03dfde6f8f15 /efi btrfs defaults 0 2
the EFI partition should be FAT32. UEFI can't use btrfs. I guess that means it's not being used at all - your machine is either old-fashioned BIOS or, if UEFI, it's configured for legacy (BIOS) boot.
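You can confirm which way the machine booted, and what filesystem that partition actually carries, with something like:

  [ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via legacy BIOS"
  lsblk -f /dev/sda      # shows the filesystem type on each partition of the SSD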
3. destroy the partition table on the data1 drive, and recreate it (again, one big partition for the entire disk[1])
So by deleting the partition we eliminate the FS (btrfs) and in the addition step the FS is rebuilt?? but specifically to control both disks?
No, it's just deleting and re-creating the partition. creating a partition and formatting it are two different things. A partition is just a chunk of disk space reserved for some particular use. That use can be to be formatted as one of several different filesystems (ext4, xfs, btrfs, fat32, etc etc), to be used as swap space, for an lvm physical volume (PV), or just left unused. But now that i know you've got a swap partition on there, DON'T DELETE THE ENTIRE PARTITION TABLE. Just delete /dev/sdc2. better yet, don't bother deleting it at all, this step can be skipped. You can actually skip step 3 entirely: the '-f' option used in step 4 ('btrfs device add -f ...') should force it to use /dev/sdc2 even though it is already formatted as btrfs.
Can /dev/sdc2 can be deleted with gparted?
yes.
4. add that drive to the existing btrfs array on /data
e.g. *IF* /data1 was sdc1, you'd do something like:
sudo btrfs device add -f /dev/sdc1 /data
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /data
change sdc1 here to sdc2.
The earlier you do this (i.e. the less data is already on it), the faster this conversion to raid1 will be. Nearly instant if there's little or no data. Much longer if there's a lot of data that needs to be synced to the other drive.
i.e. best to do it before copying the data from your old drive.
I have about 4Gb only of data from this morning's photo shoot, I can move that back to /home/andrew easily enough. I just tried the Data drive to see how my CHOWN went. ( I cheat, I use mc)
No need. 4GB of data will be synced in very little time. craig -- craig sanders <cas@taz.net.au>

Hi Craig, Referring to an earlier message about my data drives, do I need to CHOWN those drives to andrew:andrew and then set the permissions to rwx? I think you mentioned a symlink, would that be necessary if I have done the CHOWN? How do I set up the RAID1 on the Data0 and Data1 drives, please? I have btrfs on all drives. I am amazed at the speed of an SSD. I will pick up the RAM and a cradle for the SSD as it does not fit anywhere in my case. It is just sitting in there at present. Thanks very much Andrew

On Sat, Feb 23, 2019 at 04:26:25PM +1100, Andrew Greig wrote:
Referring to an earlier message about my data drives, do I need to CHOWN those drives to andrew:andrew and then set the permissions to rwx?
I think i said perms should be 664. that was wrong. the execute bit is needed to access a directory, so it should be 775 (rwxrwxr-x). 770 (rwxrwx---) would also work if you didn't want any other accounts on the system (other than root and andrew, and any accounts that you add to group andrew) to access it. the chown and chmod commands need to be run so that your user is able to read and write to the /data directory. Otherwise it'll be owned by root and only writable by root. NOTE: the chown and chmod need to be done while /data is mounted. This only needs to be done once, and will retain the owner & permissions metadata whenever it is remounted (e.g. on a reboot). if you do the chown & chmod while the /data fs isn't mounted, you'll only be changing the permissions of the empty mount-point directory, not of the filesystem.
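So in your case, with /data mounted, that would be something along the lines of (the username andrew:andrew is assumed from elsewhere in this thread):

  sudo chown andrew:andrew /data
  sudo chmod 775 /data
  ls -ld /data           # verify: should show drwxrwxr-x andrew andrew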
I think you mentioned a symlink, would that be necessary if I have done the CHOWN?
the symlink was for convenience only. useful but not necessary. mostly so that you can just navigate to your home dir and double-click on the symlink in any GUI file chooser dialog. or from the command line "cd ~/data".
How do I set up the RAID1 on the Data0 and Data1 drives, please?
see my previous message. you should have only a /data fs combining both the 2TB drives into a single btrfs raid1 array.
I have btrfs on all drives. I am amazed at the speed of an SSD.
Yeah, they're bloody fast, aren't they? and NVME SSDs are even faster.
I will pick up the RAM and a cradle for the SSD as it does not fit anywhere in my case. It is just sitting in there at present.
There are no moving parts in an SSD, so it's safe to leave it just hanging loose indefinitely until you get a cradle for it. I wouldn't do that for a HDD except in some sort of data-recovery emergency, but it's not a problem for an SSD. craig -- craig sanders <cas@taz.net.au>

Very happy vegemite atm, I disconnected my optical drive so I could hook up my old SATA HDD. Well, it was found by the system and automounted. I was getting ready for a mount operation. No Need. I am loading my SSD at present and then I will put a bit of data in the /Data directory to see how the balancing deal goes. When that is OK I will just dump the rest of the data. Thank you very much. Truly grateful Andrew On 23/2/19 5:31 pm, Craig Sanders via luv-main wrote:
On Sat, Feb 23, 2019 at 04:26:25PM +1100, Andrew Greig wrote:
Referring to an earlier message about my data drives, do I need to CHOWN those drives to andrew:andrew and then set the permissions to rwx?
I think i said perms should be 664. that was wrong. the execute bit is needed to access a directory, so it should be 775 (rwxrwxr-x).
770 (rwxrwx---) would also work if you didn't want any other accounts on the system (other than root and andrew, and any accounts that you add to group andrew) to access it.
the chown and chmod commands need to be run so that your user is able to read and write to the /data directory. Otherwise it'll be owned by root and only writable by root.
NOTE: the chown and chmod need to be done while /data is mounted. This only needs to be done once, and will retain the owner & permissions metadata whenever it is remounted (e.g. on a reboot).
if you do the chown & chmod while the /data fs isn't mounted, you'll only be changing the permissions of the empty mount-point directory, not of the filesystem.
I think you mentioned a symlink, would that be necessary if I have done the CHOWN?
the symlink was for convenience only. useful but not necessary. mostly so that you can just navigate to your home dir and double-click on the symlink in any GUI file chooser dialog. or from the command line "cd ~/data".
How do I set up the RAID1 on the Data0 and Data1 drives, please?
see my previous message. you should have only a /data fs combining both the 2TB drives into a single btrfs raid1 array.
I have btrfs on all drives. I am amazed at the speed of an SSD.
Yeah, they're bloody fast, aren't they? and NVME SSDs are even faster.
I will pick up the RAM and a cradle for the SSD as it does not fit anywhere in my case. It is just sitting in there at present.
There are no moving parts in an SSD, so it's safe to leave it just hanging loose indefinitely until you get a cradle for it. I wouldn't do that for a HDD except in some sort of data-recovery emergency, but it's not a problem for an SSD.
craig
-- craig sanders <cas@taz.net.au>

On Fri, Feb 22, 2019 at 05:33:53PM +1100, Andrew Greig wrote:
I have purchased a new 1Tb SSD and I have two unused SATA 2Tb drives, and currently 8Gb RAM (max capacity 32Gb DDR3 1866) I will settle for 24Gb soon.
24GB is nice. With that and the SSD, you should see an enormous boost in performance. No more twiddling your thumbs waiting for it to boot. of course, not long after you get used to the new speed, it'll start to seem unbearably slow :)
MB = ASRock 890 GM Pro3 5 sata slots
I guess that means you have a Phenom II CPU or maybe one of the early FX series chips. Nice CPUs for their day, and still pretty good even today. most of my machines have these. If you have an FX CPU, they're happiest with DDR3-1866 RAM. DDR3 is slowly disappearing from the market so you have to get what's available - other speeds will work if you can't get 1866, but 1866 is optimal.
BTW, if you're not sure exactly what CPU you have, run 'lscpu | grep Model.name'. You'll see output like this:
  # lscpu | grep Model.name
  Model name: AMD FX(tm)-8320 Eight-Core Processor
or
  # lscpu | grep Model.name
  Model name: AMD Phenom(tm) II X6 1090T Processor
Question - Should I choose Ubuntu 18.04 LTS or install 18.10 which will need an upgrade at the end of July?
It really depends on whether you want to upgrade every 6 to 12 months (18.10), or every two years (LTS). Stuff like gimp and darkroom tend to be fairly fast moving, so upgrading them every six months or so is probably a good idea. I'm generally in favour of keeping systems upgraded regularly. IMO two years is too long between upgrades. Free Software development moves way too fast for that.
craig
PS: what kind of GPU do you have? if you do a lot of graphical work, it may be worthwhile comparing some of the current low-end to mid-range models to your current card. A modern $200-$300 GPU should be 2 to 3 times faster than, e.g., a high-end GPU from 5 years ago, and use significantly less power. but this is definitely something that needs significant research before buying anything.
googling "old model name vs new model name" gets good results. e.g. "gtx-560 vs gtx-1050" leads to several review sites which say that the 1050 (~ $170) is roughly 87% faster (1.87x the speed) than the 560, and uses only 75 Watts rather than 150 W. The next model up, a "1050 Ti", is a bit over twice as fast and costs about $200, also using 75W. and the GTX-1060 3GB model is about 3.65 times as fast as a GTX-560 and costs about $250 (using 120 W).
BTW, "2-3 times as fast as what I currently have for $200-$300" is generally what I wait for when upgrading my GPU. Unless noise and power usage is a problem, it's not really worth the cost of upgrading for anything less. Sometimes, though, new features of newer cards (like better video decoding or newer opengl/vulkan) make it worth upgrading earlier.
There are various AMD Radeon models of similar performance and price. Unless you're willing to use the proprietary nvidia driver, you're better off with an AMD GPU. their open source driver is much better than the open source nouveau driver for nvidia. I mostly use nvidia cards with the proprietary nvidia driver (the AMD fglrx driver always sucked and the open source drivers for both amd and nvidia used to suck. now they're kind of decent, especially the AMD driver, unless you do a lot of 3D gaming at 1440p or better with all the pretty turned up to Ultra)
-- craig sanders <cas@taz.net.au>

In regard to the hardware advice. The LUV hardware library often has DDR3 RAM for free, but 4G modules don't hang around long. If anyone is upgrading from a DDR3 system to DDR4 please donate your old RAM as lots of people have a use for this. Also we need more SATA disks, if anyone has disks of 300G+ that they don't need then please donate them. -- Sent from my Huawei Mate 9 with K-9 Mail.

On Fri, Feb 22, 2019 at 10:22:38AM +1100, Russell Coker wrote:
In regard to the hardware advice. The LUV hardware library often has DDR3 RAM for free, but 4G modules don't hang around long. If anyone is upgrading from a DDR3 system to DDR4 please donate your old RAM as lots of people have a use for this.
When I get around to upgrading my systems to use DDR-4, I'll have a bunch of 8GB DDR-3 sticks to donate (with speeds ranging from DDR3-1333 to DDR3-1866). That won't be for some time, though. my current plan is to merge my mythtv box (FX-8150, 16GB RAM) and my file/dns/web/kvm/everything-server (phenom ii 1090T, 32GB RAM) into a single threadripper 2920x or 2950x machine with at least 64GB (not because i need that many CPU cores, but because I really need the PCI-e lanes...Ryzen 5 & 7 only have 20 lanes, which is not enough for GPU+DVB cards+SAS cards. Threadripper has 64 lanes). I can't afford to do that any time soon, though. Even if i could find somewhere that had the last-gen 8-core 1900x TR4 in stock (around $450, in theory, if available, vs the ~ $1000 for 2920x or ~ $1400 for 2950x), 64GB of new DDR-4 RAM would cost around $800 and a good X399 motherboard to suit would cost around $400, for a minimum build cost of $1650 or so.
Also we need more SATA disks, if anyone has disks of 300G+ that they don't need then please donate them.
Don't have any spare old drives, though. I use them until they die. craig -- craig sanders <cas@taz.net.au>

On Sat, Feb 16, 2019 at 01:02:44PM +1100, Andrew Greig wrote:
I shut it down and removed the 2 SATA cables from the MB and booted up - successfully.
I didn't notice this before. You can edit /etc/fstab to change to UUIDs or LABELs at this point. Then shutdown, add the new drives, and turn it back on. craig -- craig sanders <cas@taz.net.au>
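A minimal sketch of that edit (the UUID shown is illustrative only - use whatever blkid prints for your drives):

  sudo blkid /dev/sdb1 /dev/sdc1
  # then in /etc/fstab, refer to each drive by UUID rather than device name, e.g.:
  # UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  btrfs  defaults  0  2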
participants (8):
- Andrew Greig
- Andrew Voumard
- bnis@fastmail.fm
- Craig Sanders
- Mark Trickett
- Morrie Wyatt
- Nic Baxter
- Russell Coker