
Firstly, can you please configure your Thunderbird mail client NOT to
send HTML mail? Or at least to send both HTML and plain text? HTML mail
really screws up the quoting, making it very hard to tell what's quoted
and what's new.

Also, don't top-post. Top-posting is evil. And please trim your quotes
to the bare minimum required to provide context for your response -
no-one wants to read the same quoted messages over and over again just
because you couldn't be bothered editing your messages properly. It
tells the reader "I don't care about wasting YOUR time, as long as I
save myself a few precious seconds".

On Sun, Feb 17, 2019 at 02:08:13AM +1100, Andrew Greig via luv-main wrote:
> This is my /etc/fstab:
>
> andrew@andrew-desktop:~$ sudo cat /etc/fstab
You don't need sudo to read /etc/fstab, only to edit it. It's rw for
root, ro for everyone else.
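A quick listing shows the permissions (the size and timestamp here are
made up - yours will differ, but the rw-r--r-- root:root part is the
default):

    $ ls -l /etc/fstab
    -rw-r--r-- 1 root root 645 Feb 10 12:34 /etc/fstab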
> # /etc/fstab: static file system information.
> #
> # Use 'blkid' to print the universally unique identifier for a
> # device; this may be used with UUID= as a more robust way to name devices
> # that works even if disks are added and removed. See fstab(5).
> #
> # <file system>                <mount point>  <type>  <options>          <dump>  <pass>
> /dev/mapper/ubuntu--vg-root    /              ext4    errors=remount-ro  0       1
> /dev/mapper/ubuntu--vg-swap_1  none           swap    sw                 0       0
> andrew@andrew-desktop:~$ blkid
> /dev/sda1: UUID="sI0LJX-JSme-W2Yt-rFiZ-bQcV-lwFN-tSetH5" TYPE="LVM2_member" PARTUUID="92e664e1-01"
> /dev/mapper/ubuntu--vg-root: UUID="b0738928-9c7a-4127-9f79-99f61a77f515" TYPE="ext4"
If you're running LVM then you don't need to (and shouldn't, see below)
use UUIDs to mount your filesystems. The device mapper entries provide
the same kind of consistency and uniqueness as a LABEL.

You shouldn't use UUIDs when mounting LVM volumes because any snapshot
of an fs will have the same UUID as the original unless you change the
snapshot's UUID with something like 'tune2fs -U random' (ext4) or
'xfs_admin -U generate' (xfs).
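For example, if you ever do snapshot an LVM-backed ext4 filesystem and
need to mount both at once, something like this would reset the
snapshot's UUID (the lv names and snapshot size here are just examples,
not from your system):

    # snapshot the root lv
    lvcreate -s -n root-snap -L 10G /dev/ubuntu-vg/root
    # the snapshot needs a clean journal before tune2fs will touch it
    e2fsck -f /dev/ubuntu-vg/root-snap
    # give the snapshot its own random UUID
    tune2fs -U random /dev/ubuntu-vg/root-snap

For xfs the equivalent last step would be
'xfs_admin -U generate /dev/ubuntu-vg/root-snap'.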
> After hot plugging the two drives (I chose to try this to see if they
> would be picked up and configured in the same way as a USB key is
> detected), it seems that sdb and sdc have been detected.
> dmesg gives this:
> [  279.911371] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
> [  279.912343] ata5.00: ATA-9: ST2000DM006-2DM164, CC26, max UDMA/133
> [  279.912349] ata5.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
> [  279.913799] scsi 4:0:0:0: Direct-Access     ATA      ST2000DM006-2DM1 CC26 PQ: 0 ANSI: 5
> ...
> [  331.750805] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
> [  331.751777] ata4.00: ATA-9: ST2000DM006-2DM164, CC26, max UDMA/133
> [  331.751784] ata4.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
> [  331.753212] scsi 3:0:0:0: Direct-Access     ATA      ST2000DM006-2DM1 CC26 PQ: 0 ANSI: 5
> Since the drives have not been partitioned or formatted, should I just
> download the latest Ubuntu and install it as a server, with the two
> drives set up in a RAID config?
> Or could I just run gparted and partition and format those disks alone?
I don't see any reason why you'd want to re-install the OS just to add
some drives.

How you partition and format them depends on what you want to do with
them. Your two main options are to:

1. Add them as new physical volumes to your existing LVM volume group.
   This would allow you to expand any existing filesystems and/or
   create new logical volumes to format and mount (e.g. you could
   create a new lv, format it with xfs or ext4, and mount it as /media
   to store video & music files).

2. Partition, format, and mount them as completely separate
   filesystem(s), e.g. if you just want somewhere to store video or
   music files. This could be done with any filesystem, with or without
   RAID (either via mdadm, or by creating a new LVM volume group, or
   even with btrfs or zfs).

There's a rough sketch of both options below.

I'd guess that the only reason you're using LVM is because it was the
default option when you first installed Ubuntu. It doesn't seem like
you're familiar enough with it to have chosen it deliberately. IMO,
unless you know LVM well, you're generally better off with btrfs -
like ZFS, it's a filesystem with the features of software RAID and
volume management built in, and it's much easier to use than juggling
mdadm + lvm2 + filesystem utilities separately.

BTW, you may be tempted to use some variant of RAID-0 (linear append
or striped) to combine the two 2TB drives into one 4TB filesystem.
Don't do that unless you're willing to risk a single drive failure
losing *everything* stored on that 4TB. RAID-0 is NOT safe to use for
data of any importance. The only reason to use it is if you need a
large amount of fast storage for temporary files... and an SSD will be
much faster than that anyway.

(NOTE: data stored on striped RAID-0 is effectively unrecoverable after
a single drive failure. With linear append, recovering most of the data
on the non-failed drive is a PITA but possible. Striped gives a
performance boost because reads and writes are spread across both
drives, so it's roughly twice as fast as a single drive. Linear does
not, as data is written to one drive and only spills onto the second
when the first fills up.)

So either use RAID-1 (giving you a total of 2TB of usable space, with
everything mirrored on both drives for redundancy/safety) or two
separate filesystems of 2TB each.
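As a very rough sketch (assuming the new drives really are /dev/sdb and
/dev/sdc - check with lsblk first; the lv name and size are just
examples), option 1 would look something like:

    # add both drives to the existing volume group
    pvcreate /dev/sdb /dev/sdc
    vgextend ubuntu-vg /dev/sdb /dev/sdc
    # create and format a new lv for media files
    lvcreate -n media -L 1.8T ubuntu-vg
    mkfs.ext4 /dev/ubuntu-vg/media
    mkdir -p /media
    mount /dev/ubuntu-vg/media /media

and a btrfs RAID-1 version of option 2 something like:

    # mirror both data (-d) and metadata (-m) across the two drives
    mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
    mkdir -p /media
    mount /dev/sdb /media

Either way you'd also add an entry to /etc/fstab (by device-mapper name
or LABEL, as discussed above) so it gets mounted at boot.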
> I am puzzled by the almost empty fstab - when I was running openSUSE
> the fstab was quite large.
It's not something to worry about. The size of /etc/fstab depends on
how many filesystems and swap devices need to be auto-mounted at boot.

Your previous suse system was probably partitioned to have separate
filesystems for /, /home, /usr, /var, /tmp and/or other common
mountpoints. This was common practice back when drives were small
(filesystems were often actually on separate drives, not just
partitions), but is uncommon and not recommended these days. The
hassles involved in having multiple small partitions (largely the risk
of running out of space on one partition while still having plenty
free on others) tend to greatly outweigh the minor benefits.

craig

--
craig sanders <cas@taz.net.au>