
On Sun, May 20, 2018 at 10:40:23AM +1000, Andrew Greig wrote:
> I have a more fundamental issue now, the installer has named my former
> sda (in a one hdd system) to sdc.
Disk device names ARE NOT GUARANTEED TO REMAIN THE SAME ACROSS REBOOTS.

Lots of different things can affect this - including adding drives, removing drives, adding or removing other hardware, minor variations in the timing of exactly when drives are detected by the BIOS or kernel (e.g. sometimes a disk might take a few milliseconds longer to spin up on a cold boot), upgrading the kernel, changes in the order that kernel modules are loaded, and more.

This is normal linux behaviour and is precisely why it has been recommended for years now to NEVER USE THE DEVICE NAMES (/dev/sda, /dev/sdb, /dev/sdc, etc) directly. ALWAYS use disk/partition/filesystem labels or UUIDs.

UUIDs are unique but ugly and difficult for a human to distinguish - it's hard to remember what dbd8bc90-5be5-11e8-87db-0023cdb023b9 or dd8fdf5a-5be5-11e8-a5fb-0023cdb023b9 are supposed to be. Fortunately, you can assign labels to partitions or filesystems when you create them (or add one later), and these are much easier to read and use.

Even if every time you've booted your machine you've always seen the drives having the same device names, you still can't rely on them remaining the same in future. The very next boot could see them having different device names.

The "fix" for this is to stop thinking of them as reliable, static names that will never change. They're not, and never will be. Think of them as temporary device names that the kernel assigned to the drives for this particular boot session - it's only random chance that they seem to be relatively consistent for months or years on end.
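For example, labelling an existing ext4 filesystem and then mounting it by label looks something like this (an untested sketch - the device name /dev/sdb1 and the label "photos" are placeholders, substitute your own):

```shell
# Assign a human-readable label to an existing ext4 filesystem.
# NOTE: /dev/sdb1 and "photos" are example values only.
e2label /dev/sdb1 photos

# Verify - blkid prints the LABEL and UUID of the filesystem:
blkid /dev/sdb1

# Then refer to it by label (or UUID) in /etc/fstab instead of by
# device name, e.g.:
#   LABEL=photos  /data/photos  ext4  defaults  0  2
# or:
#   UUID=dbd8bc90-5be5-11e8-87db-0023cdb023b9  /data/photos  ext4  defaults  0  2
```

(xfs_admin -L does the same job for XFS, and fatlabel for FAT filesystems.)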
> My sda should be my 1Tb former OpenSuse, and the two new drives should
> be sdb and sdc.
Nope. Your /dev/sda is whatever the kernel says it is when it boots up. You can't expect it to be any specific drive because it can (and will) change on any reboot.

Use the disk brand + model + serial numbers to identify the drives. Your 1TB drive should be easy to spot. The two 2TB drives are probably only distinguishable by their serial numbers.

BTW, modern linux systems populate a directory called /dev/disk/by-id/ with symlinks to the actual device names. These symlinks are typically named by the drive's interface type (e.g. "ata" or "scsi" or "nvme") and the brand/model/serial. e.g. this is a pair of Crucial MX300 SSDs:

  # ls -lF /dev/disk/by-id/ata* | grep -v -- part
  lrwxrwxrwx 1 root root 9 May 19 03:10 /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_752A13CDF31A -> ../../sda
  lrwxrwxrwx 1 root root 9 May 19 03:10 /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_752A13CDFB42 -> ../../sdb

Those symlinks will always be the same; they uniquely and reliably identify the drives. The /dev/sda and /dev/sdb devices that they point to may change on any reboot and cannot be relied upon in any way.
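A quick way to match physical drives to their serial numbers (a sketch - output columns will vary with your util-linux version and hardware):

```shell
# Show each whole disk (-d) with its model, serial and size, so you
# can tell the 1TB drive apart from the two 2TB drives:
lsblk -d -o NAME,MODEL,SERIAL,SIZE

# The same information is encoded in the by-id symlink names:
ls -lF /dev/disk/by-id/ | grep -v part
```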
> Is this just a case of correcting where it is plugged in?
>
> I want my sda to be my OS and general storage, and sdb and sdc in RAID
> to store data from my photography.
>
> So install options delete the whole sda and accept LVM then partition
> 1Gb for GRUB
>
> at what point will I fire up mdadm?
If you're using LVM with RAID-1, you don't need mdadm - LVM can do RAID-1 itself. IIRC, it can't do RAID-5 or RAID-6, so you'd need mdadm for those.

Personally, I'd use ZFS instead of LVM, and mdadm for the /boot partition. Ubuntu should have a ZFS option in their installer; they've supported ZFS installs for a few years now.

craig

--
craig sanders <cas@taz.net.au>
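Creating a RAID-1 logical volume directly in LVM looks roughly like this (an untested sketch - the by-id names, volume group name and sizes are placeholders for your two 2TB drives, and these commands will DESTROY any data on them):

```shell
# Placeholder by-id names - replace with the symlinks for YOUR drives.
DISK1=/dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL1
DISK2=/dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL2

# Make both drives LVM physical volumes and put them in one VG:
pvcreate "$DISK1" "$DISK2"
vgcreate photos_vg "$DISK1" "$DISK2"

# --type raid1 with one mirror (-m 1) keeps two copies of the data,
# one on each drive:
lvcreate --type raid1 -m 1 -L 1.8T -n photos_lv photos_vg

# Filesystem with a label, so it can be mounted by LABEL= in fstab:
mkfs.ext4 -L photos /dev/photos_vg/photos_lv
```

Note that the PVs are addressed via /dev/disk/by-id/ paths rather than /dev/sdX, for exactly the reasons above.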