
Hi All,

So the time has come: I have backed up all my data, cleaned out the /home directory, and in the morning I should expect that all of my data in Dropbox has finished syncing. I have downloaded and tested Ubuntu 18.04 LTS and burned it to DVD.

In the morning I will install the 2 new 2TB HDDs and load the DVD to launch myself into unfamiliar territory. When I get to the partitioning stage of the process I will have 1 x 1TB HDD for the system and /home, and the 2 x 2TB drives for the RAID.

What should I do about partitioning? Should I use hard partitions, say 20GB for the root partition? Should I use a /boot partition or just use the MBR? Is mdadm a part of this process or does it get involved later? ZFS for the pair of drives?

I am hoping that this will go very smoothly and quickly, leaving me the rest of the day for populating the RAID disks with data.

One more thing: I have found OpenSUSE a really poor distro for video. My early days (Mandrake and Mandriva) really spoiled me, because the PowerPack release ensured that everything was working. Linux has really come a long way in AV, so for SUSE to still have issues with codecs is just a little tired. Do I have to jump through hoops in Ubuntu to watch an MP4 or a Matroska video?

Many thanks
Andrew Greig

On 20/05/18 00:04, Andrew Greig via luv-main wrote:
What should I do about partitioning? Should I use hard partitions say, 20Gb for the /root, should I use a /boot partition or just use the MBR?
You don't really need a separate /boot partition any more unless you have an older BIOS that can only boot from the first part of the drive.
Is mdadm a part of this process or does it get involved later ? ZFS for the pair of drives?
If you use ZFS or btrfs, you probably don't want to use mdadm (software RAID) as well. Better to take advantage of the filesystem-level mirroring. If you use another filesystem (such as XFS, which is what I use on my 4T RAID-1 mirror on my desktop workstation) you will need to install and set up mdadm first, yes.
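A minimal sketch of that mdadm-then-XFS approach, assuming one blank partition on each of the two data drives; the device names and mount point here are placeholders, not taken from this thread:

  # create a two-disk RAID-1 array from one partition on each drive
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  # put XFS directly on the array and mount it
  mkfs.xfs /dev/md0
  mkdir -p /data
  mount /dev/md0 /data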
Do I have to jump through hoops in Ubuntu to watch an MP4 or a Matroska video?
Not if you select the "use third-party proprietary drivers and codecs" checkbox during the install - note that it's off by default, though. Or you can install VLC, which should come with all the codecs you need. Hope that helps, Andrew

Thanks Andrew,

Drives are in and Ubuntu 18.04 is installing. I am offered an LVM option - will that mess with RAID? One other thing: will choosing btrfs or ZFS just utilise the matched drives as a RAID pair? If I go XFS as my file system for all drives, will I need the install to complete and then run mdadm to set up the RAID? I am so glad I am fresh for this.

Regards
Andrew Greig

On 20/05/18 00:21, Andrew Pam wrote:
On 20/05/18 00:04, Andrew Greig via luv-main wrote:
What should I do about partitioning? Should I use hard partitions say, 20Gb for the /root, should I use a /boot partition or just use the MBR? You don't really need a separate /boot partition any more unless you have an older BIOS that can only boot from the first part of the drive.
Is mdadm a part of this process or does it get involved later ? ZFS for the pair of drives? If you use ZFS or btrfs, you probably don't want to use mdadm (software RAID) as well. Better to take advantage of the filesystem-level mirroring. If you use another filesystem (such as XFS, which is what I use on my 4T RAID-1 mirror on my desktop workstation) you will need to install and set up mdadm first, yes.
Do I have to jump through hoops in Ubuntu to watch an MP4 or a Matroska video? Not if you select the "use third-party proprietary drivers and codecs" checkbox during the install - note that it's off by default, though. Or you can install VLC, which should come with all the codecs you need.
Hope that helps, Andrew

Hi,

I suggest the following.

1. Do not use ZFS unless you have ECC RAM.
2. btrfs has real issues in a number of areas, so unless you are very experienced I would not use it.

The Ubuntu installer supports creating both mdadm RAID and LVM, so maybe the following (a rough sketch of the corresponding commands follows the quoted message below):

1. Partition both drives with 2 partitions:
   1.1 BOOT at about 1G (I know people say it's not really needed now, but I still feel it's easier to fix things if you have issues with one)
   1.2 All the rest of the drive
2. Create RAID-1 arrays from the matching partitions.
3. Set the partition type on the BOOT and data partitions.
4. Create a Volume Group on the largest RAID partition.
5. Create Logical Volumes for ROOT, SWAP and, if you really want, HOME.
6. Set the filesystem type on the ROOT, SWAP and HOME LVs.

So why do it this way? Well, LVM gives a lot of options which are not available if it's not there. This site has only a very simple example but give it a read: http://tldp.org/HOWTO/LVM-HOWTO/benefitsoflvmsmall.html

My 2 cents worth.

Mike

On 20/5/18 9:35 am, Andrew Greig wrote:
Thanks Andrew,
Drives are in and Ubuntu 18.04 is installing, I am offered an LVM option will that mess with RAID?
One other thing, will choosing btrfs or ZFS just utilise the matched drives as a RAID pair?
If I go XFS, as my file system for all drives, then will I need the install to complete, then run mdadm to set the RAID?
I am so glad I am fresh for this.
Regards
Andrew Greig
On 20/05/18 00:21, Andrew Pam wrote:
On 20/05/18 00:04, Andrew Greig via luv-main wrote:
What should I do about partitioning? Should I use hard partitions say, 20Gb for the /root, should I use a /boot partition or just use the MBR? You don't really need a separate /boot partition any more unless you have an older BIOS that can only boot from the first part of the drive. Is mdadm a part of this process or does it get involved later ? ZFS for the pair of drives? If you use ZFS or btrfs, you probably don't want to use mdadm (software RAID) as well. Better to take advantage of the filesystem-level mirroring. If you use another filesystem (such as XFS, which is what I use on my 4T RAID-1 mirror on my desktop workstation) you will need to install and set up mdadm first, yes.
Do I have to jump through hoops in Ubuntu to watch an MP4 or a Matroska video? Not if you select the "use third-party proprietary drivers and codecs" checkbox during the install - note that it's off by default, though. Or you can install VLC, which should come with all the codecs you need.
Hope that helps, Andrew
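A rough sketch of Mike's sequence above, assuming each 2TB drive gets a small first partition and a large second partition; the device names, VG/LV names and sizes are illustrative only:

  # mirror the small partitions (for /boot) and the large ones (for everything else)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
  # LVM on top of the large mirror
  pvcreate /dev/md1
  vgcreate vg0 /dev/md1
  lvcreate -L 20G -n root vg0
  lvcreate -L 4G -n swap vg0
  lvcreate -l 100%FREE -n home vg0
  # format the logical volumes
  mkfs.ext4 /dev/vg0/root
  mkswap /dev/vg0/swap
  mkfs.ext4 /dev/vg0/home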

Thanks Mike,

I have a more fundamental issue now: the installer has renamed my former sda (in a one-HDD system) to sdc. My sda should be my 1TB former OpenSUSE drive, and the two new drives should be sdb and sdc. Is this just a case of correcting where it is plugged in? I want my sda to be my OS and general storage, and sdb and sdc in RAID to store data from my photography.

So the install options are: delete the whole sda, accept LVM, then partition 1GB for GRUB. At what point will I fire up mdadm?

Cheers
Andrew

On 20/05/18 10:27, Mike O'Connor wrote:
Hi
I suggest the following. 1. Do not use ZFS unless you have ECC RAM 2. btrfs has real issues in a number of areas so unless you are very experienced I would not use it.
The Ubuntu installer supports creating both mdadm raid and LVM so maybe the following. 1. Partition both drives with 2 partitions 1.1 BOOT at about 1G (I know people say its not really needed now but I still feel its easier to fix things if you have issues with one) 1.2 All the rest of the drive 2. Create Raids of the partitions as raid 1 3. Set the partition type on the BOOT and directory 4. Create Volume Group on the largest partition 5. Create Logical Volumes for ROOT, SWAP and if you really want HOME 6. Set the partition type on the LV ROOT, SWAP and HOME
So why do it this way? Well, LVM gives a lot of options which are not available if it's not there. This site has only a very simple example but give it a read: http://tldp.org/HOWTO/LVM-HOWTO/benefitsoflvmsmall.html
My 2 cents worth.
Mike
On 20/5/18 9:35 am, Andrew Greig wrote:
Thanks Andrew,
Drives are in and Ubuntu 18.04 is installing, I am offered an LVM option will that mess with RAID?
One other thing, will choosing btrfs or ZFS just utilise the matched drives as a RAID pair?
If I go XFS, as my file system for all drives, then will I need the install to complete, then run mdadm to set the RAID?
I am so glad I am fresh for this.
Regards
Andrew Greig
On 20/05/18 00:21, Andrew Pam wrote:
On 20/05/18 00:04, Andrew Greig via luv-main wrote:
What should I do about partitioning? Should I use hard partitions say, 20Gb for the /root, should I use a /boot partition or just use the MBR? You don't really need a separate /boot partition any more unless you have an older BIOS that can only boot from the first part of the drive.
Is mdadm a part of this process or does it get involved later ? ZFS for the pair of drives? If you use ZFS or btrfs, you probably don't want to use mdadm (software RAID) as well. Better to take advantage of the filesystem-level mirroring. If you use another filesystem (such as XFS, which is what I use on my 4T RAID-1 mirror on my desktop workstation) you will need to install and set up mdadm first, yes.
Do I have to jump through hoops in Ubuntu to watch an MP4 or a Matroska video? Not if you select the "use third-party proprietary drivers and codecs" checkbox during the install - note that it's off by default, though. Or you can install VLC, which should come with all the codecs you need.
Hope that helps, Andrew

On 20/05/18 10:40, Andrew Greig wrote:
I have a more fundamental issue now, the installer has named my former sda (in a one hdd system) to sdc.
My sda should be my 1Tb former OpenSuse , and the two new drives should be sdb and sdc.
Is this just a case of correcting where it is plugged in?
No. The drive names are not stable, they can change from version to version as a result of kernel code changes. Do not use /dev/sd* in your /etc/fstab, for example. Always use drive UUIDs instead. That also ensures that a RAID array will assemble correctly regardless of which SATA connectors you use.
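A small illustration of what that looks like in practice; the UUID and mount point below are just examples, not values from this thread:

  # find the filesystem's UUID
  blkid /dev/md0
  # then mount it by UUID rather than by /dev/sdX name, e.g. in /etc/fstab:
  # UUID=dbd8bc90-5be5-11e8-87db-0023cdb023b9  /data  xfs  defaults  0  2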
at what point will I fire up mdadm?
Before you set up LVM. Hope that helps, Andrew

On Sun, May 20, 2018 at 10:40:23AM +1000, Andrew Greig wrote:
I have a more fundamental issue now, the installer has named my former sda (in a one hdd system) to sdc.
Disk device names ARE NOT GUARANTEED TO REMAIN THE SAME ACROSS REBOOTS.

Lots of different things can affect this - including adding drives, removing drives, adding or removing other hardware, minor variations in the timing of exactly when drives are detected by the BIOS or kernel (e.g. sometimes a disk might take a few milliseconds longer to spin up on a cold boot), upgrading the kernel, changes in the order that kernel modules are loaded, and more.

This is normal Linux behaviour and is precisely why it has been recommended for years now to NEVER USE THE DEVICE NAMES (/dev/sda, /dev/sdb, /dev/sdc, etc) directly. ALWAYS use disk/partition/filesystem labels or UUIDs.

UUIDs are unique but ugly and difficult for a human to distinguish - it's hard to remember what dbd8bc90-5be5-11e8-87db-0023cdb023b9 or dd8fdf5a-5be5-11e8-a5fb-0023cdb023b9 are supposed to be. Fortunately, you can assign labels to partitions or filesystems when you create them (or add one later), and these are much easier to read and use.

Even if every time you've booted your machine you've always seen the drives having the same device names, you still can't rely on them remaining the same in future. The very next boot could see them having different device names.

The "fix" for this is to stop thinking of them as reliable, static names that will never change. They're not, and never will be. Think of them as temporary device names that the kernel assigned to the drives for this particular boot session, and that it's only random chance that they seem to be relatively consistent for months or years on end.
My sda should be my 1Tb former OpenSuse, and the two new drives should be sdb and sdc.
Nope. Your /dev/sda is whatever the kernel says it is when it boots up. You can't expect it to be any specific drive because it can (and will) change on any reboot.

Use the disk brand + model + serial numbers to identify the drives. Your 1TB drive should be easy to spot. The two 2TB drives are probably only distinguished by their serial numbers.

BTW, modern Linux systems populate a directory called /dev/disk/by-id/ with symlinks to the actual device names. These symlinks are typically named by the drive's interface type (e.g. "ata" or "scsi" or "nvme") and the brand/model/serial. e.g. this is a pair of Crucial MX300 SSDs:

  # ls -lF /dev/disk/by-id/ata* | grep -v -- part
  lrwxrwxrwx 1 root root 9 May 19 03:10 /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_752A13CDF31A -> ../../sda
  lrwxrwxrwx 1 root root 9 May 19 03:10 /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_752A13CDFB42 -> ../../sdb

Those symlinks will always be the same; they uniquely and reliably identify the drives. The /dev/sda and /dev/sdb devices that they point to may change on any reboot and can not be relied upon in any way.
Is this just a case of correcting where it is plugged in?
I want my sda to be my OS and general storage, and sdb and sdc in RAID to store data from my photography.
So install options delete the whole sda and accept LVM then partition 1Gb for GRUB
at what point will I fire up mdadm?
If you're using LVM with RAID-1, you don't need mdadm - LVM can do RAID-1 itself. IIRC, it can't do RAID-5 or RAID-6, so you'd need mdadm for those.

Personally, I'd use ZFS instead of LVM, and mdadm for the /boot partition. Ubuntu should have a ZFS option in their installer; they've supported ZFS installs for a few years now.

craig -- craig sanders <cas@taz.net.au>
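For what it's worth, a sketch of a mirrored volume done purely in LVM (no mdadm); the partition, VG and LV names and the size are made up for illustration:

  # one physical volume per disk partition, then a volume group across both
  pvcreate /dev/sdb1 /dev/sdc1
  vgcreate photos /dev/sdb1 /dev/sdc1
  # a RAID-1 (mirrored) logical volume, then a filesystem on top
  lvcreate --type raid1 -m1 -L 1.8T -n data photos
  mkfs.xfs /dev/photos/data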

Quoting Craig Sanders (cas@taz.net.au):
Fortunately, you can assign labels to partitions or filesystems when you create them (or add one later), and these are much easier to read and use.
Care to learn how to make a Linux system go belly-up in a way field-proven to puzzle Linux experts for days? Simple: accidentally connect two disks with the same assigned disk label, and then attempt to boot it.

This was posted a decade or so ago to the Silicon Valley Linux User Group by one of the leading experts, who'd just solved the problem after being stumped by it for days. (I can't remember the exact signs of distress the system gave, if any, before falling over.)

I didn't try to replicate the problem. I merely made a mental note that, IMO, this was an adequate reason to eschew disk labels completely: one fewer bizarre failure mode to watch out for.

I always imagined that someone was handed an account of that shambles and told 'Please design for the Linux community a disc identifier system that avoids all such failure modes through the expedient of using an absolutely guaranteed, totally unique identifier. Feel free to sacrifice all other objectives such as ergonomics and human-compatibility. Just be absolutely certain the identifiers are unique, at any cost.' Et voilà! UUIDs.
changes in the order that kernel modules are loaded
FWIW, I generally compile my own kernel, and critical drivers get compiled inline. Never seen the above problem, and my kernel practices may help somewhat.
minor variations in the timing of exactly when drives are detected by the BIOS or kernel (e.g. sometimes a disk might take a few milliseconds longer to spin up on a cold boot)
Still have never seen that on my systems. Still waiting, since 1992. It may help that I favour relatively simple and homogeneous hardware.
BTW, modern linux systems populate a directory called /dev/disk/by-id/ with symlinks to the actual device names.
I look forward eventually to losing udev on server systems and migrating to mdev, which among other things will lose the above. ;->
If you're using LVM with RAID-1, you don't need mdadm - LVM can do RAID-1 itself.
But mdadm / the Linux md driver are superb at doing Linux software RAID, so IMO should be favoured as best of breed.

On Sunday, 20 May 2018 4:09:12 PM AEST Rick Moen via luv-main wrote:
This was posted a decade or so ago to the Silicon Valley Linux User Group by one of the leading experts who'd just solved the problem after being stumped by it for days. (I can't remember the exact signs of distress the system gave, if any, before falling over.)
I didn't try to replicate the problem. I merely made a mental note that, IMO, this was an adequate reason to eschew disk labels completely: one fewer bizarre failure mode to watch out for.
I always imagined that someone was handed an account of that shambles and told 'Please design for the Linux community a disc identifier system that avoids all such failure modes through the expedient of using an absolutely guaranteed, totally unique identifier. Feel free to sacrifice all other objectives such as ergonomics and human-compatibility. Just be absolutely certain the identifiers are unique, at any cost.' Et voilà! UUIDs.
You have to be careful when using dd on disk images, as matching UUIDs cause problems.

Also, for RAID configurations using the DOS partition table you can use "dd if=sda of=sdb bs=1024k count=10" to copy the partition table and boot loader configuration to a new disk. But that causes big problems with GPT.

As an aside, does anyone know of an fdisk-type program that makes it easy to copy the partition layout from one disk to another?

-- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/
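Spelled out with full device paths (placeholders - triple-check source and destination before running anything like this), the dd example above is:

  # copy the first 10 MiB of the source disk (DOS/MBR partition table plus the
  # area GRUB embeds itself in) onto the new disk - MBR only, not for GPT
  dd if=/dev/sda of=/dev/sdb bs=1024k count=10

As for copying just the partition layout: sfdisk can dump and restore an MBR partition table (sfdisk -d /dev/sdX | sfdisk /dev/sdY), and sgdisk has equivalent backup/restore options for GPT; mentioned here only as a pointer, not something tested in this thread.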

On 21/05/18 00:24, Russell Coker via luv-main wrote:
You have to be careful when using dd on disk images as matching UUIDs cause problems.
Also for RAID configurations using the DOS partition table you can use "dd if=sda of=sdb bs=1024k count=10" to copy the partition table and boot loader configuration to a new disk. But that causes big problems with GPT.
There are methods to change the UUID of volumes; create new UUIDs with the uuid tool (or some other equivalent).

  tune2fs /dev/{device} -U {uuid}

  cryptsetup --uuid {uuid} /dev/{device} - be careful, you may need to adjust the /etc/crypttab file and probably recreate the initrd as well.

I've not tested either of these though....

Cheers
A.
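A tiny sketch of generating a fresh UUID and applying it to an ext4 filesystem; the device name is a placeholder and, as above, this is untested here:

  # make a new random UUID and assign it to the filesystem
  NEW_UUID=$(uuidgen)
  tune2fs -U "$NEW_UUID" /dev/sdb1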

Thanks Mike,

I feel like I am in over my head ATM. I have swapped the SATA connections in an effort to have the system recognise my 1TB drive as the system device, but it made no difference. Now I am thinking that I should unplug the 2 x 2TB disks and install the system on sda1 with a 1GB /boot and LVM selected. Once I have a working system, then replug the SATA drives and set up RAID.

In the past I used to be able to define a partition table, but Ubuntu is doing my head in. It offered to use my 1TB drive as a swap partition.

Cheers
Andrew Greig

On 20/05/18 10:27, Mike O'Connor wrote:
Hi
I suggest the following. 1. Do not use ZFS unless you have ECC RAM 2. btrfs has real issues in a number of areas so unless you are very experienced I would not use it.
The Ubuntu installer supports creating both mdadm raid and LVM so maybe the following. 1. Partition both drives with 2 partitions 1.1 BOOT at about 1G (I know people say its not really needed now but I still feel its easier to fix things if you have issues with one) 1.2 All the rest of the drive 2. Create Raids of the partitions as raid 1 3. Set the partition type on the BOOT and directory 4. Create Volume Group on the largest partition 5. Create Logical Volumes for ROOT, SWAP and if you really want HOME 6. Set the partition type on the LV ROOT, SWAP and HOME
So why do it this way? Well, LVM gives a lot of options which are not available if it's not there. This site has only a very simple example but give it a read: http://tldp.org/HOWTO/LVM-HOWTO/benefitsoflvmsmall.html
My 2 cents worth.
Mike
On 20/5/18 9:35 am, Andrew Greig wrote:
Thanks Andrew,
Drives are in and Ubuntu 18.04 is installing, I am offered an LVM option will that mess with RAID?
One other thing, will choosing btrfs or ZFS just utilise the matched drives as a RAID pair?
If I go XFS, as my file system for all drives, then will I need the install to complete, then run mdadm to set the RAID?
I am so glad I am fresh for this.
Regards
Andrew Greig
On 20/05/18 00:21, Andrew Pam wrote:
On 20/05/18 00:04, Andrew Greig via luv-main wrote:
What should I do about partitioning? Should I use hard partitions say, 20Gb for the /root, should I use a /boot partition or just use the MBR? You don't really need a separate /boot partition any more unless you have an older BIOS that can only boot from the first part of the drive.
Is mdadm a part of this process or does it get involved later ? ZFS for the pair of drives? If you use ZFS or btrfs, you probably don't want to use mdadm (software RAID) as well. Better to take advantage of the filesystem-level mirroring. If you use another filesystem (such as XFS, which is what I use on my 4T RAID-1 mirror on my desktop workstation) you will need to install and set up mdadm first, yes.
Do I have to jump through hoops in Ubuntu to watch an MP4 or a Matroska video? Not if you select the "use third-party proprietary drivers and codecs" checkbox during the install - note that it's off by default, though. Or you can install VLC, which should come with all the codecs you need.
Hope that helps, Andrew

On 20/05/18 11:39, Andrew Greig wrote:
Now I am thinking that I should unplug the 2 x 2Tb disks and install the system on sda1 with a 1Gb /boot and lvm selected.
Note that if you do this the drive names can still change when you plug the other two disks back in. That's why you should use UUIDs and not worry about what names they are assigned.
Once I have a working system, then replug the SATA drives and set up RAID.
If you like.
In the past I used to be able to define a partition table but Ubuntu is doing my head in. It offered to use my 1Tb drive as a swap partition.
With Ubuntu 18.04 it should default to using a swapfile and no swap partition. If you're having trouble with the partitioning, the simplest solution is to tell the installer to use the entire 1TB drive in the default configuration that it recommends rather than doing it manually. Hope that helps, Andrew

Hi Andrew,

Since I am installing Ubuntu 18.04 LTS I have no mdadm to use in the install process, so I have pulled the leads on the 2 x 2TB drives. I will install Ubuntu using LVM and their default setup for that disk, and when I have a running system I can plug in the two drives and run mdadm to set up the RAID - formatting the disks as part of the process, or formatting before the process?

Many thanks
Andrew Greig

On 20/05/18 12:02, Andrew Pam wrote:
On 20/05/18 11:39, Andrew Greig wrote:
Now I am thinking that I should unplug the 2 x 2Tb disks and install the system on sda1 with a 1Gb /boot and lvm selected. Note that if you do this the drive names can still change when you plug the other two disks back in. That's why you should use UUIDs and not worry about what names they are assigned.
Once I have a working system, then replug the SATA drives and set up RAID. If you like.
In the past I used to be able to define a partition table but Ubuntu is doing my head in. It offered to use my 1Tb drive as a swap partition. With Ubuntu 18.04 it should default to using a swapfile and no swap partition. If you're having trouble with the partitioning, the simplest solution is to tell the installer to use the entire 1TB drive in the default configuration that it recommends rather than doing it manually.
Hope that helps, Andrew

On 20/05/18 12:10, Andrew Greig wrote:
Since I am installing Ubuntu 18.04 LTS I have no mdadm to use in the install process,
Yes, you would have to use the "server" image to have mdadm available at install time. But since you're not installing the base system to the RAID drives, that doesn't matter. You can install mdadm after the install is finished.
so I have pulled the leads on the 2 x 2Tb drives
No real benefit to that, but if it makes things easier for you, sure.
will install Ubuntu using LVM and their default setup for that disk and when I have a running system I can plug in the two drives and run mdadm to set up the raid formatting the disks as part of the process?
Yes. Set up mdadm first, then LVM on top of the newly created RAID volume. Of course you can also use LVM on your 1TB disk as well - if you set that up during the initial install, that's fine. I usually use the command line tools, but I believe the graphical disk management tools can handle RAID and LVM these days. Hope that helps, Andrew
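A rough sketch of that post-install sequence on the desktop image; the package names and config path are the usual Ubuntu ones, but the device and array names are illustrative:

  # install the RAID (and, if wanted, LVM) tools on the running system
  apt install mdadm lvm2
  # create the mirror from one big partition on each 2TB drive
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  # record the array so it assembles at boot, then rebuild the initramfs
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf
  update-initramfs -u
  # then put LVM or a filesystem directly on /dev/md0, as discussed above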

Hi Andrew, It does help. More in an hour or so. Andrew On Sun, 20 May 2018, 12:15 pm Andrew Pam, <andrew@sericyb.com.au> wrote:
On 20/05/18 12:10, Andrew Greig wrote:
Since I am installing Ubuntu 18.04 LTS I have no mdadm to use in the install process,
Yes, you would have to use the "server" image to have mdadm available at install time. But since you're not installing the base system to the RAID drives, that doesn't matter. You can install mdadm after the install is finished.
so I have pulled the leads on the 2 x 2Tb drives
No real benefit to that, but if it makes things easier for you, sure.
will install Ubuntu using LVM and their default setup for that disk and when I have a running system I can plug in the two drives and run mdadm to set up the raid formatting the disks as part of the process?
Yes. Set up mdadm first, then LVM on top of the newly created RAID volume. Of course you can also use LVM on your 1TB disk as well - if you set that up during the initial install, that's fine. I usually use the command line tools, but I believe the graphical disk management tools can handle RAID and LVM these days.
Hope that helps, Andrew

Quoting Andrew Pam (andrew@sericyb.com.au):
Yes. Set up mdadm first, then LVM on top of the newly created RAID volume.
_Or_, setting up mdadm for Linux software RAID, but then declining to also set up LVM volume management, would be a second choice. LVM adds significant complexity to a system in the form of an abstraction layer for addressing covered mass storage devices -- which additional complexity is good if you appreciate its advantages more than you dis-appreciate the (considerable) additional system complexity.

On 20/05/18 12:29, Rick Moen via luv-main wrote:
_Or_, setting up mdadm for Linux software RAID, but then declining to also set up LVM volume management, would be a second choice.
LVM adds significant complexity to a system in the form of an abstraction layer for addressing covered mass storage devices -- which additional complexity is good if you appreciate its advantages more than you dis-appreciate the (considerable) additional system complexity.
Yes, don't set up LVM unless you actually need it. I used to use it on my desktop workstation, but I haven't bothered for quite a few years now as it didn't really add much value. Cheers, Andrew

On 20/05/2018 11:40 AM, Andrew Greig wrote:
Ubuntu 18.04
The alternative installer has support for mdraid https://www.ubuntu.com/download/alternative-downloads#alternate-ubuntu-serve... Mike

Thanks Mike,

I didn't choose it at the time as it sounded more serious - I expected it to be headless. I have a working system now, and later will add the pair of disks.

Cheers, Andrew

Sent from my SAMSUNG Galaxy S7 on the Telstra Mobile Network

-------- Original message --------
From: Mike O'Connor <mike@pineview.net>
Date: 20/5/18 13:46 (GMT+10:00)
To: Andrew Greig <pushin.linux@gmail.com>, Andrew Pam <andrew@sericyb.com.au>, luv-main@luv.asn.au
Subject: Re: Biting the bullet - RAID

On 20/05/2018 11:40 AM, Andrew Greig wrote:
Ubuntu 18.04
The alternative installer has support for mdraid https://www.ubuntu.com/download/alternative-downloads#alternate-ubuntu-serve... Mike

Quoting Andrew Pam (andrew@sericyb.com.au):
Note that if you do this the drive names can still change when you plug the other two disks back in.
Correction: Merely plugging in discs changes _no_ /dev/sdX device assignments. Changing what's plugged in at boot time often does.
That's why you should use UUIDs and not worry about what names they are assigned.
_Or_ don't change what mass storage devices are plugged in at boot time, and be prepared to occasionally update /etc/fstab if major kernel upgrades change the enumeration order. (Some of us consider the cure of UUIDs to be in a spirited competition with the disease.)
With Ubuntu 18.04 it should default to using a swapfile and no swap partition. If you're having trouble with the partitioning, the simplest solution is to tell the installer to use the entire 1TB drive in the default configuration that it recommends rather than doing it manually.
Personally, I always do partitioning and initial mkfs operations using whatever live-CD distribution I most have confidence in (currently Siduction), and then separately let the distro installer use the filesystems and disc layout thus created. But horses for courses.

On 20/05/18 12:25, Rick Moen via luv-main wrote:
_Or_ don't change what mass storage devices are plugged in at boot time, and be prepared to occasionally update /etc/fstab if major kernel upgrades change the enumeration order.
(Some of us consider the cure of UUIDs to be in a spirited competition with the disease.)
You really do want to use UUIDs for the RAID members, though. You want to make sure the drives get assembled into the array correctly regardless of how they're connected or enumerated. Cheers, Andrew

Quoting Andrew Pam (andrew@sericyb.com.au):
You really do want to use UUIDs for the RAID members, though. You want to make sure the drives get assembled into the array correctly regardless of how they're connected or enumerated.
mdadm and the md driver don't rely on /dev/sdX assignments after assembly of an array. You can confirm this by looking inside /etc/mdadm/mdadm.conf . -- Cheers, Rick Moen ROMANI, ITE DOMVM! rick@linuxmafia.com McQ! (4x80)

Thanks Rick, I travelled so long without problems that technology has outstripped my understanding. It is 18 years on Linux for me and around 17.5 since you informed me (graciously) about the bad habit of cross-posting. So nice to hear from you. Andrew Greig On Sun, 20 May 2018, 12:37 pm Rick Moen via luv-main, <luv-main@luv.asn.au> wrote:
Quoting Andrew Pam (andrew@sericyb.com.au):
You really do want to use UUIDs for the RAID members, though. You want to make sure the drives get assembled into the array correctly regardless of how they're connected or enumerated.
mdadm and the md driver don't rely on /dev/sdX assignments after assembly of an array. You can confirm this by looking inside /etc/mdadm/mdadm.conf .
-- Cheers, Rick Moen ROMANI, ITE DOMVM! rick@linuxmafia.com McQ! (4x80)

Quoting Andrew Greig (pushin.linux@gmail.com):
Thanks Rick, I travelled so long without problems that technology has oustripped my understanding. It is 18 years on Linux for me and around 17.5 since you informed me (graciously) about the bad habit of cross posting. So nice to hear from you.
Delighted to hear from you, too, Andrew.

I should have added that, at least on all Linux system implementations I've seen for the past couple of decades, the mdadm.conf file ends up being slightly redundant during normal operation, because array construction stores all required RAID metadata in the RAID superblock kept on each physical device in the array. ISTR that mdadm.conf can be fully reconstructed from that stored metadata, even. That's talked about briefly here: https://raid.wiki.kernel.org/index.php/RAID_setup#The_Persistent_Superblock_...

The larger point I'm making is that md RAID has become, over a long period of time, pretty bulletproof, speaking in general terms. (Of course, nothing is entirely safe from an absent-minded sysadmin. ;-> )

Quoting George Georgakis (luv-sub@tripleg.net.au):
On 20/05/2018 12:54 PM, Rick Moen via luv-main wrote:
ISTR that mdadm.conf can be fully reconsructed from that stored metadata, even.
mdadm --detail --scan > /etc/mdadm.conf ?
Looks right! -- Rick Moen "If accuracy / Is what you crave / rick@linuxmafia.com Then you should call it / Myanmar Shave." McQ! (4x80) -- @FakeAPStylebook

On Sat, May 19, 2018 at 07:25:23PM -0700, Rick Moen wrote:
Personally, I always do partitioning and initial mkfs operations using whatever live-CD distribution I most have confidence in (currently Siduction), and then separately let the distro installer use the filesystems and disc layout thus created. But horses for courses.
The debian installer (and presumably ubuntu and others) let you switch to another console tty with Alt-F2, Alt-F3 etc to get a root shell. You can manually create the partitions you want, then switch back to tty1 to install on the partitions you just created. IIRC, on debian tty1 is the installer menu, tty2 & tty3 are for shells, and tty4 is a log tail of info and error messages etc printed by the installer. craig -- craig sanders <cas@taz.net.au>

Quoting Craig Sanders (cas@taz.net.au):
The debian installer (and presumably ubuntu and others) let you switch to another console tty with Alt-F2, Alt-F3 etc to get a root shell. You can manually create the partitions you want, then switch back to tty1 to install on the partitions you just created.
IIRC, on debian tty1 is the installer menu, tty2 & tty3 are for shells, and tty4 is a log tail of info and error messages etc printed by the installer.
And very handy all the other virtual consoles are, too. (1994 thanks you for that tip, Craig. ;-> ) Still, I continue to prefer to use a best-of-breed live-CD disk with a very recent kernel (maximal hardware support) and highly reliable and diverse command-line tools for utility purposes such as partitioning and initial mkfs -- a superior environment for that purpose, IMO, than distro installers, even ones I like, like Debian's, are ever likely to furnish. I therefore also recommend that approach to others.

On Sat, May 19, 2018 at 10:38:48PM -0700, Rick Moen wrote:
And very handy all the other virtual consoles are, too. (1994 thanks you for that tip, Craig. ;-> )
I think that these days some people don't even realise that their Linux box has multiple VTs, what with booting directly into fancy graphical display managers and other luxury stuff :)
Still, I continue to prefer to use a best-of-breed live-CD disk with a very recent kernel (maximal hardware support) and highly reliable and diverse command-line tools for utility purposes such as partitioning and initial mkfs -- a superior environment for that purpose, IMO, than distro installers, even ones I like, like Debian's, are ever likely to furnish. I therefore also recommend that approach to others.
Yep. IMO either clonezilla or gparted make nice partitioning and/or rescue disks. or, as you say, a good recent live CD. Useful if you need to do anything beyond what the command line tools on the installer can do. But for basic partitioning without doing anything "unusual" like zfs, fdisk or gdisk in a root console works well enough. And it avoids having to reboot again just to partition the disks. Linux itself boots quickly, it's the BIOS that takes ages. craig -- craig sanders <cas@taz.net.au>

On Sunday, 20 May 2018 12:25:23 PM AEST Rick Moen via luv-main wrote:
Note that if you do this the drive names can still change when you plug the other two disks back in.
Correction: Merely plugging in discs changes _no_ /dev/sdX device assignments. Changing what's plugged in at boot time often does.
One common case nowadays is leaving USB devices plugged in at boot. One of my clients uses USB-SD and USB-CF devices to image storage for embedded systems. On one of their build servers I wrote a script to parse lsscsi output to prevent them from writing an embedded system image to one of the build server's hard drives. It was easier than trying to train them to not have various devices plugged in at boot time. -- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/

On Sun, May 20, 2018 at 12:02:57PM +1000, Andrew Pam wrote:
On 20/05/18 11:39, Andrew Greig wrote:
Now I am thinking that I should unplug the 2 x 2Tb disks and install the system on sda1 with a 1Gb /boot and lvm selected.
Note that if you do this the drive names can still change when you plug the other two disks back in. That's why you should use UUIDs and not worry about what names they are assigned.
Worth noting here: if you plug the extra drives in and your system no longer boots, you need to go into the BIOS and tell it which drive to boot from. Once you've successfully booted, you can run 'grub-install' to install the boot loader into the MBR of all drives so it doesn't matter which drive the BIOS tries to boot from. craig -- craig sanders <cas@taz.net.au>
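A short sketch of that; the drive names are placeholders (the /dev/disk/by-id/ names mentioned earlier are a safer way to pick the right ones):

  # install GRUB into the MBR of every drive, then refresh its config
  grub-install /dev/sda
  grub-install /dev/sdb
  grub-install /dev/sdc
  update-grub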

On 20/05/18 10:05, Andrew Greig wrote:
Drives are in and Ubuntu 18.04 is installing, I am offered an LVM option will that mess with RAID?
No. Typically you would use both, LVM on top of mdadm. (LVM does offer its own mirroring, but it's really just meant for cloning volumes.)
One other thing, will choosing btrfs orZFS just utilise the matched drives as a RAID pair?
Not unless you configure it that way.
If I go XFS, as my file system for all drives, then will I need the install to complete, then run mdadm to set the RAID?
If you want to set up RAID and LVM at install time, you need to use the Ubuntu "server" install image. If you're using the "desktop" image, you will have to set up RAID and LVM after the initial install. Since you're planning to put the base system on a single non-RAID drive (that's also what I did, using an SSD) you can easily just install to the single drive first, then add the mdadm and LVM packages and set the extra drives up the way you want. Hope that helps, Andrew

On Sunday, 20 May 2018 10:05:55 AM AEST Andrew Greig via luv-main wrote:
Drives are in and Ubuntu 18.04 is installing, I am offered an LVM option will that mess with RAID?
If you use LVM then you would use it on top of the Linux software RAID.
One other thing, will choosing btrfs orZFS just utilise the matched drives as a RAID pair?
That would depend on the installer options. With BTRFS the easiest thing to do is to make it a single disk filesystem at installation time (I recall that the Debian installer used to not support a BTRFS RAID-1 at install time) and then convert it afterwards; see btrfs-balance(8) for details. One major feature of BTRFS is the ability to change the RAID setup etc. at run-time.
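A sketch of that conversion, assuming the single-device BTRFS filesystem is mounted at /data and /dev/sdc is the second drive (both placeholders):

  # add the second device to the existing filesystem
  btrfs device add /dev/sdc /data
  # convert both data and metadata to RAID-1 across the two devices
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /data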
If I go XFS, as my file system for all drives, then will I need the install to complete, then run mdadm to set the RAID?
If you choose XFS for everything then you would setup mdadm in the installer, then maybe LVM, then XFS. On Sunday, 20 May 2018 12:21:56 AM AEST Andrew Pam via luv-main wrote:
Is mdadm a part of this process or does it get involved later ? ZFS for the pair of drives?
If you use ZFS or btrfs, you probably don't want to use mdadm (software RAID) as well. Better to take advantage of the filesystem-level mirroring. If you use another filesystem (such as XFS, which is what I use on my 4T RAID-1 mirror on my desktop workstation) you will need to install and set up mdadm first, yes.
BTRFS and ZFS store hashes of all data and metadata which are checked at read time to prevent corruption. If you use the RAID built-in to those filesystems then they will detect and correct errors. If you use Linux software RAID (or any other RAID including hardware RAID) under ZFS or BTRFS then they can only detect errors not correct them. -- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/
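Both filesystems expose that checking through scrubs; the mount point and pool name below are illustrative:

  # re-read everything and verify checksums; with filesystem-level RAID-1,
  # bad copies are repaired from the good mirror
  btrfs scrub start /data
  zpool scrub tank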

On Sunday, 20 May 2018 12:04:13 AM AEST Andrew Greig via luv-main wrote:
So the time has come when I have backed up all my data, cleaned out the /home directory, and in the morning I should expect that all of my data in Dropbox has finished synching. i have downloaded and tested the Ubuntu 18.04 LTS and burned it to DVD.
In the morning I will install the 2 new 2Tb HDDs , and load the DVD to launch myself into unfamiliar territory, so when I get to the partition stage of the process I will have 1 x 1Tb HDD for the system and /home and the 2 x 2Tb drives for the RAID.
What should I do about partitioning? Should I use hard partitions say, 20Gb for the /root, should I use a /boot partition or just use the MBR? Is mdadm a part of this process or does it get involved later ? ZFS for the pair of drives?
Ubuntu has the best ZFS support of all the Linux distributions I'm aware of. I use ZFS for systems that need RAID-5 or RAID-6 as the BTRFS support for those RAID levels wasn't complete last time I checked. Otherwise I use BTRFS for most things. I recommend using ZFS or BTRFS and use ZFS filesystems or BTRFS subvolumes instead of having separate partitions or LVM LVs for different uses. -- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/
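A minimal sketch of that with ZFS: a mirrored pool over the two 2TB drives (using stable by-id names, shown here as placeholders) and datasets instead of separate partitions; all names are illustrative:

  # create a mirrored pool, then carve it into datasets as needed
  zpool create tank mirror \
      /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL1 \
      /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL2
  zfs create tank/photos
  zfs create tank/video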
participants (9): Andrew Greig, Andrew McGlashan, Andrew Pam, Craig Sanders, George Georgakis, Mike O'Connor, pushin.linux, Rick Moen, Russell Coker