Re: Biting the bullet - RAID

On Sunday, 20 May 2018 2:01:14 PM AEST Craig Sanders via luv-main wrote:
In the morning I will install the 2 new 2TB HDDs, and load the DVD to launch myself into unfamiliar territory, so when I get to the partitioning stage of the process I will have 1 x 1TB HDD for the system and /home, and the 2 x 2TB drives for the RAID.
Is there any reason why you want your OS on a single separate drive with no RAID?
Some people think that it's only worth using RAID for things that you can't lose. But RAID also offers convenience. If your system with RAID has one disk die you would probably like it to keep running while you go to the shop to buy a new disk.
If I were you, I'd either get rid of the 1TB drive (or use it as extra storage space for unimportant files) or replace it with a third 2TB drive for a three-way mirror - or perhaps RAID-5 (mdadm) or RAID-Z1 (zfs) if storage capacity is more important than speed.
I expect that if he's just starting out with RAID then he doesn't even have 2TB of data to store.
One thing I very strongly recommend is that you get some practice with mdadm or LVM or ZFS before you do anything to your system. If you have KVM or Virtualbox installed, this is easy. If not, install & configure libvirt + KVM and it will be easy. BTW, virt-manager is a nice GUI front-end for KVM.
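Even without a VM, mdadm itself can be practised safely against loop devices backed by ordinary files. A rough sketch (the file paths and md device number are made up; this needs root, and assumes /dev/loop0 and /dev/loop1 are free):

```shell
# Create two 1GB sparse files to stand in for disks
truncate -s 1G /tmp/disk0.img /tmp/disk1.img
losetup /dev/loop0 /tmp/disk0.img
losetup /dev/loop1 /tmp/disk1.img

# Build a RAID-1 array from the two loop devices
mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1

# Watch it sync, then practise failing and replacing a member
cat /proc/mdstat
mdadm /dev/md9 --fail /dev/loop1 --remove /dev/loop1
mdadm /dev/md9 --add /dev/loop1

# Tear it all down afterwards
mdadm --stop /dev/md9
losetup -d /dev/loop0 /dev/loop1
```

The whole fail/remove/add cycle is exactly what you'd do to a real degraded array, but here a mistake only costs you two files in /tmp.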
https://etbe.coker.com.au/2015/08/18/btrfs-training/ A few years ago I ran a LUV training session on BTRFS and ZFS which included deliberately corrupting disks to be prepared for real life corruption. I think this is worth doing. Everyone knows that backups aren't much good unless they are tested and the same applies to data integrity features of filesystems.
A /boot filesystem isn't really necessary these days, but I like to have one. It gives me a standard, common filesystem type (ext4) to put ISOs (e.g. a rescue disk or gparted or clonezilla) that can be booted directly from grub with memdisk.
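For reference, booting an ISO sitting on /boot via memdisk looks something like the grub.cfg snippet below. The ISO name is illustrative; memdisk comes from the syslinux package, and whether a particular ISO actually boots this way depends on the ISO:

```shell
# memdisk and the ISO both live on the /boot filesystem
menuentry "Rescue ISO (via memdisk)" {
    linux16 /memdisk iso raw
    initrd16 /rescue.iso
}
```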
If you want to have a Linux software RAID-1 for the root filesystem then a separate filesystem for /boot doesn't give much benefit. If you want to use BTRFS or ZFS for root then you want a separate /boot. You can have /boot on BTRFS but that seems likely to give you more pain than you want for no apparent benefit.

On Sunday, 20 May 2018 10:27:53 AM AEST Mike O'Connor via luv-main wrote:
I suggest the following.
1. Do not use ZFS unless you have ECC RAM
If you use a filesystem like BTRFS or ZFS and have memory errors then it will become apparent fairly quickly so you can then restore from backups. If you use a filesystem like Ext4 and have memory errors then you can have your data become increasingly corrupt for an extended period of time before you realise it.
2. btrfs has real issues in a number of areas, so unless you are very experienced I would not use it.
If you use the basic functionality and only RAID-1 (not RAID-5 or RAID-6) then it's pretty solid. I've been running my servers on BTRFS for years without problems.
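For the basic RAID-1 case being described, setup and a periodic integrity check are only a couple of commands. A sketch assuming two blank disks /dev/sdb and /dev/sdc (run as root; device names are hypothetical):

```shell
# Make a filesystem with both data and metadata mirrored across the two disks
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt

# Scrub reads everything and repairs from the good copy on checksum errors
btrfs scrub start /mnt
btrfs scrub status /mnt

# Overall space/usage view across both devices
btrfs filesystem usage /mnt
```

Running the scrub regularly (e.g. from cron) is what turns the checksums into actual protection.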
So why do it this way? Well, LVM gives a lot of options which are not available if it's not there. This site has only a very simple example but give it a read: http://tldp.org/HOWTO/LVM-HOWTO/benefitsoflvmsmall.html
That document says "Joe buys a PC with an 8.4 Gigabyte disk". I just checked the MSY pricelist; the smallest disk they sell is 1TB and the smallest SSD they sell is 120G. Any document referencing 8G disks is well out of date.

https://www.tldp.org/HOWTO/Large-Disk-HOWTO-4.html

The above document explains the 8.4G limit.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/

Hi,

On 21/05/18 00:59, Russell Coker via luv-main wrote:
On Sunday, 20 May 2018 10:27:53 AM AEST Mike O'Connor via luv-main wrote:
I suggest the following.
1. Do not use ZFS unless you have ECC ram
If you use a filesystem like BTRFS or ZFS and have memory errors then it will become apparent fairly quickly so you can then restore from backups. If you use a filesystem like Ext4 and have memory errors then you can have your data become increasingly corrupt for an extended period of time before you realise it.
Perhaps a better question for a new thread, but it would be interesting to know whether the whole "ECC only for ZFS" thing is really an issue, and whether running without ECC has actually given anybody grief.
Well, LVM gives a lot of options which are not available if it's not there. This site has only a very simple example but give it a read: http://tldp.org/HOWTO/LVM-HOWTO/benefitsoflvmsmall.html
That document says "Joe buys a PC with an 8.4 Gigabyte disk". I just checked the MSY pricelist; the smallest disk they sell is 1TB and the smallest SSD they sell is 120G. Any document referencing 8G disks is well out of date.
LOL. This is what I do, routinely, as a first preference:

- I mirror every single disk (no RAID5 / RAID6 usually).
- one partition for /boot on ext4 on the bootable disk set
- the rest as one partition for a crypt volume
- Encryption, using LUKS (crypt volumes)

NB: the separate primary /boot partition should be of reasonable size; heck, with the size of disks these days it doesn't hurt to give it 2GB, knowing that is overkill of course (unless you store ISO files on it, then you may as well go larger). The rest of the boot disk is a LUKS volume that is a volume group for LVM; it is very, very easy to use once you understand how it hangs together.

For servers that I want to be headless, I boot to a dropbear ssh environment. Unlock the crypt (LUKS) volume, which allows swap, root and other file systems on LVM to be available for the next phase of boot; kill the process that sits on the console asking for the crypt password, as we've already unlocked the LUKS volume within dropbear (boot continues when that process is killed); exit from the dropbear environment. For ordinary non-server machines that have a display and keyboard easily available, such as a laptop, I don't use dropbear.

Within dropbear I have aliases like these to help:

# dropbear aliases
alias db1='/sbin/cryptsetup luksOpen /dev/md0 md0_crypt'
alias db2='/sbin/cryptsetup luksOpen /dev/md1 md1_crypt'
alias db3='/sbin/cryptsetup luksOpen /dev/md2 md2_crypt'
alias db4='/sbin/cryptsetup status md0_crypt'
alias db5='/sbin/cryptsetup status md1_crypt'
alias db6='/sbin/cryptsetup status md2_crypt'
alias db7='ps|grep askpass;echo kill -9 $(pidof askpass)'
alias db8='kill -9 $(pidof askpass)'

# mdadm aliases
alias mdadm='/sbin/mdadm'
alias mdstat='cat /proc/mdstat'
alias md0='mdadm -D /dev/md0'
alias md1='mdadm -D /dev/md1'
alias md2='mdadm -D /dev/md2'

It can be a bit of a task to get all the right tools and custom environment (including extra files) available in my initrd file with initramfs hooks.... see script below.
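Put together, a remote unlock session with aliases like db1 and db8 looks roughly like this (the hostname and ssh port are made up for illustration):

```shell
# ssh into the dropbear shell running inside the initramfs
ssh -p 2222 root@server.example.com

# unlock the first LUKS volume (equivalent of the db1 alias)
/sbin/cryptsetup luksOpen /dev/md0 md0_crypt

# kill the console password prompt so boot continues (the db8 alias)
kill -9 $(pidof askpass)

exit    # normal boot proceeds with the volume already unlocked
```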
With my setup, root is an LVM volume that lives on the encrypted volume that is an mdadm (RAID1) mirror.

Aside from using btrfs or ZFS.... I cannot understand opposition to LVM, it's wonderful. With LVM you get snapshots, your volumes have names, it's very easy to find the /dev files and /dev/mapper files if you know the volume group and lv names, you can query UUIDs with 'blkid -c /dev/null', and definitely use UUIDs in /etc/fstab -- much simpler than mucking around with labels on partitions.

I use ext4 on most of the logical volumes (lvs), excepting swap of course. If I need to increase the size of a file system, then I resize the lv and then use 'resize2fs' and it magically gets larger -- in this case, I can do so on-line as well (without unmounting the file system). I rarely shrink volumes, but it is possible; just be careful whenever you resize volumes, especially when shrinking, to get your process 100% correct so you don't screw up the file system.

Kind Regards
AndrewM

Helper script for extra progs [and environment setup] for dropbear environment use:

# cat /usr/share/initramfs-tools/hooks/other
#!/bin/sh

PREREQ=""

prereqs() {
    echo "$PREREQ"
}

case "$1" in
    prereqs)
        prereqs
        exit 0
        ;;
esac

. "${CONFDIR}/initramfs.conf"
. /usr/share/initramfs-tools/hook-functions

# Install /bin/ binaries
for binary in bash
do
    copy_exec "/bin/$binary" "/bin/"
done

# Install /sbin/ binaries
for binary in badblocks blkid fdisk hdparm parted
do
    copy_exec "/sbin/$binary" "/sbin/"
done

# Install /usr/bin/ binaries
for binary in last nohup pv screen tee vim who
do
    copy_exec "/usr/bin/$binary" "/usr/bin/"
done

# Install other root files
if [ -d /etc/initramfs-tools/root.other_files ]; then
    (cd /etc/initramfs-tools/root.other_files/; tar cf - . | (cd "${DESTDIR}/root/"; tar xvf -))
fi
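The grow-a-filesystem step described above is two commands, or one with -r. A sketch assuming a volume group vg0 with an ext4 lv called home (names are hypothetical; run as root):

```shell
# Grow the logical volume by 10GB...
lvextend -L +10G /dev/vg0/home
# ...then grow the ext4 filesystem into the new space, on-line
resize2fs /dev/vg0/home

# Or do both in one step; -r runs the filesystem resize for you
lvextend -r -L +10G /dev/vg0/home
```

Shrinking is the order-sensitive direction: the filesystem must be shrunk (off-line, for ext4) before the lv, which is why "lvreduce -r" is much safer than doing the two steps by hand.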

On Mon, May 21, 2018 at 04:04:18AM +1000, Andrew McGlashan wrote:
Perhaps a better question for a new thread, but it would be interesting to know whether or not the whole ECC only for ZFS is really a thing and if it gives anybody grief without ECC.
ECC RAM is recommended, but not essential. I've been running ZFS without ECC RAM on my home systems since at least 2010 without any RAM-related dramas. Occasional disk problems, yeah, but that's why I run ZFS - so I don't have to care too much if a drive dies - I can replace it without losing anything valuable (i.e. my data).

For **HOME** use, if you can get ECC RAM at a decent price, at not too much of a premium over non-ECC RAM, then get it. If not, then don't worry too much about it. For non-home use, probably best to go with ECC RAM.

Note that non-ECC RAM is typically available in faster speeds than ECC. ECC seems to max out at about 2400MHz at the moment; non-ECC goes to 3600 or even higher. This doesn't make much difference in actual real-world performance unless you're using a Ryzen or Threadripper CPU - they really benefit from faster RAM speeds. More accurately, their performance is limited by slow RAM.
Aside from using btrfs or ZFS.... I cannot understand opposition to lvm, it's wonderful.
LVM is wonderful. ZFS is wonderfuller. btrfs would be too, if it could be trusted not to lose your data.
If I need to increase the size of a file system, then I resize the lv and then use 'resize2fs' and it magically gets larger -- in this case, I can do so on-line as well (without unmounting the file system). I rarely shrink volumes, but it is possible, just be careful whenever you resize volumes, especially when shrinking to get your process 100% correct so you don't screw up the file system.
You don't need to mess around with resizing anything with either zfs or btrfs. The pool is shared amongst all datasets on it. Just create datasets (zfs) or sub-volumes (btrfs) as you need them.

IIRC, btrfs doesn't have any quota or reservation mechanism, but zfs does. You can set a quota to limit how much space a dataset can use (e.g. limit your videos to 1TB) and you can also reserve a minimum amount of space (e.g. so that no matter what else is using disk space, /var/log will have a guaranteed minimum of 50GB available to it). Quotas and reservations are "soft" - i.e. they can be changed up or down at any time with just a single command.

You can also have a dataset mounted anywhere in the filesystem you like. e.g.

zfs create -o mountpoint=/var/cache/apt/archives -o compression=none ganesh/apt

That creates a dataset called "apt" on pool "ganesh" that will be auto-mounted at /var/cache/apt/archives. Compression is disabled on it because there's no need to waste CPU cycles trying to recompress .deb files. You can change the mount point at any time too, with "zfs set mountpoint=....".

craig

--
craig sanders <cas@taz.net.au>
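The quota and reservation examples above each map to a single zfs command. A sketch, reusing the hypothetical pool name "ganesh" with made-up dataset names (run as root):

```shell
# Cap the videos dataset at 1TB
zfs set quota=1T ganesh/videos

# Guarantee the /var/log dataset at least 50GB regardless of other usage
zfs set reservation=50G ganesh/varlog

# Both are soft: raise, lower, or clear them at any time
zfs set quota=2T ganesh/videos
zfs set quota=none ganesh/videos

# Moving a dataset's mount point later is also one command
zfs set mountpoint=/srv/apt ganesh/apt
```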

On Mon, May 21, 2018 at 12:59:11AM +1000, Russell Coker wrote:
Is there any reason why you want your OS on a single separate drive with no RAID?
Some people think that it's only worth using RAID for things that you can't lose. But RAID also offers convenience. If your system with RAID has one disk die you would probably like it to keep running while you go to the shop to buy a new disk.
yep, exactly. RAID isn't a substitute for backup, but having my OS partition on RAID (or ZFS) means I can recover from a disk failure without having to restore from backup. The system can limp along with the RAID/zpool in degraded mode until I get a replacement - then the replacement will just automatically sync when I install & configure it.

also, /etc and /var and /usr/local qualify as stuff I don't want to lose.

craig

--
craig sanders <cas@taz.net.au>
participants (3):
- Andrew McGlashan
- Craig Sanders
- Russell Coker