On Wed, Oct 12, 2016 at 11:18:40AM +1100, russell(a)coker.com.au wrote:
> On Wednesday, 12 October 2016 1:31:33 AM AEDT Craig Sanders via luv-main wrote:
> > the only time i've ever seen something similar was my own stupid
> > fault, i rebooted and just pulled out the old SSD forgetting that I
> > had ZIL and L2ARC for the pools on that SSD. I had to plug the old
> > SSD back in before I could import the pool, so i could remove them
> > from the pool (and add partitions from my shiny new SSDs to replace
> > them).
> Did you have to run "zfs import" on it or was it recognised
> automatically? If the former how did you do it?
after plugging the old SSD back in? can't remember for sure, but i think
so....it wasn't imported before i rebooted again so wouldn't have been
automatically imported after reboot.
I probably did something like:
zpool import -d /dev/disk/by-id/ <poolname>
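The whole remove-and-replace sequence described above would look something like this (pool and device names here are hypothetical, not the actual ones used):

```shell
# import the pool, searching /dev/disk/by-id for its devices
zpool import -d /dev/disk/by-id/ tank

# remove the old SSD's log (ZIL) and cache (L2ARC) vdevs from the pool
zpool remove tank ata-OLD_SSD-part1    # the log device
zpool remove tank ata-OLD_SSD-part2    # the cache device

# add partitions from the new SSDs in their place
zpool add tank log mirror ata-NEW_SSD1-part1 ata-NEW_SSD2-part1
zpool add tank cache ata-NEW_SSD1-part2 ata-NEW_SSD2-part2
```

Log and cache vdevs are the only top-level vdev types that `zpool remove` could detach at the time, which is why the old SSD had to be physically present for the pool to import at all.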
> Is the initramfs configured to be able to run zfs import?
yes, i have zfs-initramfs installed.
> BTRFS snapshots are working well on the root filesystems of many
> systems I run. The only systems I run without BTRFS as root are
> systems where getting console access in the event of problems is too
> difficult.
yes, but you can't pipe `btrfs send` to `zfs recv` and expect to get
anything useful. my backup pool is zfs.
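A zfs-to-zfs backup cycle like the one implied here might be sketched as follows (dataset and snapshot names are made up for illustration):

```shell
# snapshot the source dataset
zfs snapshot tank/home@2016-10-12

# first run: full send to the backup pool
zfs send tank/home@2016-10-12 | zfs recv backup/home

# later runs: send only the incremental delta since the previous snapshot
zfs send -i @2016-10-11 tank/home@2016-10-12 | zfs recv backup/home
```

`btrfs send` produces a different stream format, which is why the two can't be mixed on either side of the pipe.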
and so far, i've had 100% success rate (2/2) with zfs rootfs.
Disclaimer: not a statistically significant sample size. contents
may settle during transport. void where prohibited by law. serving
suggestion only. batteries not included.
> > crucial mx300 275G SSDs(*). slightly more expensive than a pair of
> > 500-ish GB but much better performance....read speeds roughly 4 x
> > SATA SSD read (approximating pci-e SSD speeds), write speeds about 2
> > x SATA SSD.
> > i haven't run bonnie++ on it yet. it's on my todo list.
> If you had 2*NVMe devices it would probably give better performance
> than 4*SATA and might be cheaper. That would also leave more SATA
> slots free.
yes, that would certainly be a LOT faster. can't see any way it could
be cheaper. i'd have to get a more expensive brand of ssd plus i'd
need an nvme pci-e card or two.
However, I have SATA ports in abundance. On the motherboard, I have 6 x
SATA III (4 used for the new SSDs, two previously used for the old SSDs
but now spare) plus another 2 x 1.5Gb/s SATA, and some e-sata which i've
never used. In PCI-e slots, I have 16 x SAS/SATA3 on two IBM M1015 LSI
cards (8 ports in use, 4 spare and connected to hot-swap bays, 4 spare
and unconnected).
PCI-e slots are in very short supply. and my m/b doesn't have any nvme
sockets.
If I could find a reasonably priced PCI-e 8x NVMe card that actually
supported two PCI-e NVMe drives (instead of 1 x pci-e nvme + 1 x sata
nvme), i'd probably have swapped out the spare/unused M1015 cards for
it. i don't have any spare 4x slots.
so i did what I could to maximise performance with the hardware I have.
everything I do on the machine is noticeably faster, including compiles
and docker builds etc.
but yeah, eventually I'll move to PCI-e NVME drives. sometime after my
next motherboard & cpu upgrade.
I'm waiting to see real-world reviews and benchmarks on the upcoming AMD
Zen CPU.
Intel has some very nice (and expensive) high-end CPUs, but their
low-end and mid-range CPUs are more expensive than old AMD CPUs without
offering much improvement....might make sense for a new system, but not
as an upgrade. Every time I look into switching to Intel, it turns out
I'll have to spend around $1000 to get roughly similar performance to
what I have now with a 6 year old AMD CPU. I'm not going to spend that
kind of money without a really significant benefit.
I could get an AMD FX-8320 or FX-8350 CPU for under $250 but I'd rather
wait for Zen and get a new motherboard with PCI-e 3.0 and other new
stuff too. Just going on past history, I'm quite confident that will
be significantly cheaper and better than switching to Intel...i expect
around $400-$500 rather than $800-$1000.
> we're just on the leading edge of some massive drops in price/GB.
a bit earlier than I was predicting, i thought we'd start seeing it
next year. won't be long before 2 or 4TB SSDs are affordable for home
users (you can get 2TB SSDs for around $800 now). and then I can
replace some of my HDD pools.
> It's really changing things. For most users 2TB is more than enough
> storage even for torrenting movies.
btw, for torrenting on ZFS, you need to create a separate dataset with
recordsize=16K (instead of the default 128K) to avoid COW fragmentation.
configure deluge or whatever to download to that and then move the
finished torrent to another filesystem.
probably same or similar for btrfs.
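A minimal sketch of that setup (pool, dataset, and client paths are hypothetical):

```shell
# dedicated dataset with a small recordsize: torrents do random,
# partial-block writes, which fragment badly under the default 128K
zfs create -o recordsize=16K tank/torrents

# point the torrent client's incomplete-download directory at it,
# e.g. in deluge: Preferences -> Downloads -> "Download to": /tank/torrents

# when a torrent completes, move it to a normal 128K dataset;
# the copy rewrites it sequentially, leaving it unfragmented
mv /tank/torrents/some-finished-file.iso /tank/media/
```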
> I think that spinning media is going to be mostly obsolete for home
> use soon.
yep. and good riddance.
i'd still want to buy them in pairs, for RAID-1/RAID-10 (actually, ZFS
mirrored pairs)
i'll read through that (and the fedora one that it links to) before
rebooting my myth box with systemd again.
it was an unintentional reboot anyway. i'd used grub-set-default
intending to reboot to systemd next time, but the thunderstorm caused
a few second power-outage and the UPS for that machine died ages ago
(haven't replaced it yet). was busy with other stuff and didn't even
notice it was down for a few hours.
> From a quick read of the man page it appears that the -D option to
> journalctl might do what we want. It appears that Debian has moved to
> not having the binary journals so I don't have a convenient source of
> test data.
looks like Storage=auto in /etc/systemd/journald.conf, but
/var/log/journal isn't created by default. so no persistent journal by
default. bad default.
debian configuring it to use rsyslogd by default is a good thing, but
doesn't help when the system won't boot far enough to get rsyslogd
running. should be on by default. maybe even automatically turn off
journald's persistence as soon as rsyslogd (or whatever external logger)
successfully starts up.
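For reference, enabling the persistent journal by hand looks like this (the rescue-mount path in the last line is hypothetical):

```shell
# with Storage=auto, journald persists logs only if this directory exists
mkdir -p /var/log/journal
systemd-tmpfiles --create --prefix /var/log/journal   # fix ownership/ACLs
systemctl restart systemd-journald

# or force it regardless of the directory, in /etc/systemd/journald.conf:
#   [Journal]
#   Storage=persistent

# once persistent, old journals can be read from a rescue system with -D:
journalctl -D /mnt/var/log/journal
```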
craig
--
craig sanders <cas(a)taz.net.au>