On Sat, Oct 26, 2013 at 05:21:13PM +1100, Tony White wrote:
I did not detail the items involved because I had
suspected it would be
a general issue where, if anyone had upgraded from an IDE to a RAID,
they might have found a cure and pointed me to the page/doc.
it's not an ide issue or a raid issue, it's a driver issue.
specifically, the aacraid adaptec raid controller driver.
So more details...
0. Adaptec ASR-2405 ROHS RAID controller
also tried Adaptec 1210A and 2420A
if you used this card as a hardware raid controller (rather than as JBOD
ports) then you will need the aacraid driver loaded before linux will be
able to see the array. this driver needs to be on the initramfs, and the
initramfs needs to know that it should load the driver before trying to
mount the root filesystem.
you'll need to boot a rescue cd or similar - if you still have it,
your original IDE drive would make an ideal rescue system as it has
the exact same version of the kernel as on the adaptec raid drives.
what kernel version are you running, and what distro?
re-reading the Subject line, it appears you're using centos 5.
anyway, you'll need to boot your rescue cd / disk, load the aacraid
module, find and mount the root (and /boot if you have one)
filesystems, bind-mount /proc, /dev, and /sys, chroot into them, use a
distro-specific method to force the initramfs to load the aacraid driver
(e.g. on debian, the simplest way is to add 'aacraid' to /etc/modules
and run 'update-initramfs -u -k all'), exit the chroot, unmount the
filesystems, and reboot.
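on centos 5 the same procedure would look roughly like this. the device
names, lv names, and the modprobe.conf approach are assumptions (rhel/centos
5's mkinitrd picks up drivers listed as scsi_hostadapter aliases), so adjust
to what your rescue environment actually shows:

```shell
# boot the rescue cd, then:
modprobe aacraid                        # make the kernel see the raid array
lvm vgscan && lvm vgchange -ay          # activate VolGroup00 (root is on lvm)
mount /dev/VolGroup00/LogVol00 /mnt/sysimage   # root lv name is a guess
mount /dev/sda1 /mnt/sysimage/boot      # separate /boot, if you have one
mount --bind /proc /mnt/sysimage/proc
mount --bind /dev  /mnt/sysimage/dev
mount --bind /sys  /mnt/sysimage/sys
chroot /mnt/sysimage
# tell mkinitrd to include the controller driver:
echo 'alias scsi_hostadapter aacraid' >> /etc/modprobe.conf
# rebuild the initrd for the *installed* kernel, not the rescue kernel:
kver=$(ls /lib/modules | tail -1)
mkinitrd -f /boot/initrd-$kver.img $kver
exit
umount /mnt/sysimage/{proc,dev,sys,boot}
umount /mnt/sysimage
reboot
```

if the rescue kernel is the same version as the installed one (e.g. booting
from your old IDE drive), $kver can simply be $(uname -r).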
for many reasons (including avoidance of proprietary lock-in - you're
now stuck with adaptec controllers) you probably would have been better
off using linux's built-in software raid, mdadm - the only time hardware
raid is better is when you're using raid-5 or raid-6 and the card
provides non-volatile write cache (so that raid5/6 write performance
doesn't suck). and even then you're better off with zfs than raid-5.
since the card only supports raid1 and raid10, that's not a factor.
according to http://www.adaptec.com/en-us/products/series/2/ the card
has some sort of hybrid ssd+hdd raid mode, but i can't see any details
on what that actually means - in any case, zfs does it better, and
without proprietary lock-in (i.e. with any drives on any controllers).
amongst other benefits, converting from a single old IDE drive to mdadm
raid would have "just worked" because it wouldn't have required any
extra drivers on the initramfs.
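for comparison, an mdadm mirror over two drives boils down to a couple of
commands - sdb1/sdc1 here are hypothetical device names, not your actual
layout:

```shell
# create a raid1 mirror from one partition on each drive (names assumed):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext3 /dev/md0                         # or pvcreate /dev/md0 for lvm on top
mdadm --detail --scan >> /etc/mdadm.conf   # record the array so it assembles at boot
```

the md and raid1 modules ship with every stock kernel and initramfs, which
is why no rescue-cd surgery is needed after a conversion like this.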
1. 250 GB Seagate original Drive SATA
huh? i thought you said it was IDE?
2. Identical pair of WD Black 500GB ES SATA drives
for mdadm software raid, you could have just plugged these into the
motherboard's SATA ports and avoided the expense and hassle of the
adaptec card entirely.
if all else fails, you can always fall back to that.
3. Cloned using Terabyte for Linux from Terabyte
Unlimited in the USA.
third-party tools like this aren't needed on linux - rsync will do the
job, just as well, and for free.
4. Resulting text on screen before "Kernel
Waiting for driver initialisation.
Scanning and configuring dmraid supported devices
Scanning logical volumes
Reading all physical volumes. This may take a while...
No volume groups found
Activating logical volumes
Volume group "VolGroup00" not found
Trying to resume device (/dev/VolGroup00/LogVol01)
Creating root device.
Mounting root filesystem.
mount: could not find filesystem '/dev/root'
this almost certainly confirms that the aacraid driver was not loaded,
so the linux kernel can't see the raid array containing the root
filesystem and therefore can't find the rootfs.
craig sanders <cas(a)taz.net.au>