CentOS 5 change from IDE to RAID after cloning

Hello all, I am hoping someone has done this before, as I am unable to locate anything on the web to help me. I have a drive that has been cloned to a RAID controller, but when it boots it only gets as far as a "Kernel panic". The only cause I can see is that the RAID driver is not loading. Can someone tell me how/where I can make this change without having to do a full reinstall, please?
TIA
--
best wishes
Tony White

Quoting Tony White (tony@ycs.com.au):
Hello all, I am hoping someone has done this before, as I am unable to locate anything on the web to help me. I have a drive that has been cloned to a RAID controller, but when it boots it only gets as far as a "Kernel panic". The only cause I can see is that the RAID driver is not loading. Can someone tell me how/where I can make this change without having to do a full reinstall, please?
Not a complaint, but you aren't giving people on the mailing list much to work with. E.g., you don't detail what 'the RAID driver is not loading' means. What is it that you see happening? Moreover, what driver? What does 'cloned to a RAID controller' mean? What is 'this change'?

Here's a suggestion: Download and burn to a CD some current, recent release of a live-CD distro with very good hardware support and the ability to autoprobe hardware. I personally am fond of Aptosid for such purposes. Boot it up. Assuming it doesn't kernel panic and fail to boot, have a (selective[1]) look at the details of 'dmesg | less' to observe the kernel's hardware detection and loading of driver modules during bootup, do 'lsmod' to see what kernel modules are in the running kernel after startup, and do 'lspci' to see what chipsets are present. Perhaps some clues will stand out concerning your mass-storage subsystem. You can also (carefully) use tools like fdisk or parted to examine your filesystem partitioning.

You didn't really address what, if anything, has changed on this system lately (that you're aware of). It's possible your wording about a drive having been 'cloned to a RAID controller' refers to a recent change. If you're aware of something that was changed immediately before the pattern of kernel panics upon startup, that would be a logical place to look. If you're not aware of anything having changed immediately before the problem arose, be aware that there still might have been relevant hardware changes in the form of failing hardware (a failing hard drive or failing HBA), loose cabling, or something like that.
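A minimal sketch of the checks described above, assuming a live CD that ships the usual pciutils/util-linux tools (these are the generic command names, not anything Aptosid-specific):

  dmesg | less        # kernel's hardware detection and driver loading during bootup
  lsmod               # modules present in the running kernel
  lspci -k            # chipsets present; -k also shows the driver bound to each device
  fdisk -l            # read-only view of the partition tables
  parted -l           # same, via parted (the -l listing writes no changes)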

Hi Rick,
No issue, I was only looking for a direction to work on rather than someone giving me a direct, detailed approach. I was hoping there was a document somewhere that might relate to this problem that I could read. I did not detail the items involved because I had suspected it would be a general issue where, if anyone had upgraded from IDE to RAID, they might have found a cure and pointed me to the page/doc.

So, more details...

0. Adaptec ASR-2405 ROHS RAID controller (also tried Adaptec 1210A and 2420A)
1. 250 GB Seagate original drive, SATA
2. Identical pair of WD Black 500GB ES SATA drives (both new)
3. Cloned using Terabyte for Linux from Terabyte Unlimited in the USA.
4. Resulting text on screen before "Kernel panic":

   Waiting for driver initialisation.
   Scanning and configuring dmraid supported devices
   Scanning logical volumes
   Reading all physical volumes. This may take a while...
   No volume groups found
   Activating logical volumes
   Volume group "VolGroup00" not found
   Trying to resume device (/dev/VolGroup00/LogVol01)
   Creating root device.
   Mounting root filesystem.
   mount: could not find filesystem '/dev/root'
   Setting up other filesystems.
   Setting up new root fs
   setuproot: moving /dev failed: No such file or directory
   no fstab.sys, mounting internal defaults
   setuproot: error mounting /proc: No such file or directory
   setuproot: error mounting /sys: No such file or directory
   Switching to new root and running init.
   unmounting old /dev
   unmounting old /proc
   unmounting old /sys
   switchroot: mount failed: No such file or directory
   Kernel panic - not syncing: Attempted to kill init!

Not trying to be facetious, but is this enough? If not, what else would you like?
Thanks again.
best wishes
Tony White

Hi Luv'ers,

I am having a grub install problem. I installed 13.10 and in the process destroyed grub, and I can't boot anymore. I can, however, boot using a supergrub2 CD (..so no panic!).

If I boot from the hard drive, I get "error: dev with UUID xxxx... can't be found" (..something to that effect). I know why: UUID xxxx... is now UUID yyyy.... I changed all references to xxxx... to yyyy... in all grub.cfg files. (This box has 3 versions of Ubuntu.)

When I try to reinstall grub with grub-install /dev/sdb (note: the boot HD is the second one, but even if I select sda I get the same error), I get errors as follows:

  /usr/sbin/grub-bios-setup: warning: your core.img is unusually large. It won't fit in the embedding area.
  /usr/sbin/grub-bios-setup: error: embedding is not possible, but this is required for RAID and LVM install.

Even if I try the force option (--force), I still get the above error. (Yes, this box has LVM.)

Any ideas?

Cheers,
Daniel.

On 2013-10-26 20:45, Daniel Jitnah wrote: [...]
If I boot from hard drive, I get error: dev with UUID xxxx... can't be found. (..something to that effect)
I know that, because UUID xxxx... is now UUID yyyy ....
I changed all reference to xxxx... to yyyy... in all grub.cfg files. (this box has 3 versions of Ubuntu)
Did you check other files in /boot? IIRC GRUB uses (used?) a device map file to map UUIDs or other identifiers to GRUB-specific names (which historically looked like (hdX)), though that may have changed with GRUB2.

I gave up on GRUB a long time ago and strongly recommend Extlinux. Except that Extlinux doesn't support /boot on LVM (it supports Ext*, FAT, iso9660, btrfs, and perhaps one or two others), so my solution is: don't do that; I always use a separate /boot. If you're interested I can provide detailed instructions on setting up Extlinux; otherwise, I can't be of any more help.
--
Regards, Matthew Cengia
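A rough sketch of the sort of Extlinux setup described above (not Matthew's actual instructions; package names and the path to mbr.bin vary between distros, so treat them as placeholders):

  # /boot must be on a filesystem extlinux understands (ext*, FAT, btrfs, ...), not on LVM
  apt-get install extlinux syslinux-common
  mkdir -p /boot/extlinux
  extlinux --install /boot/extlinux                           # installs the filesystem boot sector
  dd if=/usr/lib/syslinux/mbr.bin of=/dev/sda bs=440 count=1  # generic MBR code for the boot disk
  # then write /boot/extlinux/extlinux.conf with kernel/initrd/append entries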

Matthew Cengia <mattcen@gmail.com> writes:
I gave up on GRUB a long time ago, and strongly recommend Extlinux. Except that Extlinux doesn't support /boot on LVM (it supports Ext*, FAT, iso9660, btrfs, perhaps one or two others), so my solution is: Don't do that; I always use a separate /boot.
IMO it's a bad idea to add unnecessary complications to /boot regardless of what bootloader you're using -- because when it inevitably breaks, a simpler setup makes the jigsaw puzzle easier to put back together.

On 26 October 2013 20:45, Daniel Jitnah <djitnah@greenwareit.com.au> wrote:
If I boot from hard drive, I get error: dev with UUID xxxx... can't be found. (..something to that effect)
I know that, because UUID xxxx... is now UUID yyyy ....
I changed all reference to xxxx... to yyyy... in all grub.cfg files. (this box has 3 versions of Ubuntu)
In Ubuntu I believe the correct way to do this is to edit /etc/default/grub and then run sudo update-grub, which rebuilds the grub.cfg file. If you haven't done that already, maybe give it a go?
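For instance, the standard Ubuntu sequence (the device name is illustrative; the point is to let update-grub regenerate grub.cfg rather than editing it by hand):

  sudo blkid                        # confirm the current filesystem UUIDs
  sudoedit /etc/default/grub        # adjust GRUB_* defaults if needed
  sudo update-grub                  # regenerates /boot/grub/grub.cfg with those UUIDs
  sudo grub-install /dev/sdb        # reinstall boot code to the disk the BIOS boots from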

On Sat, Oct 26, 2013 at 05:21:13PM +1100, Tony White wrote:
I did not detail the items involved because I had suspected it would be a general issue where, if anyone had upgraded from IDE to RAID, they might have found a cure and pointed me to the page/doc.
it's not an ide issue or a raid issue, it's a driver issue. specifically, the aacraid adaptec raid controller driver.
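A quick, generic way to confirm which controller is present and whether the aacraid driver is bound to it (assuming a live/rescue environment with pciutils installed; these are not commands from the thread):

  lspci -nn | grep -i -e raid -e adaptec   # identify the HBA
  lspci -k                                 # shows the kernel driver in use for each device
  lsmod | grep aacraid                     # is the module loaded at all?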
So more details...
0. Adaptec ASR-2405 ROHS RAID controller also tried Adaptec 1210A and 2420A
if you used this card as a hardware raid controller (rather than as JBOD ports) then you will need the aacraid driver loaded before linux will recognise it. this driver needs to be on the initramfs, and the initramfs needs to know that it should load the driver before trying to mount the root filesystem.

you'll need to boot a rescue cd or similar - if you still have it, your original IDE drive would make an ideal rescue system as it has the exact same version of the kernel as on the adaptec raid drives.

what kernel version are you running, and what distro? re-reading the Subject line, it appears you're using centos 5. kernel version?

anyway, you'll need to boot your rescue cd / disk, load the aacraid module, find and mount the root (and /boot if you have one) filesystems, bind-mount /proc, /dev, and /sys, chroot into them, use a distro-specific method to force the initramfs to load the aacraid driver (e.g. on debian, the simplest way is to add 'aacraid' to /etc/modules and run 'update-initramfs -u -k all'), exit the chroot, unmount the filesystems, and reboot.

for many reasons (including avoidance of proprietary lock-in - you're now stuck with adaptec controllers) you probably would have been better off using linux's built-in software raid, mdadm - the only time hardware raid is better is when you're using raid-5 or raid-6 and the card provides non-volatile write cache (so that raid5/6 write performance doesn't suck). and even then you're better off with zfs than raid-5. since the card only supports raid1 and raid10, that's not a factor.

according to http://www.adaptec.com/en-us/products/series/2/ the card has some sort of hybrid ssd+hdd raid mode, but i can't see any details on what that actually means - in any case, zfs does it better, and without proprietary lock-in (i.e. with any drives on any controllers).

amongst other benefits, converting from a single old IDE drive to mdadm raid would have "just worked" because it wouldn't have required any special drivers.
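a rough sketch of that sequence for centos 5 (untested here; device and LV names are guesses based on the boot messages, and 2.6.18-XXX.el5 is a placeholder for whatever kernel is installed under /lib/modules in the chroot):

  # from the rescue environment
  modprobe aacraid                        # make the adaptec array visible
  lvm vgscan && lvm vgchange -ay          # activate VolGroup00
  mount /dev/VolGroup00/LogVol00 /mnt     # root LV name is a guess - check with 'lvm lvs'
  mount /dev/sda1 /mnt/boot               # only if /boot is a separate partition
  mount --bind /proc /mnt/proc
  mount --bind /dev  /mnt/dev
  mount --bind /sys  /mnt/sys
  chroot /mnt

  # inside the chroot: tell the initrd to load aacraid, then rebuild it.
  # if a scsi_hostadapter alias already exists in modprobe.conf, use the next free
  # index (scsi_hostadapter1, scsi_hostadapter2, ...) instead of duplicating it.
  echo 'alias scsi_hostadapter1 aacraid' >> /etc/modprobe.conf
  mkinitrd -f /boot/initrd-2.6.18-XXX.el5.img 2.6.18-XXX.el5

  exit    # then unmount everything in reverse order and reboot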
1. 250 GB Seagate original Drive SATA
huh? i thought you said it was IDE?
2. Identical pair of WD Black 500GB ES SATA drives (both new)
for mdadm software raid, you could have just plugged these into the motherboard's SATA ports and avoided the expense and hassle of the adaptec card. if all else fails, you can always fall back to that.
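a minimal sketch of that fallback, assuming the two WD drives appear as /dev/sdb and /dev/sdc on the motherboard ports (device names are placeholders):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  mdadm --detail --scan >> /etc/mdadm.conf   # so the array is assembled at boot (centos path)
  cat /proc/mdstat                           # watch the initial resync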
3. Cloned using Terabyte for Linux from Terabyte Unlimited in the USA.
third-party tools like this aren't needed on linux - rsync will do the job, just as well, and for free.
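for example, a generic whole-system copy (not a specific recommended invocation; /mnt/newroot is a placeholder for wherever the target root filesystem is mounted):

  rsync -aHx --numeric-ids / /mnt/newroot/
  # -x stays on one filesystem, so /proc, /sys and any separate /boot are not copied;
  # copy a separate /boot the same way, then reinstall the bootloader and rebuild
  # the initrd on the target before booting from it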
4. Resulting text on screen before "Kernel panic"
  Waiting for driver initialisation.
  Scanning and configuring dmraid supported devices
  Scanning logical volumes
  Reading all physical volumes. This may take a while...
  No volume groups found
  Activating logical volumes
  Volume group "VolGroup00" not found
  Trying to resume device (/dev/VolGroup00/LogVol01)
  Creating root device.
  Mounting root filesystem.
  mount: could not find filesystem '/dev/root'
this almost certainly confirms that the aacraid driver was not loaded, so the linux kernel can't see the raid array containing the root filesystem and therefore can't find the rootfs.

craig
--
craig sanders <cas@taz.net.au>
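a quick way to test that diagnosis from a rescue shell, assuming its kernel ships the aacraid module (generic commands, nothing centos-specific):

  modprobe aacraid
  dmesg | tail             # should report the adaptec controller and its logical drive
  cat /proc/partitions     # the array should now appear as an sd* device
  lvm vgscan               # VolGroup00 should be found once the device is visible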

Quoting Tony White (tony@ycs.com.au):
Hi Rick, No issue, I was only looking for a direction to work on rather than someone giving me a direct, detailed approach. I was hoping there was a document somewhere that might relate to this problem that I could read. I did not detail the items involved because I had suspected it would be a general issue where, if anyone had upgraded from IDE to RAID, they might have found a cure and pointed me to the page/doc.
So more details...
0. Adaptec ASR-2405 ROHS RAID controller also tried Adaptec 1210A and 2420A
I second Craig's comments (for which, much thanks, Craig).

To elaborate, inexpensive RAID HBAs tend to be what are generically called 'fakeraid' devices, which are essentially proprietary software RAID with a motherboard BIOS assist for configuration and to support booting. Although each separate manufacturer implementation is different (and thus non-portable, etc.) and I referred to it as 'proprietary' formatting, most (including Adaptec's 'HostRAID' series) are documented well enough that Red Hat employee Heinz Mauelshagen's dmraid (Device Mapper RAID) tool can create/remove and manage them: http://people.redhat.com/~heinzm/sw/dmraid/readme

Manufacturer-specific fakeraid's performance and reliability lag compared to what you get using native Linux 'md' software RAID, so I'd personally wipe and start over. The only exception is where you need to dual-boot with MS-Windows and both Linux and MS-Windows must be able to mount the RAID array natively. (In general, though, dual boot sucks compared to concurrent OS usage with the help of your choice of virtual machine software, so I urge the latter over dual boot except in exceptional outlier cases.)

Back in the day, when SATA was brand new and there was almost no Linux documentation about Linux drivers for SATA chipsets, I created this page that had a great deal to say about particular HBAs being fakeraid: http://linuxmafia.com/faq/Hardware/sata.html

As noted at the top, said page is now about eight years out of date, but many general observations on it still apply.
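A small sketch of inspecting such a fakeraid set with dmraid (standard dmraid options; whether it recognises this particular Adaptec metadata format is something to check against the readme above):

  dmraid -r     # list block devices carrying fakeraid metadata
  dmraid -s     # show discovered RAID sets and their status
  dmraid -ay    # activate the sets as /dev/mapper devices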
participants (7)
- Craig Sanders
- Daniel Jitnah
- Matthew Cengia
- Rick Moen
- Toby Corkindale
- Tony White
- trentbuck@gmail.com