[luv-main] LVM ate my volume group?

Hi,

Got an odd situation here, where I can't find my volume group any more. Earlier tonight I moved my logical volumes from an old drive to a new drive.

Before, I had:

  sdf1 - /boot
  sdf2 - pv - system volume group - contained root, swap, home

To move things, I did a pvcreate on the new partition, sdg1, then 'vgextend system /dev/sdg1', then 'pvmove /dev/sdf2 /dev/sdg1'. Sure enough, it moved all my LVs over. I was then able to 'vgreduce system /dev/sdf2' and 'pvremove /dev/sdf2'. Lastly, I moved the /boot contents to be inside /, and unmounted /boot.

My next plan was to install grub2, chainloaded, since I'd need that to boot from inside an LVM. At this point, Ubuntu didn't like it, and errored on configuring grub-pc, saying it couldn't detect the filesystem. Stupidly, I decided to reboot, at which point it loaded grub (1), booted the initrd/kernel, and then couldn't find root.

Inside a rescue environment, sure enough, if I do 'pvs' I can see my PV, but it doesn't have a VG. Any suggestions on how to rectify this? Why doesn't it know it's part of the 'system' volume group? I presume once it remembers this, I should be able to boot again.

cheers,
/ Brett
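P.S. To save anyone reconstructing it from the prose above, the whole sequence I ran was just these five commands, in this order, nothing extra:

  pvcreate /dev/sdg1
  vgextend system /dev/sdg1
  pvmove /dev/sdf2 /dev/sdg1
  vgreduce system /dev/sdf2
  pvremove /dev/sdf2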

After some sleep, I'm guessing the issue is that the instructions I followed either assume you aren't removing the primary PV, or don't care. I guess the LVM metadata is stored on there, which contains the VG and LV information, and upon doing 'vgextend system /dev/newpv' it doesn't mirror that information to the new PV. So by removing the old PV, I've shot myself in the foot.

I guess I should have used vgcfgbackup/vgcfgrestore (which I've now discovered). Any other light to be shone on this situation?

The good news is that this is just my os/home drives, and I have a semi-recent backup of the /home part, so I'm not that worried about a reinstall and restore. Clearly I'd prefer not to, but it wouldn't be the end of the world.

cheers,
/ Brett
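P.S. For the archives, the vgcfgbackup/vgcfgrestore usage I mean is along these lines (the file path here is just illustrative):

  vgcfgbackup -f /root/system.vg system    # dump the VG's text metadata to a file
  vgcfgrestore -f /root/system.vg system   # write that metadata back to the PV(s)

By default vgcfgbackup writes to /etc/lvm/backup/<vgname>, and LVM keeps prior generations in /etc/lvm/archive, but of course neither helps when the root filesystem holding them is the thing you can't mount.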

I don't think that is the right conclusion. What is the output of running:

  sudo fdisk -l /dev/sdg
  sudo pvdisplay -m
  sudo vgdisplay

On 12/10/11 08:11, Brett Pemberton wrote:
> After some sleep, I'm guessing the issue is that the instructions I
> followed either assume you aren't removing the primary PV, or don't care.
> I guess the LVM metadata is stored on there, which contains the VG and LV
> information, and upon doing 'vgextend system /dev/newpv' it doesn't
> mirror that information to the new PV. So by removing the old PV, I've
> shot myself in the foot.
>
> I guess I should have used vgcfgbackup/vgcfgrestore (which I've now
> discovered). Any other light to be shone on this situation?
>
> The good news is that this is just my os/home drives, and I have a
> semi-recent backup of the /home part, so I'm not that worried about a
> reinstall and restore. Clearly I'd prefer not to, but it wouldn't be the
> end of the world.
>
> cheers,
> / Brett

On Wed, Oct 12, 2011 at 11:25 AM, Toby Corkindale <toby.corkindale@strategicdata.com.au> wrote:
> I don't think that is the right conclusion.
> What is the output of running:
>
>   sudo fdisk -l /dev/sdg
Disk /dev/sdg: 2000 GB, 2000396321280 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1       60800   488375968   8e  Linux LVM
Warning: Partition 1 does not end on cylinder boundary.
/dev/sdg2           60800      243202  1465144065   fd  Lnx RAID auto
Warning: Partition 2 does not end on cylinder boundary.
>   sudo pvdisplay -m
root@ubuntu:~# pvdisplay -m
  "/dev/sdg1" is a new physical volume of "465.74 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdg1
  VG Name
  PV Size               465.74 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               okjC2x-EKtY-8Tf7-gkqN-Q7Oj-Kixj-8aTZ2T
>   sudo vgdisplay
root@ubuntu:~# vgdisplay
  No volume groups found

I've been getting some help in #lvm on freenode. I've backed up my LVM header on /dev/sdg1 and looked at it, and found old(er) configurations still present: http://pastebin.com/ifJg309W
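(For the record, grabbing the header was just dd'ing the start of the PV out to a file; something like this, where the size and paths are my choice rather than anything canonical:

  dd if=/dev/sdg1 of=/root/sdg1-header.bin bs=512 count=2048   # first 1 MiB of the PV
  strings /root/sdg1-header.bin | less                         # the metadata is plain text

LVM keeps its text metadata in a small ring buffer at the front of the PV, overwriting oldest-first, which is why several older generations of the config are still visible in there.)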
From this, I was able to 'vgcfgrestore -f /bakupcfg system' and see my volumes again. However, they were bunk... I'm guessing the configuration I restored didn't accurately describe the layout, so instead of filesystems it just found "data", according to 'file -s'.
In the meantime, I have restored the original header. More ideas are always welcome.

cheers,
/ Brett
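P.S. By "restored the original header" I mean dd'ing the saved copy back over the start of the PV, with the same (illustrative) paths as above:

  dd if=/root/sdg1-header.bin of=/dev/sdg1 bs=512 count=2048

so the on-disk metadata should now be back to the state it was in before my vgcfgrestore experiment.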

> In the meantime, I have restored the original header.
> More ideas are always welcome.
Progress has been made! I was able to "find" my xfs filesystems for / and /home on the new PV, and dd them into a mountable state. Thanks mostly to: http://www.linuxweblog.com/backup-restore-lvm-dd

So I now have access to /etc/lvm/archive, which contains the LVM config from before I executed the last command. Allegedly I should now be able to restore my system to this state by doing:

  pvcreate -ff --uuid 3mpxNE-c7Ll-sK5G-MPWq-QZd8-oxvh-E0M7ex --restorefile /tmp/archive/system_00034.vg /dev/sdg1

to restore the PV to how it was, then:

  vgcfgrestore --file /tmp/archive/system_00034.vg system

to restore the VGs. However, the next snag is:

root@ubuntu:/tmp# pvcreate -ff --uuid 3mpxNE-c7Ll-sK5G-MPWq-QZd8-oxvh-E0M7ex --restorefile /tmp/archive/system_00034.vg /dev/sdg1
  Couldn't find device with uuid aMQJEI-DzSA-BzZ8-Kk8G-tOyz-m2n1-VWXTOe.
  Couldn't find device with uuid 3mpxNE-c7Ll-sK5G-MPWq-QZd8-oxvh-E0M7ex.
  Can't open /dev/sdg1 exclusively.  Mounted filesystem?

Can't open /dev/sdg1 exclusively? Why not? No filesystems are mounted, and lsof says nothing is open. Googling suggests dmraid may have 'stolen' the partition, but I have no idea how to reclaim it.

At least if tonight I need to trash and start from scratch, I have my filesystems to restore to.

cheers,
/ Brett
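P.S. For anyone curious about the dd step: the archived LVM config gives you pe_start, the extent size, and each LV's starting extent, so (per the linuxweblog recipe) you can compute a sector offset into the PV and copy the raw filesystem out. A sketch, with every number below invented for illustration:

  # LV offset into the PV, in 512-byte sectors = pe_start + first_extent * extent_size
  # e.g. pe_start = 384, extent_size = 8192 (4 MiB extents), LV starting at extent 0
  dd if=/dev/sdg1 of=/mnt/scratch/root.img bs=512 skip=384 count=16777216
  mount -o loop /mnt/scratch/root.img /mnt/root   # the image is then a plain xfs filesystem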
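P.P.S. On the dmraid theory: what I've found so far (untested by me, so treat as hearsay) is that if dmraid sees leftover fakeraid metadata it maps the whole disk through device-mapper and holds the partitions busy, which would explain the exclusive-open failure. The suggested way to check and undo it:

  dmraid -r                  # list raid sets dmraid thinks it has discovered
  dmsetup ls                 # list active device-mapper mappings
  dmsetup remove <mapping>   # drop the offending mapping so /dev/sdg1 opens again

Allegedly 'dmraid -rE /dev/sdg' will erase the stale metadata for good, though that one writes to the disk, so I'd want to be very sure before running it.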