
I've got some filesystems that I want to convert to BTRFS. They currently are Ext3/4 on Linux Software RAID-1. My plan is to break the RAID, convert one of the constituent devices with the btrfs-convert program, and then add the other one via "btrfs device add".

How do I get the new BTRFS device to use RAID-1 for data and metadata? Avi, from your talk I got the impression that this was already possible but the man page doesn't reveal how.

I'm using btrfs-tools version 0.19+20120328-3 and kernel 3.2.0-2-amd64 from Debian/Unstable. Do I need to get a newer version of the kernel or the tools?

PS Great talk Avi, it was good that you had more than an hour. Could you give another talk in Dec or Feb?

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
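A rough sketch of the "break the RAID" step with mdadm, assuming placeholder names /dev/md0 and /dev/sdb1 (whether the freed member is directly usable by btrfs-convert depends on the md metadata version and where it sits on the device):

# mdadm /dev/md0 --fail /dev/sdb1          # mark one member as failed
# mdadm /dev/md0 --remove /dev/sdb1        # pull it out; the array keeps running degraded
# mdadm --grow /dev/md0 --raid-devices=1 --force   # optional: shrink the array so md stops expecting a second member

The freed /dev/sdb1 is then available either for btrfs-convert (if the ext3/4 filesystem is visible directly on the member) or for a fresh mkfs.btrfs, while the degraded array keeps serving the old data.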

On 17/06/2012, at 8:21 PM, Russell Coker wrote:
How do I get the new BTRFS device to use RAID-1 for data and metadata? Avi, from your talk I got the impression that this was already possible but the man page doesn't reveal how.
# mkfs.btrfs -m raid1 -d raid1 /dev/sdX
I'm using btrfs-tools version 0.19+20120328-3 and kernel 3.2.0-2-amd64 from Debian/Unstable. Do I need to get a newer version of the kernel or the tools?
Nope, that'll work. Though, I do recommend a 3.4 kernel for the latest btrfs set.
PS Great talk Avi, it was good that you had more than an hour. Could you give another talk in Dec or Feb?
Not sure I'll be in the country in December, but February may be possible. Will have to see closer to the time.

On Sun, 17 Jun 2012, Avi Miller <avi.miller@gmail.com> wrote:
On 17/06/2012, at 8:21 PM, Russell Coker wrote:
My plan is to break the RAID, convert one of the constituent devices with the btrfs-convert program, and then add the other one via "btrfs device add".
How do I get the new BTRFS device to use RAID-1 for data and metadata? Avi, from your talk I got the impression that this was already possible but the man page doesn't reveal how.
# mkfs.btrfs -m raid1 -d raid1 /dev/sdX
That works if you have empty disk space. But if you are converting from ext3/4 then mkfs isn't the option. The only way I can do this without some variant of backup/format/restore which involves backing up 260G of data at a time is to convert one half of the RAID to BTRFS and then add the other back in. With Linux Software RAID-1 it's a standard installation process to install on a degraded RAID array and then insert a second disk to complete the array afterwards. I'd like to do something similar with BTRFS but starting with btrfs-convert.
I'm using btrfs-tools version 0.19+20120328-3 and kernel 3.2.0-2-amd64 from Debian/Unstable. Do I need to get a newer version of the kernel or the tools?
Nope, that'll work. Though, I do recommend a 3.4 kernel for the latest btrfs set.
Debian won't be supporting 3.4 for ages, we are in the middle of finalising a release with 3.2 and I think it's safe to predict that the kernel team will get that well out of the way before even thinking about 3.4. So the question is, is it worth the pain of running an Oracle kernel with Debian for the newer BTRFS code?

Also I'll probably install some systems with Oracle Dom0 and Debian DomUs, that should be easy to setup and manage.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/

On Sun, 17 Jun 2012, Avi Miller <avi.miller@gmail.com> wrote:
On 17/06/2012, at 8:21 PM, Russell Coker wrote:
My plan is to break the RAID, convert one of the constituent devices with the btrfs-convert program, and then add the other one via "btrfs device add".
How do I get the new BTRFS device to use RAID-1 for data and metadata? Avi, from your talk I got the impression that this was already possible but the man page doesn't reveal how.
# mkfs.btrfs -m raid1 -d raid1 /dev/sdX
That works if you have empty disk space. But if you are converting from ext3/4 then mkfs isn't the option.
The only way I can do this without some variant of backup/format/restore which involves backing up 260G of data at a time is to convert one half of the RAID to BTRFS and then add the other back in.
With Linux Software RAID-1 it's a standard installation process to install on a degraded RAID array and then insert a second disk to complete the array afterwards. I'd like to do something similar with BTRFS but starting with btrfs-convert.
In the absence of some nifty btrfs-foo, if you're only talking about 260GB and you have a couple of external USB disks and there isn't a particularly high io load you should be able to:

1. break your raid
2. mirror existing RAID onto one of the USB's
3. create BTRFS onto the other original RAID disk and the other USB
4. copy data across
5. remove both USB's (maybe a good test of BTRFS to see how it likes one of its disks disappearing!)
6. tear down the old RAID and add the disk into the BTRFS in place of the now-removed USB

If you can handle a temporary lack of redundancy you could use one USB disk and skip step #2. The advantage of doing the above though is that you have a ready-to-go copy of your data in case BTRFS suffers a sudden and unexplained total existence failure while you are putting it through its paces.

James
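A rough command-level sketch of steps 3, 4 and 6 above (device and mount-point names are invented: /dev/sdc1 for the freed RAID member, /dev/sdd1 for one USB disk, /dev/sdb1 for the disk added back at the end; the balance subcommand form varies between btrfs-progs versions):

# mkfs.btrfs -m raid1 -d raid1 /dev/sdc1 /dev/sdd1   # step 3: new two-device btrfs
# mount /dev/sdc1 /mnt/new
# rsync -aHAX /mnt/old/ /mnt/new/                    # step 4: copy the data across

Later, after unplugging the USB and tearing down the old RAID (a filesystem with a member missing may need mount -o degraded):

# btrfs device add /dev/sdb1 /mnt/new                # step 6: ex-RAID disk joins the filesystem
# btrfs device delete missing /mnt/new               # drop the record of the unplugged USB
# btrfs balance start /mnt/new                       # optional: rebalance chunks across the two present disks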

----- Original Message -----
From: "James Harper" <james.harper@bendigoit.com.au> 5. remove both USB's (maybe a good test of BTRFS to see how it likes one of its disks disappearing!)
In my testing (albeit only up to kernel 3.2), the result of that was a complete kernel segfault and halt. Btrfs really did NOT like having its devices disappear from under it!

----- Original Message -----
From: "James Harper" <james.harper@bendigoit.com.au> 5. remove both USB's (maybe a good test of BTRFS to see how it likes one of its disks disappearing!)
In my testing (albeit only up to kernel 3.2), the result of that was a complete kernel segfault and halt. Btrfs really did NOT like having its devices disappear from under it!
Ouch. Better than losing your data (assuming you didn't!) but still not really desirable. I wonder if it was BTRFS causing the crash or the USB subsystem. James

For the record, ZFS on the same system coped fine with the same test. (ie. Rudely unplugged USB drives that were part of a RAIDZ array). (The kernel md infrastructure coped OK too)

----- Original Message -----
From: "James Harper" <james.harper@bendigoit.com.au>
To: "Toby Corkindale" <toby.corkindale@strategicdata.com.au>
Cc: russell@coker.com.au, luv-main@luv.asn.au
Sent: Monday, 18 June, 2012 12:05:54 PM
Subject: RE: BTRFS conversion
----- Original Message -----
From: "James Harper" <james.harper@bendigoit.com.au> 5. remove both USB's (maybe a good test of BTRFS to see how it likes one of its disks disappearing!)
In my testing (albeit only up to kernel 3.2), the result of that was a complete kernel segfault and halt. Btrfs really did NOT like having its devices disappear from under it!
Ouch. Better than losing your data (assuming you didn't!) but still not really desirable. I wonder if it was BTRFS causing the crash or the USB subsystem. James

On 18/06/12 14:42, Toby Corkindale wrote:
For the record, ZFS on the same system coped fine with the same test.
It's a bit of an apples to oranges comparison though, ZFS is a mature filesystem with years of in the field use, btrfs is still in development and clearly marked in the kernel as experimental.

cheers,
Chris
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

On Mon, 18 Jun 2012, Chris Samuel <chris@csamuel.org> wrote:
It's a bit of an apples to oranges comparison though, ZFS is a mature filesystem with years of in the field use, btrfs is still in development and clearly marked in the kernel as experimental.
It's also supported by Oracle and described by their employees (such as Avi) as being ready for production use. Oracle isn't known for losing people's data.

On Mon, 18 Jun 2012, Jason White <jason@jasonjgw.net> wrote:
Btrfs is approximately 5 years old now, by my estimate. How long does it usually take for a new file system to reach the point at which most users in most scenarios don't run into serious problems?
Since they made BTRFS handle lack of disk space better, BTRFS has been working well for most users in most common desktop scenarios. I've had some less important machines running BTRFS for /home for well over a year without any problems. But in terms of how long it takes for a filesystem to be regarded as ready, some people claim that it's as long as 10 years elapsed or 100 person-years of development.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/

On 18/06/12 15:22, Russell Coker wrote:
It's also supported by Oracle and described by their employees (such as Avi) as being ready for production use.
I would suggest that any suggestion that btrfs is ready for production use is, to quote Sir Humphrey Appleby, "courageous". :-)

You only need to follow the btrfs list for a while to hear what can happen to users and yes, that has led to people losing data (because they failed to have backups when their filesystem went belly up).

cheers,
Chris
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

Chris Samuel <chris@csamuel.org> wrote:
You only need to follow the btrfs list for a while to hear what can happen to users and yes, that has led to people losing data (because they failed to have backups when their filesystem went belly up).
I remember when I was doing voluntary "beta" testing of the Linux version of XFS. The traffic on the XFS list often fit the above description. A possible difference, though, is that many of the problems turned out to be hardware-related, but some were indeed due to bugs that were then fixed.

On 18/06/2012, at 11:51 AM, Toby Corkindale wrote:
----- Original Message -----
From: "James Harper" <james.harper@bendigoit.com.au> 5. remove both USB's (maybe a good test of BTRFS to see how it likes one of its disks disappearing!)
In my testing (albeit only up to kernel 3.2), the result of that was a complete kernel segfault and halt. Btrfs really did NOT like having its devices disappear from under it!
This shouldn't happen in 3.4 or higher - but you're right that btrfs really doesn't like having its devices disappear.

On 18/06/12 12:08, Avi Miller wrote:
This shouldn't happen in 3.4 or higher - but you're right that btrfs really doesn't like having its devices disappear.
Yup. Also be warned that if you overfill your filesystem you may find you cannot delete any files, as btrfs will want to COW the metadata first and fail. Two possible workarounds:

1) echo > /btrfs/very-large-file.iso
2) Remount with nodatacow.

Also note that one nasty regression has been reported against 3.4 over 3.2: it can massively increase its metadata usage - the reported example showed a filesystem exploding from 10GB of metadata to 84GB but with only about 6GB actually used. It's been bisected to cf1d72c9ceec391d34c48724da57282e97f01122, and apparently a btrfs balance can bring it back down again, but I don't believe it's been fixed yet.

It's for very good reason btrfs is described as:

Btrfs is highly experimental, and THE DISK FORMAT IS NOT YET FINALIZED. You should say N here unless you are interested in testing Btrfs with non-critical data.

cheers,
Chris
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
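For reference, those two workarounds as commands (the file name and mount point are just the ones from the example above):

# echo > /btrfs/very-large-file.iso        # truncate a large file in place instead of unlinking it
# mount -o remount,nodatacow /btrfs        # or temporarily remount with data COW disabled

Once some space has been freed the filesystem can be remounted with its normal options.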

Chris Samuel <chris@csamuel.org> wrote:
It's for very good reason btrfs is described as:
Btrfs is highly experimental, and THE DISK FORMAT IS NOT YET FINALIZED. You should say N here unless you are interested in testing Btrfs with non-critical data.
Btrfs is approximately 5 years old now, by my estimate. How long does it usually take for a new file system to reach the point at which most users in most scenarios don't run into serious problems? Note that EXT3 to EXT4, for example, isn't a good comparison because it's essentially a revision of an existing design rather than a new one. Of course, Btrfs is also more complex than anything that comes to mind other than Zfs and perhaps certain network-based file systems that have their own on-disk formats. The question is: how is Btrfs progressing relative to what one would expect in this time-frame?

On Mon, 18 Jun 2012, Jason White wrote:
Btrfs is approximately 5 years old now, by my estimate. How long does it usually take for a new file system to reach the point at which most users in most scenarios don't run into serious problems?
ZFS in Solaris: http://en.wikipedia.org/wiki/ZFS#Release_history

ZFS was designed and implemented by a team at Sun led by Jeff Bonwick. It was announced on September 14, 2004.[5] Source code for ZFS was integrated into the main trunk of Solaris development on October 31, 2005[6] and released as part of build 27 of OpenSolaris on November 16, 2005. Sun announced that ZFS was included in the 6/06 update to Solaris 10 in June 2006, one year after the opening of the OpenSolaris community.[7]

ZFS in FreeBSD:

FreeBSD 7.0-RELEASE Announcement (27 Feb 2008) - Experimental support for Sun's ZFS filesystem
FreeBSD 8.0-RELEASE Announcement (November 2009) - ZFS is no longer in experimental status.

Regards
Peter

On Mon, 18 Jun 2012 15:01:04 +1000, Jason White <jason@jasonjgw.net> wrote:
Chris Samuel <chris@csamuel.org> wrote:
It's for very good reason btrfs is described as:
Btrfs is highly experimental, and THE DISK FORMAT IS NOT YET FINALIZED. You should say N here unless you are interested in testing Btrfs with non-critical data.
Btrfs is approximately 5 years old now, by my estimate. How long does it usually take for a new file system to reach the point at which most users in most scenarios don't run into serious problems?
Well, Stewart's rule is 10 years, but others have a rule of 5 years. Note that ext3 is probably about 10, ext2 may be 15 or so (call me ignorant), XFS is nearing 20 and ZFS loses a few years of maturity for being not very widely used.

Not all filesystems have a fsck either. Things do go horribly wrong, and more often than anyone would like.

--
Stewart Smith

On Mon, 18 Jun 2012, Avi Miller wrote:
On 18/06/2012, at 11:51 AM, Toby Corkindale wrote:
----- Original Message -----
From: "James Harper" <james.harper@bendigoit.com.au> 5. remove both USB's (maybe a good test of BTRFS to see how it likes one of its disks disappearing!)
In my testing (albeit only up to kernel 3.2), the result of that was a complete kernel segfault and halt. Btrfs really did NOT like having its devices disappear from under it!
This shouldn't happen in 3.4 or higher - but you're right that btrfs really doesn't like having its devices disappear.
Multiply redundant single points of failure. I love it :)

--
Tim Connors

On Mon, Jun 18, 2012 at 02:29:05PM +1000, Tim Connors wrote:
Multiply redundant single points of failure. I love it :)
i prefer the term Multiple Catastrophic Points of Failure.

and just remember, no matter how badly you design something it's almost certainly possible to put in a bit more (or less) effort and come up with something much worse. so don't give up too easily.

craig
(considering switching careers to be a motivational speaker)

--
craig sanders <cas@taz.net.au>

BOFH excuse #42: spaghetti cable cause packet failure

From: "Avi Miller" <avi.miller@gmail.com> To: "Toby Corkindale" <toby.corkindale@strategicdata.com.au> Cc: "James Harper" <james.harper@bendigoit.com.au>, russell@coker.com.au, luv-main@luv.asn.au Sent: Monday, 18 June, 2012 12:08:27 PM Subject: Re: BTRFS conversion
On 18/06/2012, at 11:51 AM, Toby Corkindale wrote:
----- Original Message -----
From: "James Harper" <james.harper@bendigoit.com.au> 5. remove both USB's (maybe a good test of BTRFS to see how it likes one of its disks disappearing!)
In my testing (albeit only up to kernel 3.2), the result of that was a complete kernel segfault and halt. Btrfs really did NOT like having its devices disappear from under it!
This shouldn't happen in 3.4 or higher - but you're right that btrfs really doesn't like having its devices disappear.
I had hoped that it would, at best, continue to work (as the devices were set up in a RAID format), or at worst, just throw some errors. The complete halting of the machine was rather abrupt :/ I think I struggled to get btrfs to re-accept the USB drives and to start remirroring as well, but hopefully that has improved too? (Last time I checked was on 3.2.0 with a self-compiled version of the userland, much earlier this year)

Russell Coker wrote:
Nope, that'll work. Though, I do recommend a 3.4 kernel for the latest btrfs set.
Debian won't be supporting 3.4 for ages, we are in the middle of finalising a release with 3.2 and I think it's safe to predict that the kernel team will get that well out of the way before even thinking about 3.4. So the question is, is it worth the pain of running an Oracle kernel with Debian for the newer BTRFS code?
Some of the #btrfs denizens on Freenode seem to be enthusiastically running btrfs DKMS modules on top of their distro's stock (stable) kernel. I didn't inquire as to details because I didn't care, but perhaps it would suit you.

Trent W. Buck <trentbuck@gmail.com> wrote:
Russell Coker wrote:
Debian won't be supporting 3.4 for ages, we are in the middle of finalising a release with 3.2 and I think it's safe to predict that the kernel team will get that well out of the way before even thinking about 3.4. So the question is, is it worth the pain of running an Oracle kernel with Debian for the newer BTRFS code?
Some of the #btrfs denizens on Freenode seem to be enthusiastically running btrfs DKMS modules on top of their distro's stock (stable) kernel. I didn't inquire as to details because I didn't care, but perhaps it would suit you.
Or you could install, e.g., linux-image-3.4-trunk-amd64 from experimental. Caveat: I don't know whether they're keeping up with stable releases in the 3.4 series, but at least it's a 3.4 kernel.
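Assuming the experimental suite is already listed in APT's sources, that would be something like:

# echo 'deb http://ftp.debian.org/debian experimental main' >> /etc/apt/sources.list
# apt-get update
# apt-get -t experimental install linux-image-3.4-trunk-amd64

The -t experimental target is needed because packages from experimental are pinned low and never installed by default.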

Hi,

My apologies - I read and replied late last night without reading properly.

On 17/06/2012, at 10:27 PM, Russell Coker wrote:
# mkfs.btrfs -m raid1 -d raid1 /dev/sdX
That works if you have empty disk space. But if you are converting from ext3/4 then mkfs isn't the option.
You're right. In order to do what you want, you would do a btrfs-convert on a single disk, which would result in a default btrfs filesystem, i.e. duplicated metadata, single data. Once you're happy with that, you can then add the other disk to the btrfs filesystem:

# btrfs device add /mount/

After both devices are in the filesystem, rebalance to copy chunks across all devices:

# btrfs balance start /mount/

Though, you want to get the rebalance engine to change the layout, so you can use the convert option:

# btrfs balance convert raid1 /mount/

Check the cmd_balance.c source to verify this command and check the options. I'm betting your ability to read C is better than mine: http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs.git;a=blob;f=cmd...
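Putting that together, the whole sequence might look something like this (a sketch only: /dev/sda1 stands in for the freed ex-RAID member holding the ext3/4 data, /dev/sdb1 for the second disk, and the -dconvert/-mconvert balance filters need the 3.3+ restriper plus a matching btrfs-progs, so check the man page or source as Avi suggests):

# btrfs-convert /dev/sda1                                    # in-place ext3/4 -> btrfs (single data, DUP metadata)
# mount /dev/sda1 /mnt
# btrfs device add /dev/sdb1 /mnt                            # second disk joins the filesystem
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt   # restripe data and metadata as RAID-1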
is, is it worth the pain of running an Oracle kernel with Debian for the newer BTRFS code?
The current Oracle release code is probably at the same point as the 3.2 kernel, so this is probably not necessary. When we do a major btrfs mainline merge in a few months, it may be useful.
Also I'll probably install some systems with Oracle Dom0 and Debian DomUs, that should be easy to setup and manage.
Actually, I would probably reverse this: use a Debian Dom0 with Oracle DomUs. The only supported Oracle Dom0 is Oracle VM, for which we do not provide 0-day updates via public-yum (only ULN, which requires a support subscription). Also, Oracle VM 3.1.1 requires an Oracle VM Manager install, which usually implies another machine (or a VM somewhere else). However, you can run Oracle Linux as a PVHVM or PVM guest of any Xen/Dom0 combination you already have.

On Mon, 18 Jun 2012, Avi Miller <avi.miller@gmail.com> wrote:
Also I'll probably install some systems with Oracle Dom0 and Debian DomUs, that should be easy to setup and manage.
Actually, I would probably reverse this: use a Debian Dom0 with Oracle DomUs. The only supported Oracle Dom0 is Oracle VM, for which we do not provide 0-day updates via public-yum (only ULN, which requires a support subscription). Also, Oracle VM 3.1.1 requires an Oracle VM Manager install, which usually implies another machine (or a VM somewhere else). However, you can run Oracle Linux as a PVHVM or PVM guest of any Xen/Dom0 combination you already have.
The problem with this is that accessing data from the Dom0 is required for setting up the DomU and debugging certain types of problem.

mkfs.btrfs -d raid1 -m raid1 /dev/btva/smtp /dev/btvb/smtp

Yesterday I tried to convert a DomU to BTRFS with a VG for each physical disk and an LV from each one to expose the RAID-1 to the DomU. I ran the above mkfs command and then mounted the filesystem in the Dom0 to get the following result. So not only did it fail to work but it took the Dom0 down as well.

Jun 24 20:08:42 ns kernel: [12526.463617] device fsid 0b88fa65-384e-481d-95fb-7d3eee79c304 devid 1 transid 3 /dev/btva/smtp
Jun 24 20:08:42 ns kernel: [12526.542930] device fsid 0b88fa65-384e-481d-95fb-7d3eee79c304 devid 2 transid 3 /dev/btvb/smtp
Jun 24 20:09:07 ns kernel: [12550.696099] device fsid 0b88fa65-384e-481d-95fb-7d3eee79c304 devid 1 transid 4 /dev/mapper/btva-smtp
Jun 24 20:09:07 ns kernel: [12550.725410] btrfs: disk space caching is enabled
Jun 24 20:09:07 ns kernel: [12550.748496] unable to find logical 14431102701568 len 4096
Jun 24 20:09:07 ns kernel: [12550.750160] ------------[ cut here ]------------
Jun 24 20:09:07 ns kernel: [12550.751783] kernel BUG at /build/buildd-linux_3.2.20-1-i386-Y0s6CA/linux-3.2.20/fs/btrfs/volumes.c:2932!
Jun 24 20:09:07 ns kernel: [12550.752086] invalid opcode: 0000 [#1] SMP
Jun 24 20:09:07 ns kernel: [12550.752086] Modules linked in: dm_mirror dm_region_hash dm_log ums_cypress usb_storage uas btrfs crc32c libcrc32c xt_mark iptable_mangle cls_fw sch_sfq sch_htb xt_physdev xen_netback xen_blkback xen_gntdev xen_evtchn xenfs ppp_deflate zlib_deflate bsd_comp softdog ppp_async crc_ccitt ppp_generic slhc ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack xt_tcpudp ipt_REJECT ipt_LOG iptable_filter ip_tables x_tables bridge stp snd_intel8x0 snd_ac97_codec snd_pcm snd_page_alloc snd_timer ac97_bus snd soundcore psmouse e100 pcspkr serio_raw tg3 8139too evdev 8139cp i2c_i801 mii i915 libphy floppy parport_pc video parport uhci_hcd drm_kms_helper drm ehci_hcd i2c_algo_bit usbcore shpchp processor button iTCO_wdt usb_common iTCO_vendor_support thermal_sys rng_core lm85 hwmon_vid i2c_core autofs4 ext3 jbd dm_mod raid1 md_mod ext4 crc16 jbd2 mbcache xen_blkfront sd_mod crc_t10dif ata_generic ata_piix libata scsi_mod
Jun 24 20:09:07 ns kernel: [12550.752086]
Jun 24 20:09:07 ns kernel: [12550.752086] Pid: 9849, comm: mount Not tainted 3.2.0-2-686-pae #1 Hewlett-Packard HP d530 SFF(DC578AV)/085Ch
Jun 24 20:09:07 ns kernel: [12550.752086] EIP: 0061:[<ee35d887>] EFLAGS: 00010282 CPU: 0
Jun 24 20:09:07 ns kernel: [12550.752086] EIP is at __btrfs_map_block+0xe4/0xab6 [btrfs]
Jun 24 20:09:07 ns kernel: [12550.752086] EAX: 00000044 EBX: caf83cd8 ECX: caf83c10 EDX: ee386da8
Jun 24 20:09:07 ns kernel: [12550.752086] ESI: 00c01000 EDI: 00000d20 EBP: 00000001 ESP: caf83c0c
Jun 24 20:09:07 ns kernel: [12550.752086] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0069
Jun 24 20:09:07 ns kernel: [12550.752086] Process mount (pid: 9849, ti=caf82000 task=c26850c0 task.ti=caf82000)
Jun 24 20:09:07 ns kernel: [12550.752086] Stack:
Jun 24 20:09:07 ns kernel: [12550.752086]  ee386da8 00c01000 00000d20 00001000 00000000 c10ecb70 00001000 00000000
Jun 24 20:09:07 ns kernel: [12550.752086]  00000000 00000000 caf83d4c 00001000 ee359631 c57a50b0 00000000 00000000
Jun 24 20:09:07 ns kernel: [12550.752086]  00000000 00000000 90006008 c57a50ac 00000000 cb3c51c0 d01b7380 00000000
Jun 24 20:09:07 ns kernel: [12550.752086] Call Trace:
Jun 24 20:09:07 ns kernel: [12550.752086]  [<c10ecb70>] ? bio_add_page+0x3f/0x46
Jun 24 20:09:07 ns kernel: [12550.752086]  [<ee359631>] ? submit_extent_page.isra.18+0x15c/0x1a3 [btrfs]
Jun 24 20:09:07 ns kernel: [12550.752086]  [<ee358cd2>] ? repair_io_failure+0x19a/0x19a [btrfs]
Jun 24 20:09:07 ns kernel: [12550.752086]  [<ee359c0f>] ? __extent_read_full_page+0x578/0x64a [btrfs]
Jun 24 20:09:07 ns kernel: [12550.752086]  [<ee3604dd>] ? btrfs_map_bio+0x6d/0x1ba [btrfs]
Jun 24 20:09:07 ns kernel: [12550.752086]  [<ee33d654>] ? btrfs_wq_submit_bio+0x130/0x130 [btrfs]
Jun 24 20:09:07 ns kernel: [12550.752086]  [<ee356766>] ? submit_one_bio+0x85/0xb4 [btrfs]
Jun 24 20:09:07 ns kernel: [12550.752086]  [<ee35b5e9>] ? read_extent_buffer_pages+0x22a/0x299 [btrfs]
Jun 24 20:09:07 ns kernel: [12550.752086]  [<ee33ca24>] ? btree_read_extent_buffer_pages.isra.70+0x3d/0x9b [btrfs]
Jun 24 20:09:07 ns kernel: [12550.752086]  [<ee33b959>] ? lock_page+0x1f/0x1f [btrfs]
Jun 24 20:09:07 ns kernel: [12550.752086]  [<ee33d967>] ? read_tree_block+0x2d/0x3e [btrfs]
Jun 24 20:09:07 ns kernel: [12550.752086]  [<ee3400e9>] ? open_ctree+0xd4b/0x12ae [btrfs]
Jun 24 20:09:07 ns kernel: [12550.752086]  [<c1162ed5>] ? snprintf+0x16/0x18
Jun 24 20:09:07 ns kernel: [12550.752086]  [<c110a61b>] ? disk_name+0x1f/0x5b
Jun 24 20:09:07 ns kernel: [12550.752086]  [<ee326de9>] ? btrfs_mount+0x43c/0x71f [btrfs]
Jun 24 20:09:07 ns kernel: [12550.752086]  [<c10a8b8d>] ? pcpu_next_pop+0x28/0x2f
Jun 24 20:09:07 ns kernel: [12550.752086]  [<c10a9971>] ? pcpu_alloc+0x6b6/0x6cc
Jun 24 20:09:07 ns kernel: [12550.752086]  [<c10c0f5e>] ? __kmalloc_track_caller+0x9b/0xa7
Jun 24 20:09:07 ns kernel: [12550.752086]  [<c10ce813>] ? mount_fs+0x55/0x122
Jun 24 20:09:07 ns kernel: [12550.752086]  [<c10dea29>] ? vfs_kern_mount+0x4a/0x77
Jun 24 20:09:07 ns kernel: [12550.752086]  [<c10ded49>] ? do_kern_mount+0x2f/0xac
Jun 24 20:09:07 ns kernel: [12550.752086]  [<c10e0053>] ? do_mount+0x5d0/0x61e
Jun 24 20:09:07 ns kernel: [12550.752086]  [<c12c0dc1>] ? _cond_resched+0x5/0x18
Jun 24 20:09:07 ns kernel: [12550.752086]  [<c10a67a9>] ? memdup_user+0x26/0x43
Jun 24 20:09:07 ns kernel: [12550.752086]  [<c10e02da>] ? sys_mount+0x67/0x96
Jun 24 20:09:07 ns kernel: [12550.752086]  [<c12c1f24>] ? syscall_call+0x7/0xb
Jun 24 20:09:07 ns kernel: [12550.752086] Code: 89 4c 24 14 e8 7b ea ff ff 58 59 8b 4c 24 0c 85 c9 75 1a 8b 9c 24 88 00 00 00 ff 73 04 ff 33 57 56 68 a8 6d 38 ee e8 a1 fd f5 d2 <0f> 0b 8b 51 10 8b 41 0c 39 fa 89 44 24 48 89 54 24 4c 77 2e 72
Jun 24 20:09:07 ns kernel: [12550.752086] EIP: [<ee35d887>] __btrfs_map_block+0xe4/0xab6 [btrfs] SS:ESP 0069:caf83c0c
Jun 24 20:09:07 ns kernel: [12551.134318] ---[ end trace 03d1e882374604b4 ]---

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/

On 25/06/12 14:48, Russell Coker wrote:
So not only did it fail to work but it took the Dom0 down as well.
Report it to the btrfs list please.. Be warned they'll likely ask you to upgrade to 3.5-rc4 to see if the problem still exists there..

cheers,
Chris
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

On 25/06/2012, at 2:48 PM, Russell Coker wrote:
The problem with this is that accessing data from the Dom0 is required for setting up the DomU and debugging certain types of problem.
I use another VM for this, i.e. I block-attach the disk images to another VM, and don't rely on support within Dom0.
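For example, with the xm toolstack that is roughly (the domain name "debugvm" and the LV path are invented, and xl has an equivalent subcommand):

# xm block-attach debugvm phy:/dev/vg0/smtp xvdb w   # hand the LV to a helper DomU as /dev/xvdb, writable

Then inside the helper DomU:

# mount /dev/xvdb /mnt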
Yesterday I tried to convert a DomU to BTRFS with a VG for each physical disk and an LV from each one to expose the RAID-1 to the DomU.
I second Chris' email: please send that to the btrfs list. Out of curiosity, did you run "btrfs device scan" first? Or perhaps try mounting it and specifying all the devices in the mount options (see the wiki for syntax)?
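For anyone hitting the same thing, those two suggestions look roughly like this (the mount point is a placeholder and the LV names are taken from the log above; the device= syntax is the one documented on the btrfs wiki for multi-device filesystems):

# btrfs device scan
# mount -o device=/dev/btva/smtp,device=/dev/btvb/smtp /dev/btva/smtp /mnt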
participants (12):
- Avi Miller
- Brian May
- Chris Samuel
- Craig Sanders
- James Harper
- Jason White
- Peter Ross
- Russell Coker
- Stewart Smith
- Tim Connors
- Toby Corkindale
- Trent W. Buck