
Is there a way that I can, on a running system, mark a btrfs disk as having failed, so that it will now be "missing" and the array will be in a degraded state?

I can obviously do it by using fdisk to delete the partition, then rebooting and mounting with the degraded option, but I want to do it without a reboot, and without having to tinker with the boot process remotely.

I can also delete the device (which moves all the data off it), then add the new device and rebalance to move all the data back. That would be safer, but terribly slow. Also I'm not sure I have enough free space to allow this (maybe I do, but it would be tight enough that I can't be sure there wouldn't be some overhead I haven't taken into account).

Thanks

James
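PS. For concreteness, the slow route I mean would be something like the following; /dev/sdX (the failing disk), /dev/sdY (its replacement) and /mnt/data are only stand-in names:

    # "device delete" relocates the data off the old disk as part of removing it
    btrfs device delete /dev/sdX /mnt/data
    btrfs device add /dev/sdY /mnt/data
    # spread the data back across all disks
    btrfs balance start /mnt/data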

Hi James,

I found this in the btrfs manpage:

    device delete <dev> [<dev>..] <path>
        Remove device(s) from a filesystem identified by <path>.

    device add <dev> [<dev>..] <path>
        Add device(s) to the filesystem identified by <path>.

https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices#Re... gives more details, including the "delete missing" option, which I cannot find in the manpage on my CentOS 6 box quoted above.

I hope it helps. I looked it up out of curiosity; I am used to ZFS, with "zpool offline" and "zpool replace".

Regards

Peter
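PS. If I read the wiki page right, replacing a disk that has already dropped out would go something like this; the device names and mount point are only examples:

    # mount with the failed disk absent
    mount -o degraded /dev/sdb1 /mnt/data
    # add the replacement, then drop the record of the missing disk
    btrfs device add /dev/sdc1 /mnt/data
    btrfs device delete missing /mnt/data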
On Mon, Feb 1, 2016 at 10:52 PM, James Harper via luv-main <luv-main@luv.asn.au> wrote:

> Is there a way that I can, on a running system, mark a btrfs disk as having failed, so that it will now be "missing" and the array will be in a degraded state?
> I can obviously do it by using fdisk to delete the partition, then rebooting and mounting with the degraded option, but I want to do it without a reboot, and without having to tinker with the boot process remotely.
> I can also delete the device (which moves all the data off it), then add the new device and rebalance to move all the data back. That would be safer, but terribly slow. Also I'm not sure I have enough free space to allow this (maybe I do, but it would be tight enough that I can't be sure there wouldn't be some overhead I haven't taken into account).
> Thanks
> James