Changing partitions and ZFS disk layout without downtime?

When setting up my SSD zpool – which I mainly wanted so there would be *no* constant disk access to my disk-based zpool – I was lazy. Laziness almost never pays off.

Before starting with ZFS, I had almost my full SSD as an LVM physical volume (everything except the boot partition). Then I added my external disk cabinet and set up a mirrored ZFS pool on it:
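For reference, a mirrored pool like the one below can be created with a single command. This is just a sketch – the device names are examples, so check yours with lsblk first:

```shell
# Create a mirrored pool named nasdisk from two whole disks
# (sda/sdb are examples; verify device names before running!)
zpool create nasdisk mirror /dev/sda /dev/sdb
```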

hassio# zpool status
  pool: nasdisk
 state: ONLINE
  scan: scrub repaired 0B in 0 days 03:34:33 with 0 errors on Fri Dec 17 16:02:07 2021
config:

        NAME         STATE     READ WRITE CKSUM
        nasdisk      ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            sda      ONLINE       0     0     0
            sdb      ONLINE       0     0     0

Then, after a while, I added a second pool, znvm, that had only one disk: a logical volume on the internal SSD.
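That setup looked roughly like this. The volume group and LV names match the lvremove command later in the post; the size is illustrative:

```shell
# Carve a logical volume out of the SSD's volume group ...
lvcreate -L 200G -n zpool_nvm ubuntu-vg
# ... and build a single-disk pool on top of it
zpool create znvm /dev/ubuntu-vg/zpool_nvm
```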

But ZFS on top of a logical volume isn’t a particularly good idea. You risk confusing ZFS thoroughly, especially I/O-wise, and might end up with worse performance and more I/O overhead.

Since my NAS box is headless, I hate rebooting it, so I kept putting off the cleanup. I wanted to shrink the physical volume down to basically only the root volume, and then create a new partition that could hold my SSD zpool, znvm.

After some research today, I decided it was worth trying on a live machine. Warning: if you do this, it is probably an extremely good idea to take a full backup first!
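One way to take that backup, assuming you have somewhere to put the stream (the target path here is just an example):

```shell
# Snapshot everything in the pool recursively ...
zfs snapshot -r znvm@pre-migration
# ... and send the full replication stream to a file on another disk
# (example target path -- adjust to wherever you have space)
zfs send -R znvm@pre-migration > /nasdisk/backup/znvm-pre-migration.zfs
```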

Here are my steps:

  1. Replace the logical volume in the zpool with a partition on a spare disk. Luckily, I had one.
  2. Remove the logical volume that the zpool was previously residing on.
  3. Reduce the size of the physical volume to the new desired size, as seen from LVM.
  4. Reduce the size of the SSD partition the physical volume lives on to the size of the physical volume. Here, it is a good idea to add an extra GB or two for your peace of mind. Make the partition too small, and your volume group is corrupt!
  5. Add the new partition that the zpool should use on the SSD.
  6. Replace the temporary partition in the zpool with the new partition from step 5.
  7. And that’s actually it!
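Before step 3, it is worth checking how much of the physical volume is actually allocated – shrinking below the allocated extents will wreck the volume group. A quick sanity check might look like:

```shell
# Show total and free space on the physical volume;
# the new size must be at least (PSize - PFree)
pvs /dev/nvme0n1p3
# Show the per-LV extent layout on the device
pvdisplay -m /dev/nvme0n1p3
```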

Here are the commands:

zpool status znvm (and find the name of the volume)
fdisk /dev/sdc (temporary disk) - create partition sdc1
zpool replace znvm <old_volume_name> /dev/sdc1
zpool status znvm (and wait until it is finished!)
lvremove /dev/ubuntu-vg/zpool_nvm (that was used for znvm)
pvresize --setphysicalvolumesize 201G /dev/nvme0n1p3 (201G is the new size, nvme0n1p3 is the partition holding the volume)
cfdisk /dev/nvme0n1 (change the size of nvme0n1p3 to 205G and add a new nvme0n1p4 for the zpool that is as large as /dev/sdc1)
zpool replace znvm sdc1 /dev/nvme0n1p4
zpool status (until it is finished)
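Once the resilver has finished, a final sanity check doesn’t hurt:

```shell
# -x prints "all pools are healthy" if nothing is wrong
zpool status -x
# Kick off a scrub to verify checksums on the new partition
zpool scrub znvm
zpool status znvm
```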

And that is the whole operation!
