====== Move DB to new drive ======
  
<code bash>
# note the PARTUUID of the DB partition - it must be restored on the new drive
blkid
/dev/nvme0n1p3: PARTLABEL="DB" PARTUUID="b30d904f-94ce-4776-982a-db5947dac1cd"

ceph osd set noout
systemctl stop ceph-osd@3
systemctl disable ceph-osd@3

# dump the DB partition to an image file
ddrescue -f -n -vv /dev/disk/by-partuuid/c66f0c74-12b2-aa42-afc4-f4bd12bfa87c DB.img
</code>
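
Optionally, save the GPT layout before the swap so it can be restored verbatim on the new drive (a sketch; ''sgdisk'' and the backup filename are assumptions, not part of the original notes):
<code bash>
# back up the full partition table (sizes, types, GUIDs) to a file
sgdisk --backup=nvme0n1-gpt.bak /dev/nvme0n1
</code>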

Replace disk, restore partition layout.
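
If a GPT backup was taken as sketched above, the layout can be restored in one step (otherwise recreate the partitions by hand):
<code bash>
# write the saved partition table onto the replacement drive
sgdisk --load-backup=nvme0n1-gpt.bak /dev/nvme0n1
</code>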
  
<code bash>
ddrescue -f -n -vv DB.img /dev/nvme0n1p3
</code>
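
A quick integrity check after writing the image back (a sketch, not part of the original notes; compares only the first ''DB.img''-sized bytes of the partition):
<code bash>
# compare image and partition byte for byte, up to the image size
cmp -n "$(stat -c%s DB.img)" DB.img /dev/nvme0n1p3 && echo OK
</code>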

Restore the original PARTUUID:
<code bash>
gdisk /dev/nvme0n1
x        # enter the expert menu
c        # change partition GUID
3        # partition number
<UUID>   # paste the PARTUUID noted from blkid above
w        # write the table to disk and exit
</code>
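
The same change can be done non-interactively (an alternative sketch using ''sgdisk''; partition number and UUID as above):
<code bash>
# set the partition GUID of partition 3 in one shot
sgdisk --partition-guid=3:b30d904f-94ce-4776-982a-db5947dac1cd /dev/nvme0n1
</code>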

<code bash>
partprobe
</code>
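
To confirm the kernel now sees the restored PARTUUID (a quick check, not in the original notes):
<code bash>
blkid /dev/nvme0n1p3
</code>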

Verify the Ceph OSD config:
<code bash>
ceph-volume lvm list
</code>

Activate volume:
<code bash>
ceph-volume lvm activate --all
</code>
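
Once the OSD is back up, undo the preparation steps from the beginning (''noout'' and the disabled unit; a sketch of the inverse commands):
<code bash>
systemctl enable --now ceph-osd@3   # re-enable autostart and start the OSD
ceph osd unset noout                # allow data movement again
</code>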

===== LVM way =====

Alternatively, move the DB logical volume to the new device online with LVM:
<code bash>
# add the new partition as a PV and join it to the OSD's DB VG
pvcreate /dev/sdd7
vgextend ceph-bf4ade97-581a-4832-a517-2d83503fe01d /dev/sdd7

lvscan
# move only the DB LV's extents from the old PV to the new one
pvmove -n /dev/ceph-bf4ade97-581a-4832-a517-2d83503fe01d/osd-db-40769e33-2fb8-431d-8996-40870e35c3ee /dev/sdc7 /dev/sdd7

# detach and wipe the old PV
vgreduce ceph-bf4ade97-581a-4832-a517-2d83503fe01d /dev/sdc7
pvremove /dev/sdc7
</code>
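
The final placement can also be verified at the LVM level (a quick check, not from the original notes):
<code bash>
# the DB LV should now list /dev/sdd7 under Devices
lvs -a -o +devices ceph-bf4ade97-581a-4832-a517-2d83503fe01d
</code>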

Verify and activate the volume:
<code bash>
ceph-volume lvm list

  [db]          /dev/ceph-bf4ade97-581a-4832-a517-2d83503fe01d/osd-db-40769e33-2fb8-431d-8996-40870e35c3ee

      block device              /dev/ceph-feea9396-abeb-4b50-8cdc-3a35598ec651/osd-block-c794c0d0-0515-4bfb-bb54-00656fa8712d
      db device                 /dev/ceph-bf4ade97-581a-4832-a517-2d83503fe01d/osd-db-40769e33-2fb8-431d-8996-40870e35c3ee
      db uuid                   5zRs08-Hif1-hIFp-rNfO-WTah-26MG-ojnLVh
      devices                   /dev/sdd7

ceph-volume lvm activate --all
</code>


====== DRAFTS ======

<code bash>
zfs create -V 5GB rpool/data/DB
</code>
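
The forward direction of this draft (moving the DB from the NVMe partition onto the zvol) would mirror the rollback below; a sketch inferred from that command, with the zvol path taken from the ''ln'' line further down:
<code bash>
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-3 --devs-source /var/lib/ceph/osd/ceph-3/block.db --command bluefs-bdev-migrate --dev-target /dev/zd144
</code>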
  
Rollback:
<code bash>
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-3 --devs-source /var/lib/ceph/osd/ceph-3/block.db --command bluefs-bdev-migrate --dev-target /dev/nvme0n1p3
</code>

/usr/bin/ln -snf /dev/zd144 /var/lib/ceph/osd/ceph-3/block.db
  
  
  
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-12 --devs-source /var/lib/ceph/osd/ceph-12/block --devs-source /var/lib/ceph/osd/ceph-12/block.db --command bluefs-bdev-migrate --dev-target /dev/vdg1