vm:proxmox:ceph:move_db (revisions 2021/08/05 11:31 and 2023/01/31 21:22, niziak)
<code bash>
systemctl disable ceph-osd@3

ddrescue -f -n -vv c66f0c74-12b2-aa42-afc4-f4bd12bfa87c DB.img
</code>
  
Shut down the machine, replace the NVMe drive and restore the original partition layout. Then restore the DB image onto the new partition:
  
<code bash>
ddrescue -f -n -vv DB.img /dev/nvme0n1p3
</code>
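Optionally, sanity-check that the restored partition matches the image (a sketch; ''cmp -n'' limits the comparison to the image size, since the partition may be larger than the image):

<code bash>
# compare only the first <image size> bytes of the partition against the image
cmp -n "$(stat -c%s DB.img)" DB.img /dev/nvme0n1p3 && echo "restore matches"
</code>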
 +
Restore the original partition GUID with ''gdisk'' (expert menu ''x'', then ''c'' to change the partition GUID, ''3'' to select the partition, ''w'' to write):
<code bash>
gdisk /dev/nvme0n1
x
c
3
<UUID>
w
</code>
 +
Re-read the partition table:
<code bash>
partprobe
</code>
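To confirm the kernel now reports the restored GUID (a sketch):

<code bash>
# the PARTUUID column for partition 3 should show the restored GUID
lsblk -o NAME,PARTUUID /dev/nvme0n1
</code>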
 +
Verify the Ceph OSD configuration:
<code bash>
ceph-volume lvm list
</code>
 +
Activate the volume:
<code bash>
ceph-volume lvm activate --all
</code>
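After activation, re-enable and start the OSD service that was disabled at the beginning (OSD id 3 as above; a sketch):

<code bash>
systemctl enable ceph-osd@3
systemctl start ceph-osd@3
# watch cluster status until it returns to HEALTH_OK
ceph -s
</code>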
 +
===== LVM way =====
 +
<code bash>
# add the new partition as a PV and extend the DB volume group onto it
pvcreate /dev/sdd7
vgextend ceph-bf4ade97-581a-4832-a517-2d83503fe01d /dev/sdd7

lvscan
# move only the osd-db LV from the old PV to the new one
pvmove -n /dev/ceph-bf4ade97-581a-4832-a517-2d83503fe01d/osd-db-40769e33-2fb8-431d-8996-40870e35c3ee /dev/sdc7 /dev/sdd7

# remove the old PV from the VG
vgreduce ceph-bf4ade97-581a-4832-a517-2d83503fe01d /dev/sdc7
pvremove /dev/sdc7
</code>
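To confirm the DB LV now sits on the new PV before reducing the VG (a sketch; VG and LV names as above):

<code bash>
# the osd-db LV should now list /dev/sdd7 in its devices column
lvs -o lv_name,vg_name,devices
</code>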
 +
Verify and activate the volume:
<code bash>
ceph-volume lvm list

  [db]          /dev/ceph-bf4ade97-581a-4832-a517-2d83503fe01d/osd-db-40769e33-2fb8-431d-8996-40870e35c3ee

      block device              /dev/ceph-feea9396-abeb-4b50-8cdc-3a35598ec651/osd-block-c794c0d0-0515-4bfb-bb54-00656fa8712d
      db device                 /dev/ceph-bf4ade97-581a-4832-a517-2d83503fe01d/osd-db-40769e33-2fb8-431d-8996-40870e35c3ee
      db uuid                   5zRs08-Hif1-hIFp-rNfO-WTah-26MG-ojnLVh
      devices                   /dev/sdd7

ceph-volume lvm activate --all
</code>
 +
 +
====== DRAFTS: ======
 +
<code>
zfs create -V 5GB rpool/data/DB

ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-12 --devs-source /var/lib/ceph/osd/ceph-12/block --devs-source /var/lib/ceph/osd/ceph-12/block.db --command bluefs-bdev-migrate --dev-target /dev/vdg1
</code>
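After a migration, the BlueStore label on the target device can be inspected with ''show-label'' (a sketch; device path as in the draft above):

<code bash>
# dump the BlueStore label of the new DB device as JSON
ceph-bluestore-tool show-label --dev /dev/vdg1
</code>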