====== Move DB to bigger storage ======

E.g. to move from a 4GB partition to a 30GB partition in case of spillover:
<code>
1 OSD(s) experiencing BlueFS spillover

osd.3 spilled over 985 MiB metadata from 'db' device (3.8 GiB used of 4.0 GiB) to slow device
</code>
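
The messages above come from the cluster health checks and can be listed with:

<code bash>
ceph health detail | grep -i spillover
</code>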

<code>
lvdisplay
  --- Logical volume ---
  ...
</code>
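
To check how full the DB device actually is before resizing, the BlueFS perf counters can be queried on the OSD host (a minimal sketch; assumes ''jq'' is installed and the OSD admin socket is accessible):

<code bash>
# db_total_bytes / db_used_bytes describe the dedicated DB device;
# slow_used_bytes > 0 means metadata has spilled over to the main device
ceph daemon osd.3 perf dump | jq .bluefs
</code>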

===== Resize existing DB partition =====

Remove the separate DB and migrate its data back to the main storage:
<code bash>
# prevent rebalancing while the OSD is down
ceph osd set noout
systemctl stop ceph-osd@3.service

# note the OSD fsid and the block LV name for the migrate command below
cat /var/lib/ceph/osd/ceph-3/fsid
lvdisplay

# move DB data from the dedicated 'db' device back onto the main block device
ceph-volume lvm migrate --osd-id 3 --osd-fsid 024c05b3-6e22-4df1-b0af-5cb46725c5c8 --from db --target ceph-5582d170-f77e-495c-93c6-791d9310872c/osd-block-024c05b3-6e22-4df1-b0af-5cb46725c5c8
--> Migrate to existing, Source: ['--devs-source', '/var/lib/ceph/osd/ceph-0/block.db'] Target: /var/lib/ceph/osd/ceph-0/block
--> Migration successful.
</code>
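
To verify that the OSD no longer has a separate DB device, ''ceph-volume lvm list'' can be used (the osd.3 entry should now show only a ''[block]'' device, no ''[db]''):

<code bash>
ceph-volume lvm list
</code>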
+ | |||
+ | Create new PV,VG,LV for DB and attach it: | ||
+ | |||
+ | <code bash> | ||
+ | pvcreate /dev/nvme0n1p6 | ||
+ | vgcreate ceph-db-8gb /dev/nvme0n1p6 | ||
+ | lvcreate -n db-8gb -l 100%FREE ceph-db-8gb | ||
+ | |||
+ | ceph-volume lvm new-db --osd-id 0 --osd-fsid 024c05b3-6e22-4df1-b0af-5cb46725c5c8 --target ceph-db-8gb/db-8gb | ||
+ | --> Making new volume at /dev/ceph-db-8gb/db-8gb for OSD: 0 (/var/lib/ceph/osd/ceph-0) | ||
+ | Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block.db | ||
+ | Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 | ||
+ | --> New volume attached. | ||
+ | </code> | ||
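
Now move the DB data, which currently sits on the slow (main) device, onto the freshly attached DB volume. ''--from data db'' tells ceph-volume to collect RocksDB data from both listed source devices: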
+ | |||
+ | <code bash> | ||
ceph-volume lvm migrate --osd-id 0 --osd-fsid 024c05b3-6e22-4df1-b0af-5cb46725c5c8 --from data db --target ceph-db-8gb/db-8gb
--> Migrate to existing, Source: ['--devs-source', '/var/lib/ceph/osd/ceph-0/block'] Target: /var/lib/ceph/osd/ceph-0/block.db
--> Migration successful.
</code>
+ | |||
+ | <code bash> | ||
+ | systemctl start ceph-osd@3.service | ||
+ | ceph osd unset noout | ||
+ | </code> | ||
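
Finally, confirm the cluster is healthy again and the spillover warning is gone (it may take a moment to clear after the OSD restarts):

<code bash>
ceph -s
ceph health detail | grep -i spillover   # should print nothing once resolved
</code>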
+ | |||
+ | ===== OR: create new one ===== | ||
+ | |||
+ | |||
+ | Create new 41GB (41984MB) partition (/dev/nvme0n1p6) | ||
<code>
pvcreate /dev/nvme0n1p6
vgcreate ceph-db-40gb /dev/nvme0n1p6
lvcreate -n db-40gb -l 100%FREE ceph-db-40gb
</code>
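
Then attach the new LV and migrate the DB onto it, mirroring the steps above (a sketch; ''<ID>'' and ''<FSID>'' are placeholders for your OSD id and its fsid from ''/var/lib/ceph/osd/ceph-<ID>/fsid''):

<code bash>
ceph osd set noout
systemctl stop ceph-osd@<ID>.service

# attach the new LV as the DB device, then move RocksDB data onto it
ceph-volume lvm new-db --osd-id <ID> --osd-fsid <FSID> --target ceph-db-40gb/db-40gb
ceph-volume lvm migrate --osd-id <ID> --osd-fsid <FSID> --from data --target ceph-db-40gb/db-40gb

systemctl start ceph-osd@<ID>.service
ceph osd unset noout
</code>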