====== Issues ======
===== auth: unable to find a keyring =====
It is not possible to create a Ceph OSD, either from the web UI or from the command line (''pveceph osd create /dev/sdc''):
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
stderr: 2021-01-28T10:21:24.996+0100 7fd1a848f700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.bootstrap-osd.keyring: (2) No such file or directory
2021-01-28T10:21:24.996+0100 7fd1a848f700 -1 AuthRegistry(0x7fd1a0059030) no keyring found at /etc/pve/priv/ceph.client.bootstrap-osd.keyring, disabling cephx
The relevant keyring settings from ''ceph.conf'' (on Proxmox: ''/etc/pve/ceph.conf''):
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring
**ceph.conf variables:**
* **$cluster** - cluster name. For Proxmox it is ''ceph''
* **$type** - daemon process type: ''mds'', ''osd'', ''mon''
* **$id** - daemon or client identifier. For ''osd.0'' it is ''0''
* **$host** - hostname where the process is running
* **$name** - Expands to $type.$id, e.g. ''osd.2'' or ''client.bootstrap-osd''
* **$pid** - Expands to daemon pid
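With the ''[client]'' line above, ''$cluster.$name'' for the bootstrap user expands to ''ceph.client.bootstrap-osd'', which is exactly the path in the error. If ''ceph-conf'' is installed, the expansion can be checked directly (a quick sketch, expected output assumed):
<code bash>
# should print the keyring path the bootstrap client will try to open,
# i.e. /etc/pve/priv/ceph.client.bootstrap-osd.keyring
ceph-conf --name client.bootstrap-osd --lookup keyring
</code>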
**Solution:**
cp /var/lib/ceph/bootstrap-osd/ceph.keyring /etc/pve/priv/ceph.client.bootstrap-osd.keyring
Alternative: change the ''keyring'' path in ''ceph.conf'' instead of copying the file (see the sketch below).
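A possible form of that change (untested sketch, section name assumed): override the ''[client]'' default for just this one user so it points back at the bootstrap keyring location.
<code ini>
# sketch only: per-client override in ceph.conf
[client.bootstrap-osd]
        keyring = /var/lib/ceph/bootstrap-osd/$cluster.keyring
</code>
Either way, ''pveceph osd create /dev/sdc'' can be retried afterwards.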
===== Unit -.mount is masked. =====
Running command: /usr/bin/systemctl start ceph-osd@2
stderr: Failed to start ceph-osd@2.service: Unit -.mount is masked.
--> RuntimeError: command returned non-zero exit status: 1
It was caused by ''gparted'', which was not shut down correctly: while running it runtime-masks mount units and normally unmasks them again on exit.
* [[https://askubuntu.com/questions/1191596/unit-mount-is-masked|Unit -.mount is masked]]
* [[https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=948739|gparted should not mask .mount units]]
* [[https://unix.stackexchange.com/questions/533933/systemd-cant-unmask-root-mount-mount/548996]]
**Solution:**
systemctl --runtime unmask -- -.mount
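After unmasking, the command that failed should work again (''osd.2'' taken from the error above):
<code bash>
systemctl start ceph-osd@2
systemctl status ceph-osd@2
</code>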
To list runtime masked units:
ls -l /var/run/systemd/system | grep mount | grep '/dev/null' | cut -d ' ' -f 11
To unescape systemd unit names:
systemd-escape -u 'rpool-data-basevol\x2d800\x2ddisk\x2d0.mount'
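A more robust sketch for cleaning up everything ''gparted'' left behind (assumes the runtime masks are symlinks to ''/dev/null'' under ''/run/systemd/system'', as above):
<code bash>
# find every runtime-masked unit and unmask it again
for link in /run/systemd/system/*; do
    if [ "$(readlink "$link")" = "/dev/null" ]; then
        unit=$(basename "$link")
        echo "unmasking $unit"
        systemctl --runtime unmask -- "$unit"
    fi
done
</code>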
===== lock on rbd =====
root@pve1:~# rbd lock ls vm-201-disk-0
There is 1 exclusive lock on this image.
Locker            ID                          Address
client.310904979  auto 18446462598732841336   192.168.28.237:0/4057457529
root@pve1:~# rbd lock remove vm-201-disk-0 "auto 18446462598732841336" client.310904979
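Such a stale exclusive lock is typically left behind when the client (VM or node) died without releasing it. Before removing it, the watchers on the image can be checked (sketch; add ''-p <pool>'' if the image is not in the default pool):
<code bash>
# shows which clients are still watching the image; a live watcher usually
# means the lock is legitimately held and should not be removed
rbd status vm-201-disk-0
</code>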
===== mons are allowing insecure global_id reclaim =====
After the Ceph security update, ''ceph status'' shows the health warning ''mons are allowing insecure global_id reclaim''.
[[https://forum.proxmox.com/threads/ceph-nautilus-and-octopus-security-update-for-insecure-global_id-reclaim-cve-2021-20288.88038/|Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288]]
ceph config set mon auth_allow_insecure_global_id_reclaim false
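Per the linked advisory, enforcement should only be enabled once all clients and daemons have been updated. Until then the warning can be inspected or muted temporarily (sketch; the health code is taken from the advisory):
<code bash>
# see which clients/daemons still use insecure global_id reclaim
ceph health detail | grep -i global_id

# optionally silence the warning for a week while older clients are upgraded
ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED 1w
</code>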