vm:proxmox:issues:update63 [2020/11/26 20:28]
niziak created
vm:proxmox:issues:update63 [2020/11/27 20:47] (current)
niziak
====== Update 6.2 to 6.3 ======


===== failed to load local private key =====

The node is reachable over SSH, but the PVE services are not working correctly. The journal shows:
<code>
/etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1737.
</code>

The directory `/etc/pve` is empty: the cluster FS has not been mounted. Start it manually:
<code bash>systemctl start pve-cluster</code>
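
A quick way to see whether the cluster FS came up is to check for a mount on `/etc/pve`; a minimal sketch (the helper name is made up here, it is not part of PVE):

```shell
#!/bin/sh
# Minimal check sketch: pmxcfs exposes the cluster FS as a FUSE mount
# on /etc/pve, so /proc/mounts tells whether it is up.
# check_pmxcfs is a hypothetical helper name, not a PVE tool.
check_pmxcfs() {
    if grep -q " $1 " /proc/mounts; then
        echo "cluster FS mounted at $1"
    else
        echo "cluster FS NOT mounted at $1 - try: systemctl start pve-cluster"
    fi
}

check_pmxcfs /etc/pve
```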

Starting `pve-cluster` mounts the cluster FS, but errors still appear in the journal:
<code>
Nov 26 19:20:13 pve5 pmxcfs[31368]: [main] notice: unable to acquire pmxcfs lock - trying again
Nov 26 19:20:13 pve5 pmxcfs[31368]: [main] notice: unable to acquire pmxcfs lock - trying again
</code>
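
The "unable to acquire pmxcfs lock" messages can mean another pmxcfs instance is still alive and holding the lock. One non-destructive check worth trying before a reboot (an assumption on my part; this was not verified in the incident described here):

```shell
#!/bin/sh
# Hedged check (assumption: a stale pmxcfs process survived the
# upgrade and still holds the lock). Nothing here is destructive.
if pgrep -a pmxcfs; then
    echo "a pmxcfs process is still running - stop it cleanly first:"
    echo "    systemctl stop pve-cluster"
else
    echo "no pmxcfs process found - the lock holder is something else"
fi
```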

A reboot solves the problem :(

But another problem appears, this time with Ceph (see below):

<code>
lis 26 20:07:01 pve3 kernel: libceph: mon4 (1)192.168.28.235:6789 socket closed (con state CONNECTING)
lis 26 20:07:02 pve3 kernel: libceph: mon4 (1)192.168.28.235:6789 socket closed (con state CONNECTING)
lis 26 20:07:03 pve3 kernel: libceph: mon4 (1)192.168.28.235:6789 socket closed (con state CONNECTING)
lis 26 20:07:08 pve3 kernel: libceph: mon0 (1)192.168.28.230:6789 socket closed (con state OPEN)
lis 26 20:07:13 pve3 kernel: libceph: mon2 (1)192.168.28.233:6789 socket error on write
lis 26 20:07:14 pve3 kernel: libceph: mon2 (1)192.168.28.233:6789 socket error on write
lis 26 20:07:15 pve3 kernel: libceph: mon2 (1)192.168.28.233:6789 socket error on write
lis 26 20:07:16 pve3 kernel: libceph: mon4 (1)192.168.28.235:6789 socket closed (con state CONNECTING)
lis 26 20:07:17 pve3 kernel: libceph: mon4 (1)192.168.28.235:6789 socket closed (con state CONNECTING)
lis 26 20:07:18 pve3 kernel: libceph: mon4 (1)192.168.28.235:6789 socket closed (con state CONNECTING)
lis 26 20:07:18 pve3 mount[7891]: mount error 110 = Connection timed out
lis 26 20:07:18 pve3 systemd[1]: mnt-pve-cephfs.mount: Mount process exited, code=exited, status=32/n/a
lis 26 20:07:18 pve3 systemd[1]: mnt-pve-cephfs.mount: Failed with result 'exit-code'.
lis 26 20:07:18 pve3 systemd[1]: Failed to mount /mnt/pve/cephfs.
lis 26 20:07:18 pve3 pvestatd[2923]: mount error: See "systemctl status mnt-pve-cephfs.mount" and "journalctl -xe" for details.
lis 26 20:07:18 pve3 kernel: ceph: No mds server is up or the cluster is laggy
lis 26 20:07:24 pve3 pvestatd[2923]: got timeout
lis 26 20:07:24 pve3 pvestatd[2923]: status update time (99.487 seconds)
</code>
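
With libceph reporting closed sockets on several monitors, it is worth checking monitor quorum and MDS state directly. A sketch using the standard `ceph` CLI, guarded so it is a no-op on hosts without it:

```shell
#!/bin/sh
# Diagnostic sketch: check monitor quorum and MDS state when the
# CephFS mount times out. Guarded for hosts without the ceph CLI.
if command -v ceph >/dev/null 2>&1; then
    ceph -s        # overall health: are the monitors in quorum?
    ceph mon stat  # which monitors are reachable
    ceph mds stat  # "No mds server is up" points at the MDS daemons
else
    echo "ceph CLI not available on this host"
fi
```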

<code bash>
systemctl status ceph-{mon,mgr}@pve5
● ceph-mon@pve5.service - Ceph cluster monitor daemon
   Loaded: loaded (/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: enabled)
  Drop-In: /usr/lib/systemd/system/ceph-mon@.service.d
           └─ceph-after-pve-cluster.conf
   Active: inactive (dead)

● ceph-mgr@pve5.service - Ceph cluster manager daemon
   Loaded: loaded (/lib/systemd/system/ceph-mgr@.service; enabled; vendor preset: enabled)
  Drop-In: /usr/lib/systemd/system/ceph-mgr@.service.d
           └─ceph-after-pve-cluster.conf
   Active: inactive (dead)
</code>

The mon and mgr daemons are inactive; restart the Ceph targets:

<code bash>
systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target
systemctl restart ceph-mds.target
</code>
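
After restarting the targets, it is worth confirming that the local daemons actually came back and then retrying the failed CephFS mount. A sketch (node name `pve5` is taken from the status output above; adjust for your node):

```shell
#!/bin/sh
# Verification sketch after restarting the Ceph targets (node name
# pve5 is taken from the status output above; adjust for your node).
if command -v systemctl >/dev/null 2>&1; then
    systemctl --no-pager status 'ceph-mon@pve5' 'ceph-mgr@pve5' || true
    # once the mons answer, retry the failed CephFS mount unit:
    # systemctl restart mnt-pve-cephfs.mount
else
    echo "systemctl not available on this host"
fi
```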

    * [[https://forum.proxmox.com/threads/no-gui-nor-ssh-after-upgrade-6-1-6-3-needs-manual-restart-of-services.79688/|No GUI nor SSH after upgrade 6.1 -> 6.3 - needs manual restart of services]]
    * [[https://forum.proxmox.com/threads/upgrade-to-proxmox-6-3-failure.79685/|Upgrade to ProxMox 6.3 failure]]

Yesterday evening (26.11.2020) Proxmox released some fixes to the Ceph packages:

{{:vm:proxmox:issues:pasted:20201127-203813.png}}