Ceph is built for scale and works great in large clusters. In a small cluster every node will be heavily loaded.
krbd
writeback cache on VMs (possible data loss on consumer SSDs)
Setting the pool to 512 PGs was not possible because of the 250 PG/OSD limit.
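If more PGs per OSD are really needed, the limit can in principle be raised via the mon_max_pg_per_osd option (a sketch only; the value 300 is an arbitrary example and raising it increases per-OSD load):
ceph config set global mon_max_pg_per_osd 300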
ceph mgr module enable balancer
ceph balancer on
ceph balancer mode upmap
If possible, use the balancer. In upmap mode it overrides the default CRUSH placement for individual PGs.
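To check that the balancer is active and which mode it uses (assuming a release that ships the balancer module):
ceph balancer status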
The pg_autoscaler is better used in warn mode, so it does not put unexpected load on the cluster when the PG number changes.
ceph mgr module enable pg_autoscaler
# ceph osd pool set <pool> pg_autoscale_mode <mode>
ceph osd pool set rbd pg_autoscale_mode warn
It is possible to set a desired/target size for a pool. This prevents the autoscaler from moving data every time new data is stored.
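For example (a sketch assuming the rbd pool; target_size_ratio and target_size_bytes are the standard autoscaler hints, the values are placeholders):
ceph osd pool set rbd target_size_ratio 0.8
# or with an absolute estimate of the expected pool size:
ceph osd pool set rbd target_size_bytes 1T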
ceph -s
ceph osd df - shows standard deviation of OSD utilization
There is no built-in tool to show primary PG balancing. A script is available at https://github.com/JoshSalomon/Cephalocon-2019/blob/master/pool_pgs_osd.sh
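A rough alternative sketch for counting acting primaries per OSD (column layout of pg dump differs between Ceph versions, so the last-column assumption may need adjusting):
ceph pg dump pgs_brief 2>/dev/null | awk 'NR>1 {print $NF}' | sort -n | uniq -c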