Ceph performance

Performance tips

Ceph is built for scale and works great in large clusters. In a small cluster, every node will be heavily loaded.

performance on small clusters

balancer

# enable the balancer mgr module
ceph mgr module enable balancer
# activate automatic balancing
ceph balancer on
# use upmap-based optimization instead of the default crush-compat mode
ceph balancer mode upmap
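
To verify that the balancer is active and which mode it uses:

ceph balancer status

Note: upmap mode requires all clients to be Luminous or newer. If switching to it fails, the minimum required client version may need to be raised first:

ceph osd set-require-min-compat-client luminous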

CRUSH reweight

If possible, use the balancer instead of manual reweighting.

Reweighting manually overrides the default CRUSH weight, which is derived from the disk capacity.
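
A minimal sketch, with a hypothetical OSD id and weight values:

# permanently change the CRUSH weight of osd.2
ceph osd crush reweight osd.2 1.8
# or apply a temporary override factor between 0 and 1
ceph osd reweight 2 0.9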

PG autoscaler

It is better to use it in warn mode, so that it does not put unexpected load on the cluster when the PG number changes.

ceph mgr module enable pg_autoscaler
#ceph osd pool set <pool> pg_autoscale_mode <mode>
ceph osd pool set rbd pg_autoscale_mode warn
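
To review what the autoscaler recommends (or warns about) for each pool:

ceph osd pool autoscale-status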

It is possible to set the expected/target size of a pool. This prevents the autoscaler from moving data every time new data is stored.
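
A minimal sketch, assuming the rbd pool is expected to grow to roughly 100 TiB (hypothetical value):

# hint the eventual size of the pool
ceph osd pool set rbd target_size_bytes 100T
# or express the expected share of total capacity as a ratio
ceph osd pool set rbd target_size_ratio 0.5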

check cluster balance

ceph -s
ceph osd df   # shows standard deviation of OSD utilization
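
As a quick sketch, assuming the MIN/MAX VAR and STDDEV summary row is the last line of the ceph osd df output, it can be extracted with:

ceph osd df | tail -n 1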

There is no built-in tool to show primary PG balancing. A script that does this is available at https://github.com/JoshSalomon/Cephalocon-2019/blob/master/pool_pgs_osd.sh

performance on slow HDDs