

Ceph performance

Performance tips

Ceph is built for scale and works great in large clusters. In a small cluster, every node is heavily loaded.

  • adapt the number of PGs to the number of OSDs to spread traffic evenly (see the sketch after this list)
  • use krbd
  • enable writeback cache on VMs (possible data loss on consumer SSDs)
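A common rule of thumb (from the Ceph PG calculator) is a total of roughly (100 × number of OSDs) / replica count PGs per pool, rounded to a power of 2. A minimal sketch, assuming a hypothetical replicated pool named vm-pool on 12 OSDs with size 3 — (100 × 12) / 3 = 400, rounded up to 512:

ceph osd pool set vm-pool pg_num 512
ceph osd pool set vm-pool pgp_num 512   # recent releases adjust pgp_num automatically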

performance on small clusters

  • the number of PGs should be a power of 2 (or halfway between two powers of 2)
  • the same utilization (% full) on every device
  • the same number of PGs per OSD = the same number of requests per device (see the commands below)
  • the same number of primary PGs per OSD = read operations spread evenly
    • the primary PG holds the original/first copy of the data; the others are replicas. Reads are served by the primary PG.
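To check how evenly PGs and primaries are spread across OSDs, the standard ceph CLI provides:

ceph osd df             # utilization and PG count per OSD
ceph pg dump pgs_brief  # acting set and primary OSD for every PG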

balancer

ceph mgr module enable balancer   # enable the balancer mgr module
ceph balancer mode upmap          # set the mode before turning the balancer on
ceph balancer on
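Note that upmap mode requires all clients to speak at least the Luminous protocol; afterwards, progress can be checked with the balancer status command:

ceph osd set-require-min-compat-client luminous
ceph balancer status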

CRUSH reweight

If possible, use the balancer instead.

Reweighting overrides the default CRUSH weight (normally derived from device size) in order to shift data off of or onto specific OSDs.
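A minimal sketch, assuming a hypothetical over-utilized OSD with id 5. ceph osd crush reweight changes the permanent CRUSH weight, while ceph osd reweight sets a temporary override:

ceph osd crush reweight osd.5 1.6   # permanent CRUSH weight (usually ~ device size in TiB)
ceph osd reweight 5 0.9             # temporary override between 0.0 and 1.0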

performance on slow HDDs