  * same number of primary PG per OSD = read operations spread evenly
    * primary PG - original/first PG - others are replicas. Primary PG is used for read.
  * use relatively more PGs on a small cluster than you would on a big one - better balance, but handling PGs consumes resources (RAM); see the sketch below
    * e.g. for 7 OSDs x 2 TB the PG autoscaler recommends 256 PGs. After changing to 384, IOPS increased drastically and latency dropped. Setting 512 PGs wasn't possible because of the 250 PG/OSD limit.
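
A minimal sketch of how such a change could be applied, assuming a pool named "rbd" and the values from the example above (pool name and numbers are placeholders, adjust to your cluster):

<code bash>
# check current PG count and what the autoscaler recommends
ceph osd pool get rbd pg_num
ceph osd pool autoscale-status

# stop the autoscaler from undoing a manual change on this pool
ceph osd pool set rbd pg_autoscale_mode off

# raise the PG count (on Nautilus and later pgp_num follows automatically)
ceph osd pool set rbd pg_num 384

# the default limit is 250 PGs per OSD; it can be raised,
# at the cost of more RAM per OSD daemon
ceph config set global mon_max_pg_per_osd 300
</code>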
  
=== balancer ===
  
It is possible to set a desired/target size for a pool. This prevents the autoscaler from moving data every time new data is stored.
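
A hedged sketch of setting a target size, assuming a pool named "rbd" and an expected final size of 2 TiB (both placeholder values):

<code bash>
# tell the autoscaler how large the pool is expected to become
ceph osd pool set rbd target_size_bytes 2T

# alternatively, express it as a fraction of total cluster capacity
ceph osd pool set rbd target_size_ratio 0.2

# verify the autoscaler's new recommendation
ceph osd pool autoscale-status
</code>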

==== check cluster balance ====

  * ceph -s
  * ceph osd df - shows standard deviation of OSD utilization

There are no built-in tools to show primary PG balancing. A helper script is available at https://github.com/JoshSalomon/Cephalocon-2019/blob/master/pool_pgs_osd.sh
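
Purely as an illustration (not part of the original notes), a rough per-OSD count of primary PGs can also be derived from ceph pg dump; the exact column layout of pgs_brief output may differ between Ceph releases:

<code bash>
# count how many PGs each OSD is acting primary for
# (assumes ACTING_PRIMARY is the last, numeric column of pgs_brief output)
ceph pg dump pgs_brief 2>/dev/null \
  | awk '$NF ~ /^[0-9]+$/ {print $NF}' \
  | sort -n | uniq -c
</code>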
  
==== performance on slow HDDs ====