ceph -s
ceph -w
ceph df
ceph osd tree
ceph osd df tree
rados df
rbd ls
rbd du
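Without arguments these operate on the default pool named rbd; a pool (or pool/image) can be given explicitly. The pool test and image myimage below only illustrate the form and match the names used further down in these notes:
rbd -p test ls
rbd du test/myimage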
ceph iostat
ceph osd perf
rbd perf image iostat    # and wait 30 sec
rbd perf image iotop     # and wait 30 sec
rbd du
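ceph iostat is provided by the iostat manager module; if the command reports that the module is not enabled, switch it on first (standard mgr module commands):
ceph mgr module ls | grep iostat    # check whether the module is already enabled
ceph mgr module enable iostat
ceph iostat                         # Ctrl-C to stop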
Hints from https://yourcmc.ru/wiki/Ceph_performance#Test_your_Ceph_cluster:
ceph osd pool create test
rados -p test bench 10 write --no-cleanup
rados -p test bench 10 seq
rados -p test bench -t 4 10 seq
rados -p test bench 10 rand
rados bench -p test 60 write -b 4M -t 16 --no-cleanup
rados bench -p test 60 seq -t 16
rados bench -p test 60 rand -t 16
rados -p test cleanup
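For RBD-level testing the linked wiki also recommends fio with its rbd engine. A minimal sketch, assuming fio is installed, the client.admin keyring is readable, and the pool test with the image myimage (created below) exists; the job name rbd-4k-randwrite is arbitrary:
fio --name=rbd-4k-randwrite --ioengine=rbd --clientname=admin --pool=test --rbdname=myimage \
    --rw=randwrite --bs=4k --iodepth=128 --numjobs=1 \
    --runtime=60 --time_based --group_reporting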
See also: Chapter 9. Benchmarking Performance (Red Hat Ceph Storage Administration Guide)
# Image 1G:
rbd create test/myimage --size 1024
#TBD ...
#rbd device map test/myimage --cluster
#rbd device unmap /dev/rbdX
rbd rm test/myimage
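Without mapping the image, I/O can also be driven directly through librbd with rbd bench (a minimal sketch using the option names of recent Ceph releases; run it against the 1 GiB image created above before removing it):
rbd bench --io-type write --io-size 4K --io-threads 16 --io-total 256M --io-pattern rand test/myimage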