Ceph Cluster Benchmark
fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/sda1):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 92.43 MB/s   (23.1k) | 790.79 MB/s  (12.3k)
Write      | 92.67 MB/s   (23.1k) | 794.95 MB/s  (12.4k)
Total      | 185.10 MB/s  (46.2k) | 1.58 GB/s    (24.7k)
           |                      |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 1.21 GB/s     (2.3k) | 1.21 GB/s     (1.1k)
Write      | 1.28 GB/s     (2.5k) | 1.29 GB/s     (1.2k)
Total      | 2.50 GB/s     (4.8k) | 2.50 GB/s     (2.4k)
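
The exact fio invocation behind the table above is not recorded in this gist. A mixed 50/50 random read/write run at one block size could be reproduced with something along these lines (the test file path, size, queue depth and job count are assumptions, not the settings actually used here):

    fio --name=rw_4k --filename=/mnt/test/fio.bin --size=2G --direct=1 \
        --ioengine=libaio --rw=randrw --rwmixread=50 --bs=4k \
        --iodepth=64 --numjobs=2 --runtime=30 --time_based --group_reporting

Repeating the run with --bs=64k, --bs=512k and --bs=1m gives the other three columns.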
~# sudo echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync
3
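
The rados bench runs below target a pool named testbench, which has to exist beforehand; it could be created with something like the following (the placement-group count is an assumption, pick one that suits the cluster):

    ceph osd pool create testbench 128 128

Caches are dropped (above) before the write run, and the write run keeps its objects (--no-cleanup) so that the seq and rand read runs afterwards have data to read back.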
~# rados bench -p testbench 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_bender_2468063
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       333       317   1267.59      1268   0.0440228   0.0491082
    2      16       633       617   1233.68      1200   0.0444198   0.0465066
    3      16       884       868   1157.08      1004   0.0271375    0.042762
    4      16      1058      1042   1041.77       696   0.0231089   0.0413995
    5      16      1328      1312   1049.37      1080   0.0256522   0.0608209
    6      16      1657      1641   1093.77      1316   0.0468416   0.0575925
    7      16      1963      1947   1112.34      1224   0.0312576   0.0547133
    8      16      2207      2191   1095.27       976   0.0252149   0.0527245
    9      16      2447      2431   1080.22       960   0.0389077   0.0590721
   10      14      2775      2761   1104.18      1320   0.0525948   0.0575061
   11       4      2775      2771   1007.43        40   0.0532433   0.0574472
   12       4      2775      2771   923.479         0           -   0.0574472
Total time run: 12.5246
Total writes made: 2775
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 886.255
Stddev Bandwidth: 458.946
Max bandwidth (MB/sec): 1320
Min bandwidth (MB/sec): 0
Average IOPS: 221
Stddev IOPS: 114.736
Max IOPS: 330
Min IOPS: 0
Average Latency(s): 0.0613645
Stddev Latency(s): 0.220074
Max latency(s): 3.16551
Min latency(s): 0.0170968
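
Sanity check on the summary: rados bench writes 4 MiB (4194304 byte) objects and reports bandwidth in those same units, so the average IOPS works out to bandwidth divided by 4: 886.255 / 4 ≈ 221, matching the reported value. The large bandwidth stddev (458.946) and the 3.17 s max latency appear to come from the stall at seconds 11 and 12 above, where only the last few in-flight writes were draining.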
~# rados bench -p testbench 10 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       346       330   1317.51      1320   0.0129957   0.0457791
    2      16       659       643   1284.66      1252     0.01609   0.0479054
    3      16      1007       991   1320.34      1392    0.169129   0.0469608
    4      15      1350      1335   1334.04      1376   0.0292461   0.0467108
    5      15      1697      1682   1344.56      1388   0.0324939   0.0464454
    6      16      2051      2035   1355.62      1412    0.044202   0.0461781
    7      16      2423      2407   1374.47      1488   0.0118308   0.0455626
    8      16      2764      2748   1373.13      1364   0.0284042    0.045564
Total time run: 8.08987
Total reads made: 2775
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 1372.09
Average IOPS: 343
Stddev IOPS: 17.1298
Max IOPS: 372
Min IOPS: 313
Average Latency(s): 0.0457265
Max latency(s): 0.842745
Min latency(s): 0.00694753
~# rados bench -p testbench 10 rand
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       335       319   1275.51      1276   0.0215354   0.0463097
    2      16       639       623   1245.63      1216   0.0538976   0.0423329
    3      16       938       922   1229.01      1196   0.0228458   0.0363778
    4      16      1230      1214   1213.71      1168    0.027061   0.0513062
    5      16      1576      1560   1247.71      1384   0.0111833   0.0500291
    6      16      1911      1895   1262.87      1340   0.0481793   0.0481103
    7      16      2240      2224   1270.44      1316   0.0109283   0.0456652
    8      15      2480      2465    1232.1       964   0.0151858   0.0508388
    9      15      2818      2803   1245.36      1352  0.00531213   0.0503912
   10      14      3134      3120   1247.61      1268   0.0826304   0.0503952
Total time run: 10.0424
Total reads made: 3134
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 1248.31
Average IOPS: 312
Stddev IOPS: 30.5469
Max IOPS: 346
Min IOPS: 241
Average Latency(s): 0.0503998
Max latency(s): 2.96408
Min latency(s): 0.00370473
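
Because the write run used --no-cleanup, the benchmark objects are still in the pool once the read tests finish. They can be removed afterwards with:

    rados -p testbench cleanup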