9 Oct 2024 · admin, 2,511 posts, October 8, 2024, 9:14 pm: Not too alarming. Some options: 1) ignore the warning; 2) add approximately 20% more OSDs; 3) from the Ceph Configuration menu, raise the PG-per-OSD warning threshold.

25 Feb 2024 · pools: 10 (created by rados); PGs per pool: 128 (recommended in the docs); OSDs: 4 (2 per site). 10 * 128 / 4 = 320 PGs per OSD. This ~320 could be the PGs-per-OSD figure that triggers the warning.
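One detail worth adding to that arithmetic: Ceph's health check counts PG replicas on each OSD, not just primaries, so the figure it compares against the limit is pools * pg_num * replica size / OSDs. A minimal shell sketch of both calculations (the replica size of 2 is an assumption, read off the "2 per site" layout above):

# pools=10; pg_per_pool=128; osds=4; size=2
# echo $(( pools * pg_per_pool / osds ))         # primaries only: the 320 above
320
# echo $(( pools * pg_per_pool * size / osds ))  # counting replicas: what Ceph checks
640

With replication factored in, a cluster like this sits at roughly 640 PGs per OSD, about twice what the raw division suggests.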
Bug #46338: Inconsistency with pg number for pool creation in …
4 Nov 2024 · Still have the warning of "too many PGs per OSD (357 > max 300)". Also noticed the number of PGs is now 2048 instead of the usual 1024, even though the …
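A PG count that changes on its own like that is often the pg_autoscaler, which on Nautilus and later can raise pg_num per pool automatically; that is only an assumption about this particular cluster, but the live numbers are easy to check with the standard CLI (the pool name "rbd" below is just a placeholder):

# ceph osd pool ls detail        # pg_num and pgp_num for every pool
# ceph osd pool get rbd pg_num   # pg_num for a single pool
# ceph osd df                    # the PGS column is the live PG count per OSD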
Ceph failure: too many PGs per OSD - 波神 - 博客园
14 Mar 2024 · According to the Ceph documentation, 100 PGs per OSD is the optimal amount to aim for. With this in mind, we can use the following calculation to work out how many PGs each pool should have …

19 Nov 2024 · # ceph -s
    cluster 1d64ac80-21be-430e-98a8-b4d8aeb18560
    health HEALTH_WARN   <-- the reported problem
        too many PGs per OSD (912 > max 300)
    monmap e1: 1 mons at {node1=109.105.115.67:6789/0}, election epoch 4, quorum 0 node1
    osdmap e49: 2 osds: 2 up, 2 in, flags sortbitwise,require_jewel_osds
    pgmap v1256: 912 pgs, 23 pools, 4503 bytes …

too many PGs per OSD (2549 > max 200)
^^^^^ This is the issue. A temporary workaround is to bump the hard_ratio and perhaps restart the OSDs afterwards (or add a ton of OSDs so the …
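To put names to the thresholds quoted in these messages: pre-Luminous releases warn through mon_pg_warn_max_per_osd (default 300, matching the "max 300" lines), while Luminous and later use mon_max_pg_per_osd (default 200, matching "max 200"); the hard_ratio the last post mentions is osd_max_pg_per_osd_hard_ratio. A sketch of the temporary bump on a cluster with the central config database (Mimic or later); the values 400 and 3 are illustrative, not recommendations:

# ceph config set global mon_max_pg_per_osd 400
# ceph config set global osd_max_pg_per_osd_hard_ratio 3

On older releases, set the equivalent option in ceph.conf and restart the daemons, as the post above suggests.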
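Longer term, the fix is to size the pools so the warning never fires. Under the 100-PGs-per-OSD guideline cited above, the usual rule of thumb is total PGs ≈ OSDs * 100 / replica count, split across the pools and rounded to a power of two. A sketch with the numbers from the 10-pool, 4-OSD cluster earlier on this page (replica size 2 assumed again):

# osds=4; size=2; pools=10
# echo $(( osds * 100 / size ))           # PG budget for the whole cluster
200
# echo $(( osds * 100 / size / pools ))   # per pool, before rounding to a power of two
20

Rounded to 16 or 32 PGs per pool, that is far below the 128 per pool used above; note that pg_num can only be decreased on Nautilus or later.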