Parth Arora
The Ceph-side values are correct: `TOTAL 2.8 TiB 1.1 TiB 1.0 TiB 493 MiB 16 GiB 1.7 TiB 40.42`. So it is probably the dashboard that has the error,...
From your screenshots:
```
                  Available  Used  Total
Previous values:  1.51       1.23  2.73
New values:       1.51       1.33  2.83
From ceph side:   1.7        1.1   2.8   (osd df tree, new values)
```
Which...
But if you look at the Ceph output, the values are correct:
```
From ceph side: 1.7 1.1 2.8 (osd df tree, new values)
```
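As a quick sanity check on the numbers above (a sketch; the values are the TiB figures copied from the screenshots and from `ceph osd df tree`): each source is internally consistent (available + used ≈ total, within rounding), so the disagreement is between the dashboard and Ceph, not a rounding error inside either one.

```python
# Sanity-check the TiB values quoted in this thread.
# (avail, used, total) per source -- copied from the comments above.
sources = {
    "dashboard (previous)": (1.51, 1.23, 2.73),
    "dashboard (new)":      (1.51, 1.33, 2.83),
    "ceph osd df tree":     (1.7,  1.1,  2.8),
}

for name, (avail, used, total) in sources.items():
    # allow slack for values rounded to two decimal places
    consistent = abs((avail + used) - total) < 0.05
    print(f"{name}: avail+used={avail + used:.2f}, total={total}, consistent={consistent}")
```

Every row prints `consistent=True`, which points at how the dashboard derives its figures from Ceph rather than at the figures themselves.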
I think Ceph needs to include the exporter in the logrotate file:
```
sh-4.4# cat ceph
/var/log/ceph/ceph-mon.a.log {
    rotate 7
    daily
    maxsize 500M
    compress
    sharedscripts
    postrotate
        killall -q -1 ceph-mon ceph-mgr...
```
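For illustration, a sketch of what the updated entry might look like, with the exporter appended to the `killall` list in `postrotate`. The daemon list and the `ceph-exporter` process name here are assumptions, not the exact contents of the real file:

```
/var/log/ceph/*.log {
    rotate 7
    daily
    maxsize 500M
    compress
    sharedscripts
    postrotate
        # -1 sends SIGHUP so each daemon reopens its log file after rotation;
        # ceph-exporter (assumed process name) is added to the list
        killall -q -1 ceph-mon ceph-mgr ceph-osd ceph-mds ceph-exporter
    endscript
    missingok
}
```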
Correct, that needs to be done too. But first Ceph needs to update the logrotate file to signal the exporter pod, @avanthakkar. Or we can do it from Rook too by...
For Ceph-related issues (running without Rook), in the future we would say: open Ceph trackers.
Still a priority
We can continue with this PR https://github.com/rook/rook/pull/14465, and can have a separate setting for the provisioner. Do we need this for Helm charts too?