Jan Fajerski

Results 36 comments of Jan Fajerski

hmm I just saw this: I ran `salt '*' cmd.run 'for d in b c d e f; do ceph-volume lvm zap --destroy /dev/vd$d; done'` from the salt master and...

Here is the issue:

```
[2019-09-17 11:01:39,104][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 148,...
```
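The "exception caught by decorator" line in that traceback comes from a wrapper pattern: the entry point is wrapped in a decorator that logs any exception before letting it propagate. A minimal sketch of that pattern, assuming illustrative names (`catches`, `newfunc`, `zap`) rather than ceph-volume's actual implementation:

```python
import functools
import logging

logger = logging.getLogger("ceph_volume")

def catches(f):
    """Log any exception raised by f, then re-raise it."""
    @functools.wraps(f)
    def newfunc(*a, **kw):
        try:
            return f(*a, **kw)
        except Exception:
            # Logs the full traceback, matching the "exception caught by
            # decorator" line seen in the ceph-volume log above.
            logger.exception("exception caught by decorator")
            raise
    return newfunc

@catches
def zap(device):
    # Stand-in for the real zap logic; always fails for illustration.
    raise RuntimeError("device %s is busy" % device)
```

Calling `zap("/dev/vdb")` logs the traceback and then re-raises, so the caller still sees the original `RuntimeError`.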

> so, it would seem, that in the CI tests which are now passing with the tmp fix, the OSDs are being removed but the underlying disk is not really...

ok I can confirm that the ceph build fixes this issue. However, there still seems to be something up with purge, where the OSDs are not stopped, but once they are...

hmm wondering if we should actually do this. We might still want to monitor the cluster when purging/recreating. We could add another step, though I can see the downside of...

The scrape targets should be updated according to your new setup when you run stage 2. I'd actually be interested in how that goes. If you don't want to mess...

The more I think about this, the less I like the idea of removing grafana/prometheus in purge. If a user intends to re-deploy, they will want to monitor. How about...

Does stage 5 produce any error messages? Do you have a stage 5 output?

Ok so this sounds like a bug to me. The mds should not have capabilities to change its own settings. We want to limit this to the admin keyring(s).
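The capability split described above can be sketched as keyring entries. This is a hedged config fragment; the daemon id (`mds.a`) and caps strings are illustrative, not taken from the cluster in question:

```
# mds keyring: standard mds profile only, no rights to change cluster settings
[mds.a]
    caps mds = "allow"
    caps mon = "allow profile mds"
    caps osd = "allow rwx"

# admin keyring: full capabilities, so config changes require this key
[client.admin]
    caps mds = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
```

With this split, an mds that tries to modify its own settings is rejected by the mon, and only a client holding the admin keyring can make the change.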

So I think the problem is that ceph reports IOP/s as a running average, not as a current value. This does not work well with prometheus and grafana. At the...
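The mismatch described above can be sketched numerically: a running average over the daemon's whole lifetime smooths away recent changes, while an instantaneous rate computed from cumulative op counters (the way Prometheus's `rate()` works over a scrape interval) tracks current load. The numbers below are made up for illustration:

```python
def running_average(total_ops, uptime_s):
    """IOPS averaged over the whole daemon lifetime."""
    return total_ops / uptime_s

def instantaneous_rate(ops_prev, ops_now, dt_s):
    """IOPS over the last interval, from two cumulative counter samples."""
    return (ops_now - ops_prev) / dt_s

# Daemon has been up 1 hour and served 360000 ops, but is now idle:
avg = running_average(360_000, 3600)            # -> 100.0 "IOPS" reported
cur = instantaneous_rate(360_000, 360_000, 15)  # -> 0.0 IOPS actually happening
```

The running average still shows 100 IOPS on an idle cluster, which is why graphing it directly in grafana is misleading.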