Joshua Schmid
Relevant: #934
That's the expected behavior and part of the precautions that have to be taken during critical situations.
Martin-Weiss wrote on Wed, 28 Feb, 13:16:
> 1. in this case (if that is expected) it should error out instead of waiting forever and hang while waiting for all...

> This is nothing we can decide by software automatically, I believe. In a large multi-datacenter cluster this is different than in a small cluster, and it also might...
tackled with #1174
Not implemented yet; will put this on my plate.
That's _way_ easier to implement once Prometheus/Grafana are organized in a separate role. Adding to card.
[Update Monitoring Setup](https://trello.com/c/dADupHdP/7-update-monitoring-setup)
[Tasklist for integration tests](https://trello.com/c/Mletjdor/5-tasklist-for-integration-tests)
@Martin-Weiss We do indeed check for RAID controllers one level deeper, in `cephdisks.py`: `salt -C 'I@roles:storage' cephdisks.list`. This output is used in the proposal runner. Internally, cephdisks uses `hwinfo` to determine...
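For illustration only, a minimal sketch (not DeepSea's actual `cephdisks.py`) of how a module could parse `hwinfo`-style disk records. The sample text, the function name, and the field selection are all assumptions for the example; the real module does considerably more filtering (removable media, partitions, controller details, etc.).

```python
# Hypothetical excerpt of `hwinfo --disk`-style output; real output is
# much longer and richer.
SAMPLE_HWINFO = """\
disk:
  Device File: /dev/sda
  Driver: "megaraid_sas"
disk:
  Device File: /dev/sdb
  Driver: "ahci"
"""


def list_disks(hwinfo_output):
    """Return (device, driver) pairs parsed from hwinfo-style text.

    A sketch only -- it pairs each "Device File:" line with the
    "Driver:" line that follows it.
    """
    disks = []
    device = None
    for line in hwinfo_output.splitlines():
        line = line.strip()
        if line.startswith("Device File:"):
            device = line.split(":", 1)[1].strip()
        elif line.startswith("Driver:") and device:
            driver = line.split(":", 1)[1].strip().strip('"')
            disks.append((device, driver))
            device = None
    return disks


print(list_disks(SAMPLE_HWINFO))
# -> [('/dev/sda', 'megaraid_sas'), ('/dev/sdb', 'ahci')]
```

Exposed as a salt execution module, output like this is what a command such as `salt -C 'I@roles:storage' cephdisks.list` would aggregate per storage minion.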