managed-upgrade-operator
demonstrate how to check if a node is stuck draining
Adds a StuckNode condition on the ManagedUpgradeOperator. This is slightly more than simply asking "has it taken a while?". It does the following:
- have I observed the same node draining for longer than a set period of time
- if I have, report the following:
  - all still-running pods with very long termination grace period seconds
  - all still-running pods protected by PDBs that are preventing eviction, indicating which PDBs are blocking
We have a choice about whether to report pods with long termination grace period seconds under StuckNode or under a different condition, since we know they will eventually be cleaned up. A rough sketch of the two checks follows.
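As an illustration only: a minimal sketch of what those two checks could look like using a plain client-go clientset. The `longGracePeriod` threshold, the function names, and the clientset-based approach are assumptions for this sketch; the PR's actual checker in `stuck_node_checker.go` may be structured differently.

```go
// Sketch of the two "stuck node" signals: pods with long termination grace
// periods, and pods whose eviction is currently blocked by a PDB.
package stucknodecondition

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
)

// longGracePeriod is an illustrative threshold for pods that may
// legitimately take a long time to terminate during a drain.
const longGracePeriod = 5 * time.Minute

// slowAndBlockedPods lists pods still running on nodeName that either have a
// long terminationGracePeriodSeconds or are covered by a PodDisruptionBudget
// that currently allows zero disruptions (so eviction is blocked).
func slowAndBlockedPods(ctx context.Context, kubeClient kubernetes.Interface, nodeName string) (slow, blocked []corev1.Pod, err error) {
	pods, err := kubeClient.CoreV1().Pods("").List(ctx, metav1.ListOptions{
		FieldSelector: fields.OneTermEqualSelector("spec.nodeName", nodeName).String(),
	})
	if err != nil {
		return nil, nil, err
	}

	for _, pod := range pods.Items {
		if pod.Status.Phase != corev1.PodRunning {
			continue
		}
		if tgps := pod.Spec.TerminationGracePeriodSeconds; tgps != nil &&
			time.Duration(*tgps)*time.Second > longGracePeriod {
			slow = append(slow, pod)
		}
		pdbBlocked, err := blockedByPDB(ctx, kubeClient, &pod)
		if err != nil {
			return nil, nil, err
		}
		if pdbBlocked {
			blocked = append(blocked, pod)
		}
	}
	return slow, blocked, nil
}

// blockedByPDB reports whether any PodDisruptionBudget in the pod's namespace
// selects the pod and currently reports zero allowed disruptions.
func blockedByPDB(ctx context.Context, kubeClient kubernetes.Interface, pod *corev1.Pod) (bool, error) {
	pdbs, err := kubeClient.PolicyV1().PodDisruptionBudgets(pod.Namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, pdb := range pdbs.Items {
		selector, err := metav1.LabelSelectorAsSelector(pdb.Spec.Selector)
		if err != nil {
			continue
		}
		if selector.Matches(labels.Set(pod.Labels)) && pdb.Status.DisruptionsAllowed == 0 {
			return true, nil
		}
	}
	return false, nil
}
```

The PDB check here treats a pod as blocked when any PodDisruptionBudget selecting it reports `disruptionsAllowed: 0`, which is the same signal the eviction API uses to reject an eviction during a drain.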
Assuming we agree on the utility of the change, things left to do:
- unit tests. Probably easiest to drive based on must-gather loading using https://github.com/openshift/library-go/tree/master/pkg/manifestclienttest so that any bugs can easily have test cases added for them
- wiring into the running logic on a 10-minute clock. It's currently just a separate POC.
- education on server-side apply, plus adding a library similar to the library-go diff logic to allow a dynamic client to serialize and set this status (a rough sketch follows below).
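For the server-side apply item, here is a minimal sketch of applying only the StuckNode condition through a dynamic client, assuming a recent client-go that exposes `ApplyStatus` on the dynamic resource interface. The group/version/resource, condition fields, and field manager name are assumptions for illustration, not the PR's actual wiring.

```go
// Sketch: set a single status condition via server-side apply with a dynamic
// client, so other field managers' status fields are left untouched.
package stucknodecondition

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// upgradeConfigGVR is assumed here; adjust to the operator's actual API group.
var upgradeConfigGVR = schema.GroupVersionResource{
	Group:    "upgrade.managed.openshift.io",
	Version:  "v1alpha1",
	Resource: "upgradeconfigs",
}

// applyStuckNodeCondition applies only the StuckNode condition to the
// UpgradeConfig status subresource; server-side apply merges it with fields
// owned by other managers instead of overwriting the whole status.
func applyStuckNodeCondition(ctx context.Context, client dynamic.Interface, namespace, name, message string) error {
	patch := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "upgrade.managed.openshift.io/v1alpha1",
		"kind":       "UpgradeConfig",
		"metadata": map[string]interface{}{
			"name":      name,
			"namespace": namespace,
		},
		"status": map[string]interface{}{
			"conditions": []interface{}{
				map[string]interface{}{
					"type":               "StuckNode",
					"status":             "True",
					"reason":             "NodeDrainStuck",
					"message":            message,
					"lastTransitionTime": time.Now().UTC().Format(time.RFC3339),
				},
			},
		},
	}}

	_, err := client.Resource(upgradeConfigGVR).Namespace(namespace).ApplyStatus(ctx, name, patch,
		metav1.ApplyOptions{FieldManager: "stuck-node-checker", Force: true})
	return err
}
```

Note that for server-side apply to merge individual conditions rather than take ownership of the whole list, the CRD schema needs `status.conditions` declared as `x-kubernetes-list-type: map` keyed on `type`.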
Codecov Report
:x: Patch coverage is 0% with 151 lines in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 52.54%. Comparing base (14278e8) to head (f678772).
:warning: Report is 54 commits behind head on master.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| ...ntrollers/stucknodecondition/stuck_node_checker.go | 0.00% | 151 Missing :warning: |
Additional details and impacted files
```
@@            Coverage Diff             @@
##           master     #514      +/-   ##
==========================================
- Coverage   54.22%   52.54%   -1.68%
==========================================
  Files         123      124       +1
  Lines        6124     6491     +367
==========================================
+ Hits         3321     3411      +90
- Misses       2599     2867     +268
- Partials      204      213       +9
```
| Files with missing lines | Coverage Δ |
|---|---|
| api/v1alpha1/upgradeconfig_types.go | 54.16% <ø> (ø) |
| ...ntrollers/stucknodecondition/stuck_node_checker.go | 0.00% <0.00%> (ø) |
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: a7vicky, deads2k
The full list of commands accepted by this bot can be found here.
The pull request process is described here.
- ~~OWNERS~~ [a7vicky]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
@deads2k: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/validate | f678772766f505a967cdc6a2817e5635f648502d | link | true | /test validate |
| ci/prow/lint | f678772766f505a967cdc6a2817e5635f648502d | link | true | /test lint |
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closed this PR.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.