node-problem-detector
Gauge for FilesystemIsReadOnly not downgraded to 0 after fixing the problem
The problem occurred when the filesystem went into read-only mode. That was fixed, but the metrics still showed the counter and gauge set to 1. To test this, I injected the FilesystemIsReadOnly pattern into /dev/kmsg multiple times (see https://github.com/kubernetes/node-problem-detector/blob/master/config/kernel-monitor.json):
1 log_monitor.go:160] New status generated: &{Source:kernel-monitor Events:[{Severity:info Timestamp:2020-10-08 06:44:16.09315274 +0000 UTC m=+1331754.148888064 Reason:FilesystemIsReadOnly Message:Node condition ReadonlyFilesystem is now: True, reason: FilesystemIsReadOnly}] Conditions:[{Type:KernelDeadlock Status:False Transition:2020-09-22 20:48:21.98500453 +0000 UTC m=+0.040739839 Reason:KernelHasNoDeadlock Message:kernel has no deadlock} {Type:ReadonlyFilesystem Status:True Transition:2020-10-08 06:44:16.09315274 +0000 UTC m=+1331754.148888064 Reason:FilesystemIsReadOnly Message:Remounting filesystem read-only}]}
The metrics were still shown as 1 and were not downgraded to 0. Even after the issue with the read-only filesystem was fixed, the metrics remained at 1:
problem_counter{reason="FilesystemIsReadOnly"} 1
problem_gauge{reason="FilesystemIsReadOnly",type="ReadonlyFilesystem"} 1
As a workaround, the pod was deleted, after which the metrics were reset to 0. What is the reason for this behaviour? Is it the "permanent" type? Is deleting the pod the only solution?
kernel-monitor.json
{
  "type": "permanent",
  "condition": "ReadonlyFilesystem",
  "reason": "FilesystemIsReadOnly",
  "pattern": "Remounting filesystem read-only"
}
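For anyone trying to reproduce this, here is a minimal sketch of the injection/check described above. It assumes NPD is running on the node with the default kernel-monitor.json and that its Prometheus exporter listens on localhost:20257; the port and the sda1 device name are assumptions, so adjust them to your deployment. Writing to /dev/kmsg requires root.
repro.go
// repro.go: inject a line matching the "Remounting filesystem read-only"
// pattern into /dev/kmsg, then print the FilesystemIsReadOnly metric series.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"os"
	"strings"
	"time"
)

func main() {
	// The kernel monitor tails the kernel log, so any line containing the
	// rule's pattern triggers the ReadonlyFilesystem condition. The device
	// name here is arbitrary; only the pattern text matters.
	kmsg, err := os.OpenFile("/dev/kmsg", os.O_WRONLY, 0)
	if err != nil {
		panic(err)
	}
	fmt.Fprintln(kmsg, "EXT4-fs (sda1): Remounting filesystem read-only")
	kmsg.Close()

	// Give NPD a moment to pick up the log line.
	time.Sleep(5 * time.Second)

	// 20257 is assumed to be NPD's Prometheus exporter port; adjust if your
	// deployment uses a different --prometheus-port.
	resp, err := http.Get("http://localhost:20257/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Keep only the series relevant to this issue.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		if strings.Contains(sc.Text(), "FilesystemIsReadOnly") {
			fmt.Println(sc.Text())
		}
	}
}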
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
We are also facing a similar issue, but with many occurrences, since we use PVCs with GCP disks extensively. With a lot of mounting/unmounting operations, the kernel catches many read-only disk events (not on the node root disk), and consequently node-problem-detector sets the node as not ready. We may also need a more precise pattern in kernel-monitor.json to only catch root filesystem events.
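Since the rule patterns are regular expressions evaluated by NPD (which is written in Go), a tightened pattern can be sanity-checked locally before rolling it out. The candidate pattern and the device names below are only assumptions for illustration; the actual root device differs per node image.
pattern_check.go
// pattern_check.go: check that a tightened kernel-monitor pattern matches
// read-only remounts of the boot disk but ignores PVC-backed disks.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical tightened pattern: only match remounts on sdaN partitions.
	pattern := regexp.MustCompile(`EXT4-fs \(sda\d*\): Remounting filesystem read-only`)

	samples := []string{
		"EXT4-fs (sda1): Remounting filesystem read-only", // boot disk: should match
		"EXT4-fs (sdf): Remounting filesystem read-only",  // PVC-backed GCP disk: should not
	}
	for _, line := range samples {
		fmt.Printf("match=%-5v %s\n", pattern.MatchString(line), line)
	}
}
Note that this only narrows which events set the condition; it does not address the original report of the gauge staying at 1 after recovery.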
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-contributor-experience at kubernetes/community. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This looks like a long-standing bug that is still happening. Any suggestions here?
/remove-lifecycle rotten
/reopen
@wangzhen127: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Are we deploying the NPD as a Linux daemon or a privileged container?
On GKE, it is deployed as a Linux daemon.
@sharonosbourne do you remember if your issue was due to a read-only filesystem on a non-boot disk?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale