
Eliminate logspam using filelog monitor

skaven81 opened this pull request 9 months ago • 8 comments

When using a filelog monitor, the top-level pluginConfig.message filter provides a limited stream of log events to be checked against the rules[].pattern regexes. It is therefore normal for most log events to fail to match the top-level pluginConfig.message filter. Such a condition should not trigger a warning-level log message; in fact, those messages should be suppressed by default unless -v=5 or higher is used for troubleshooting/debugging purposes.

See also #1038

As discussed in the comments below, the original intent was that the pluginConfig.message setting for logMonitors would match all of the "expected" log events in the log file, with the subsequent rules[].pattern regexes then differentiating between the various failure modes in the log stream. Thus, the warning message that I've proposed downgrading to "info" level and suppressing in normal operation was only expected to appear when unexpected log events show up in the stream.

But this causes problems when trying to detect node problems in a log file that has a wide array of event messages. Because the last pluginConfig.message regex capture group is what is used in node condition and Event message fields, it is only possible to include details about a single failure mode in a given logMonitor configuration. When viewed through this lens, the logMonitor architecture actually works quite well:

  • Each logMonitor JSON file represents a single node failure mode
  • Each logMonitor JSON file has its pluginConfig.message regex configured to isolate the class of log messages in the log file that relate to the failure mode, and uses the last capture group to extract a diagnostic message that is included with the status condition or Event
  • The rules list in the logMonitor JSON enumerates the various sub-classes of the failure mode, with some perhaps generating permanent node conditions while others generate temporary Events (a sketch follows this list).
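
To make the architecture above concrete, here is a rough sketch of what a single-failure-mode logMonitor of this shape could look like. This is a hypothetical example, not one of the shipped configs: the NIC-related regexes, the nic-monitor source name, and the NICDown condition are invented for illustration, while the field layout follows the existing filelog samples.

```json
{
  "plugin": "filelog",
  "pluginConfig": {
    "timestamp": "^(\\w+ +\\d+ \\d+:\\d+:\\d+)",
    "message": "NIC \\w+: (link is down|link is flapping)",
    "timestampFormat": "Jan _2 15:04:05"
  },
  "logPath": "/var/log/messages",
  "lookback": "10m",
  "bufferSize": 10,
  "source": "nic-monitor",
  "conditions": [
    {
      "type": "NICDown",
      "reason": "NICIsUp",
      "message": "NIC link is up"
    }
  ],
  "rules": [
    {
      "type": "permanent",
      "condition": "NICDown",
      "reason": "NICLinkDown",
      "pattern": "link is down"
    },
    {
      "type": "temporary",
      "reason": "NICLinkFlapping",
      "pattern": "link is flapping"
    }
  ]
}
```

In this sketch, pluginConfig.message isolates only the NIC-related lines and its last capture group becomes the condition/Event message, while the two rules distinguish the permanent and temporary sub-classes of the failure mode. Every other line in /var/log/messages fails the pluginConfig.message filter, which is exactly the case that currently produces a warning per line.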

The problem with using logMonitors this way is that when pluginConfig.message filters out essentially all of the log messages in the file, every log event generates a warning message because the pluginConfig.message regex did not match. This creates a massive amount of unnecessary logspam that can quickly fill up container log partitions and costs a fortune in enterprise log management platforms like Splunk and Datadog.

This PR takes the most direct, lowest-impact approach to resolving this problem: it downgrades the Warning()-level message to Info() and prevents it from being emitted unless NPD is run with a non-default verbosity setting.
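
In code terms, the change has roughly the following shape. This is a minimal sketch using klog, not the exact diff from the PR: the logNoMatch helper name is hypothetical, the real message text differs, and the verbosity threshold shown (-v=5, as suggested above) may not be the value the PR ultimately uses.

```go
package filelog // illustrative placement; the real package and function names may differ

import "k8s.io/klog/v2"

// logNoMatch (hypothetical helper) shows the shape of the change: the message
// that used to be emitted with klog.Warningf for every non-matching log line
// is demoted to Info and gated behind a non-default verbosity level.
func logNoMatch(line, messagePattern string) {
	// Previously (sketch): klog.Warningf("log line %q does not match %q", line, messagePattern)
	klog.V(5).Infof("log line %q does not match the pluginConfig.message pattern %q", line, messagePattern)
}
```

With this shape, an administrator debugging a monitor can still see the per-line detail by running NPD with the higher verbosity flag, while default deployments stay quiet.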

Notably, this PR would not be required if a more comprehensive rework of the logMonitor message capture were performed. If the node condition and Event message were captured from the rules[].pattern regex rather than from the pluginConfig.message regex, then logMonitors could keep a broad capture mode that matches all or nearly all of the log messages in the file.
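
To make that concrete, under such a rework a logMonitor could keep a broad filter and push the detail into the rules, roughly like this abbreviated, hypothetical fragment (logPath, source, conditions, and the other required fields are omitted for brevity, and the patterns are invented for illustration rather than copied from a shipped config):

```json
{
  "plugin": "filelog",
  "pluginConfig": {
    "timestamp": "^(\\w+ +\\d+ \\d+:\\d+:\\d+)",
    "message": "kernel: (.*)",
    "timestampFormat": "Jan _2 15:04:05"
  },
  "rules": [
    {
      "type": "temporary",
      "reason": "OOMKilling",
      "pattern": "Killed process \\d+ (.+) total-vm.*"
    },
    {
      "type": "temporary",
      "reason": "TaskHung",
      "pattern": "task (\\S+) blocked for more than \\d+ seconds.*"
    }
  ]
}
```

Under that rework, the capture groups in the rule patterns (rather than the last group of the broad pluginConfig.message regex) would supply the condition/Event message, so the broad filter would no longer limit a monitor to a single failure mode.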

However, even with such a rework, I would still advise suppressing the "log message doesn't match pluginConfig.message regex" alert unless the administrator is explicitly debugging or troubleshooting NPD. Even if the node condition/Event message were captured from rules[].pattern (which would be my preference), many logMonitors would still want to filter the log stream down to a known set of input strings that are then matched against the rule patterns. This keeps the rule patterns simpler and easier to maintain, because they only have to match against a pre-filtered set of log events.

The specific use case where this logspam problem originated is actually one of the included sample log monitors: https://github.com/kubernetes/node-problem-detector/blob/master/config/disk-log-message-filelog.json. Observe that its pluginConfig.message only matches the log messages in /var/log/messages that correspond to a failure condition. ALL other messages are filtered out (and thus trigger the warning log that I've modified in this PR).

skaven81 · Feb 27 '25 00:02

Welcome @skaven81!

It looks like this is your first PR to kubernetes/node-problem-detector 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/node-problem-detector has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. :smiley:

k8s-ci-robot · Feb 27 '25 00:02

Hi @skaven81. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot · Feb 27 '25 00:02

/ok-to-test

hakman · Mar 04 '25 07:03

@wangzhen127 I am not sure if -v=5 is the exact value, but I agree with the approach. Ref: https://github.com/kubernetes/node-problem-detector/issues/945

/lgtm

hakman · Mar 08 '25 15:03

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: hakman, skaven81. Once this PR has been reviewed and has the lgtm label, please assign random-liu for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

  • Approvers can indicate their approval by writing /approve in a comment
  • Approvers can cancel approval by writing /approve cancel in a comment

k8s-ci-robot · Mar 08 '25 15:03

/cc @wangzhen127
/assign @wangzhen127

hakman · Mar 10 '25 21:03

CC @nikhil-bhat

hakman · Mar 11 '25 11:03

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jun 09 '25 17:06

/remove-lifecycle stale

skaven81 · Jun 09 '25 18:06

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Sep 07 '25 18:09

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Oct 07 '25 19:10

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot · Nov 06 '25 19:11

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot · Nov 06 '25 19:11