
Avoid log spam about cluster node failure detection by each primary

Open · hpatro opened this issue 7 months ago · 7 comments

After node failure detection/recovery and gossip by each primary, we log the failure detection/recovery at NOTICE level, which can spam the server log, and the behavior is quite expensive on EC2 burstable instance types. I would prefer rolling it back to VERBOSE level.

Change was introduced in #633
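
For context, a minimal sketch of the kind of change being asked for here, assuming the logging call site in src/cluster_legacy.c looks roughly like this (the message text and variable names are placeholders for illustration, not the actual diff from #633):

```c
/* Illustrative only: demote the per-failure-report log from NOTICE to
 * VERBOSE so it is filtered out under the default loglevel (notice). */

/* Before: one NOTICE line for every failure report received via gossip. */
serverLog(LL_NOTICE, "Node %.40s reported node %.40s as not reachable.",
          sender->name, node->name);

/* After: same information, only emitted when loglevel is verbose or debug. */
serverLog(LL_VERBOSE, "Node %.40s reported node %.40s as not reachable.",
          sender->name, node->name);
```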

hpatro avatar Apr 26 '25 06:04 hpatro

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 71.01%. Comparing base (0b94ca6) to head (0772c2f). Report is 98 commits behind head on unstable.

Additional details and impacted files
@@             Coverage Diff              @@
##           unstable    #2010      +/-   ##
============================================
- Coverage     71.01%   71.01%   -0.01%     
============================================
  Files           123      123              
  Lines         66033    66113      +80     
============================================
+ Hits          46892    46948      +56     
- Misses        19141    19165      +24     
| Files with missing lines | Coverage Δ |
|---|---|
| src/cluster_legacy.c | 86.46% <100.00%> (+0.37%) ⬆️ |

... and 24 files with indirect coverage changes


codecov[bot] avatar Apr 26 '25 07:04 codecov[bot]

I don't feel strongly either way, so I would appreciate input from @enjoy-binbin once he is back from vacation.

madolson avatar Apr 29 '25 19:04 madolson

OK, in your environment, do the nodes frequently enter and exit the pfail/fail state? The main reason I made this change was to better track changes in node state.

enjoy-binbin avatar May 08 '25 07:05 enjoy-binbin

> OK, in your environment, do the nodes frequently enter and exit the pfail/fail state? The main reason I made this change was to better track changes in node state.

My concern is that with a cluster of large size, this log statement becomes quite ineffective, logging once for each primary's report, and can cause spikes in CPU utilisation.

hpatro avatar May 12 '25 06:05 hpatro

> My concern is that with a cluster of large size, this log statement becomes quite ineffective, logging once for each primary's report, and can cause spikes in CPU utilisation.

Have you already encountered this problem in a real way, or is it just an issue you see in the test logs? If the node frequently goes pfail/fail, the log issue should be relatively less serious?

Or are you saying that when a node is actually dying, it is too heavy for the cluster to print the log on each node (like if we have 256 shards)? I suppose there can be such cases.

Anyway, I don't have a strong opinion, let's seek other opinions. @PingXie @zuiderkwast Sorry to ping you about this small (old) change, please feel free to leave a comment.

The loglevel default is notice

# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)

enjoy-binbin avatar May 12 '25 07:05 enjoy-binbin

It's hard to guess the scenarios users will encounter. To say yes to a PR like this, I would like a more thorough explanation of the possible scenarios and the effect of this.

When a node is down in a 256-shard cluster, each node will detect that it's down and then they will also receive 254 gossips that say that it's not reachable. It can be too verbose, I guess. So in what scenarios do you get this?

  1. A node has actually crashed, is overloaded or has a network problem, etc.
  2. The admin is taking down the node on purpose, for example to upgrade it.
  3. Anything else?

And do users rely on it? I don't know. Probably it's enough to log as NOTICE when it's marked as FAIL.
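
To put a very rough number on the volume (back-of-the-envelope, assuming one NOTICE line per failure report received, which is how I read the issue description; replicas, dedup, and pfail/fail transitions are ignored):

  one node down, 256-node cluster:
      ~255 direct PFAIL detections + ~255 observers × ~254 gossip reports ≈ 65,000 lines cluster-wide
  one AZ down (~85 of 256 nodes):
      ~171 surviving observers × ~170 reports × 85 failed nodes ≈ 2.5 million lines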

zuiderkwast avatar May 12 '25 08:05 zuiderkwast

I missed that this PR exists, but I also experienced a lot of compute just going into logging. Sharing the issue here: https://github.com/valkey-io/valkey/issues/2076

sarthakaggarwal97 avatar May 13 '25 06:05 sarthakaggarwal97

> Have you already encountered this problem in a real way, or is it just an issue you see in the test logs? If the node frequently goes pfail/fail, the log issue should be relatively less serious?

Most of this is from testing Valkey in a large cluster setup, and we want to avoid unnecessary resource utilization as much as possible. It's not about a node flip-flopping, but in the case of an AZ failure we risk logging a lot of information which is not really valuable.

> Or are you saying that when a node is actually dying, it is too heavy for the cluster to print the log on each node (like if we have 256 shards)? I suppose there can be such cases.

Yes, a single node going down is still fine; an entire AZ going down is very expensive based on our testing.

hpatro avatar Jun 05 '25 21:06 hpatro

@enjoy-binbin / @zuiderkwast Shall we merge this and work on https://github.com/valkey-io/valkey/pull/2011 for better observability of cluster state?

hpatro avatar Jun 05 '25 21:06 hpatro