
Some acknowledged messages not being deleted from stream with sources and WorkQueue retention

Open zatlodan opened this issue 1 year ago • 8 comments

Observed behavior

A stream (STREAM_B_Q) with a single consumer and retention set to WorkQueue reports a non-zero message count after all messages have been consumed and acknowledged by that consumer. This stream (STREAM_B_Q) sources from another stream (STREAM_A) whose retention is set to Limits.

This behavior has occurred after a large amount of data was inserted into the source stream (STREAM_A).

Some more details:

  • The stream configuration was never changed
  • The stream has never had any consumer other than the one described below
  • No related issues found in server logs

STREAM_A

This is the source stream into which the data were published.

Config

Subjects: STREAM_A.>
Replicas: 3
Storage: File
Retention: Limits
Acknowledgements: true
Discard Policy: Old
Duplicate Window: 5m0s
Direct Get: true
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits: Unlimited

State

Messages: 7,003,488
Bytes: 1.7 GiB
FirstSeq: 640,772 @ 2024-02-20T12:21:01 UTC
LastSeq: 7,644,259 @ 2024-02-29T10:12:15 UTC
Active Consumers: 0
Number of Subjects: 1
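
For reference, a minimal sketch of declaring a stream with this configuration through the nats npm client (the server URL is an assumption; fields not listed above are left at their defaults):

```ts
import { connect, RetentionPolicy, StorageType, DiscardPolicy } from "nats";

// Assumes a local cluster endpoint; adjust servers/credentials as needed.
const nc = await connect({ servers: "nats://localhost:4222" });
const jsm = await nc.jetstreamManager();

// STREAM_A: Limits-retention source stream, file storage, 3 replicas.
await jsm.streams.add({
  name: "STREAM_A",
  subjects: ["STREAM_A.>"],
  retention: RetentionPolicy.Limits,
  storage: StorageType.File,
  discard: DiscardPolicy.Old,
  num_replicas: 3,
  duplicate_window: 5 * 60 * 1_000_000_000, // 5m, in nanoseconds
  allow_direct: true,
});
```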

STREAM_B_Q

This is the WorkQueue stream exhibiting the issue.

Config

Subjects: STREAM_B_Q.>
Replicas: 3
Storage: File
Retention: WorkQueue
Acknowledgements: true
Discard Policy: Old
Duplicate Window: 2m0s
Direct Get: true
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits: Unlimited
Sources: STREAM_A

State

Messages: 1,980  <-- This is the issue
Bytes: 636 KiB
FirstSeq: 7,318,290 @ 2024-02-23T11:13:58 UTC
LastSeq: 7,644,857 @ 2024-02-29T10:27:17 UTC
Deleted Messages: 324,588 <-- This is the issue
Active Consumers: 1
Number of Subjects: 1
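
Continuing the sketch above (same nc/jsm handles), the sourced WorkQueue stream might be declared like this; the sources entry is what links it to STREAM_A:

```ts
// STREAM_B_Q: WorkQueue-retention stream fed from STREAM_A via a source.
await jsm.streams.add({
  name: "STREAM_B_Q",
  subjects: ["STREAM_B_Q.>"],
  retention: RetentionPolicy.Workqueue,
  storage: StorageType.File,
  discard: DiscardPolicy.Old,
  num_replicas: 3,
  duplicate_window: 2 * 60 * 1_000_000_000, // 2m, in nanoseconds
  allow_direct: true,
  sources: [{ name: "STREAM_A" }],
});
```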

Consumer

Name: stream-b-testing-consumer
Pull Mode: true
Deliver Policy: All
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 10,000
Max Waiting Pulls: 10
Replicas: 3
Last Delivered Message: Consumer sequence: 7,645,772 Stream sequence: 7,644,857 Last delivery: 9m19s ago
Acknowledgment floor: Consumer sequence: 7,645,772 Stream sequence: 7,644,857 Last Ack: 8m49s ago
Outstanding Acks: 0 out of maximum 10,000
Redelivered Messages: 0
Unprocessed Messages: 0
Waiting Pulls: 1 of maximum 10
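
For completeness, a rough sketch of how such a durable pull consumer could be created and drained with the same nats npm client (server URL, batch size, and expiry are illustrative assumptions, not the reporter's actual code):

```ts
import { connect, AckPolicy, DeliverPolicy } from "nats";

const nc = await connect({ servers: "nats://localhost:4222" });
const jsm = await nc.jetstreamManager();
const js = nc.jetstream();

// Durable pull consumer mirroring the configuration listed above.
await jsm.consumers.add("STREAM_B_Q", {
  durable_name: "stream-b-testing-consumer",
  ack_policy: AckPolicy.Explicit,
  deliver_policy: DeliverPolicy.All,
  ack_wait: 30 * 1_000_000_000, // 30s, in nanoseconds
  max_ack_pending: 10_000,
  max_waiting: 10,
});

// Pull a batch and acknowledge each message explicitly.
const iter = js.fetch("STREAM_B_Q", "stream-b-testing-consumer", {
  batch: 100,
  expires: 5_000,
});
for await (const m of iter) {
  // ...process the message...
  m.ack();
}

await nc.drain();
```

With WorkQueue retention, each explicit ack should remove the message from STREAM_B_Q, which is why a lingering message count after the acknowledgment floor has caught up is unexpected.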

View from metrics: this is the jetstream_stream_total_messages metric for stream STREAM_B_Q around the time the issue arose. You can see 0 messages in the stream before the bulk publish and 1,980 after.

(screenshot: jetstream_stream_total_messages for STREAM_B_Q)

Cluster info: 4 nodes, all with the same version and hardware specs, on the same private network. No leaf nodes connected.

Expected behavior

All messages are removed from the stream after acknowledgement, and the stream reports 0 total messages.

Server and client version

Server: 2.10.5 (git commit 0883d32, go1.21.4)

Consuming JS client: https://www.npmjs.com/package/nats Version: 2.15.1

CLI used to check: version 0.0.35

Host environment

No response

Steps to reproduce

The issue is flaky and occurs seemingly at random throughout the month, but it appears to be triggered by sudden spikes of data published into the source stream. A rough TypeScript reproduction sketch follows the steps below.

  1. Set up a 3 or 4 node cluster
  2. Create the streams with the configuration described in the "Observed behavior" section
  3. Publish a large batch of data into the source stream
  4. Wait for the consumer to consume the messages
  5. Observe the issue in the WorkQueue stream
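
A reproduction sketch under the same assumptions as above (local server URL, an illustrative STREAM_A.test subject, and an arbitrary batch size); after the consumer has drained the WorkQueue stream, a non-zero count in the final check corresponds to the reported state:

```ts
import { connect } from "nats";

const nc = await connect({ servers: "nats://localhost:4222" });
const js = nc.jetstream();
const jsm = await nc.jetstreamManager();

// Step 3: publish a large batch into the source stream.
const payload = new Uint8Array(256);
for (let i = 0; i < 500_000; i++) {
  await js.publish("STREAM_A.test", payload);
}

// Steps 4/5: once the consumer has acknowledged everything, the WorkQueue
// stream should report zero messages; a non-zero count indicates the issue.
const info = await jsm.streams.info("STREAM_B_Q");
console.log("messages left in STREAM_B_Q:", info.state.messages);

await nc.drain();
```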

zatlodan · Feb 29 '24

Thanks for the report. Best to upgrade to the latest patch version, 2.10.11. If the issue persists, let us know.

derekcollison · Feb 29 '24

Will leave open for now.

derekcollison · Feb 29 '24

I was facing this with nats-server version 2.9.25; after upgrading to version 2.10.12 the issue was resolved.

zakk616 · Mar 19 '24

> I was facing this with nats-server version 2.9.25; after upgrading to version 2.10.12 the issue was resolved.

Thanks for the reply. We will be updating NATS on our prod environment this week; I will post an update as soon as I can.

zatlodan · Mar 19 '24

We have updated all our NATS server environments to version 2.10.12.

We have cleared the affected streams of any messages and recreated the consumers.

The issue is still there, but in a different form. After a week of monitoring we have a hanging message in 3 of our 10 streams. Currently it is just a single message.

The difference now is that only one of the instances sees the message as stuck. In some cases it is the leader of the stream/consumer and in some cases it is not.

zatlodan · Apr 17 '24

I think https://github.com/nats-io/nats-server/pull/5270 fixes this, and the fix is available in 2.10.14.

vigith · Apr 17 '24

Thanks for the update @zatlodan, that is a condition that we were able to reproduce and was addressed in the v2.10.14 release from last week.

wallyqs · Apr 17 '24

Okay, thank you for the response; we will update to 2.10.14 and let you know.

zatlodan · Apr 18 '24

It seems the issue is no longer present after the update to 2.10.14.

Thank you all for the help; I will now close this issue :+1:

zatlodan · May 15 '24