Kyle Cooke

Results: 18 comments of Kyle Cooke

I've just seen the same issue happen to a vertex in one of our pipelines; recreating the pod did not resolve it. Eventually removing and recreating the whole pipeline resolved...

```
config:
  jetstream:
    url: nats://isbsvc-default-js-svc.preview.svc:4222
    auth: ********
    streamConfig: |
      consumer:
        ackwait: 60s
        maxackpending: 25000
      otbucket:
        history: 1
        maxbytes: 0
        maxvaluesize: 0
        replicas: 3
        storage: 0
        ttl: 3h
      procbucket:
        history: 1
        ...
```

```
apiVersion: numaflow.numaproj.io/v1alpha1
kind: InterStepBufferService
metadata:
  name: default
spec:
  jetstream:
    version: 2.10.17
    persistence:
      volumeSize: 3Gi
```

We are currently on numaflow `v1.5.0-rc5`.
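For anyone following along: a minimal sketch of how I'd expect to override the stream/consumer settings shown above on a per-ISB-service basis, assuming the `spec.jetstream.bufferConfig` field accepts the same keys as the controller-level `streamConfig` (field name taken from the Numaflow API docs as I read them):

```
# Sketch only: assumes bufferConfig mirrors the controller streamConfig keys.
apiVersion: numaflow.numaproj.io/v1alpha1
kind: InterStepBufferService
metadata:
  name: default
spec:
  jetstream:
    version: 2.10.17
    bufferConfig: |
      consumer:
        ackwait: 60s
        maxackpending: 25000
```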

Possibly related was this log from the isbsvc: `[7] 2025/06/05 12:54:33.993698 [WRN] JetStream request queue has high pending count: 18062`. We are seeing a lot of these, with the number...
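In case it helps others debug: here's roughly how I've been inspecting the JetStream side. This assumes the `nats` CLI, a port-forward to the isbsvc service (name and port from our config above), and credentials for the server, since ours has auth enabled:

```
# Forward the JetStream port from the ISB service
kubectl -n preview port-forward svc/isbsvc-default-js-svc 4222:4222

# Overall JetStream health per server (may need system-account access)
nats -s nats://localhost:4222 server report jetstream

# Stream-level message counts and consumer state
nats -s nats://localhost:4222 stream report
```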

* TPS?
* Payload size when entering the pipeline is < 300 bytes, but I can't see a metric exposed for what they are inside numaflow; I would expect them to be bigger...

The current message rate of our whole NATS server (we use JetStream for our sources) is less than 2 messages/second (averaged across a minute); that will include inputs and outputs to...

Is rc5 not the latest? Yeah, that message was getting into the millions. I think something is thrashing the isbsvc trying to create streams and consumers, and it's that that's causing...
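To check the thrashing theory, something like this should show whether streams and consumers are churning (same port-forward as above; the stream name is illustrative):

```
# List streams; re-run and diff to spot creation/deletion churn
nats -s nats://localhost:4222 stream ls

# List consumers on a given stream
nats -s nats://localhost:4222 consumer ls <stream-name>
```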

If you could provide some guidance on limiting the CPU and memory of the isbsvc, we could try making a separate service for each PR.
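For what it's worth, this is the shape I'd try first, assuming the ISB service spec supports a `containerTemplate` with standard Kubernetes resources under `spec.jetstream` (field name taken from the Numaflow API docs as I understand them):

```
# Sketch: assumes jetstream.containerTemplate.resources is honored.
apiVersion: numaflow.numaproj.io/v1alpha1
kind: InterStepBufferService
metadata:
  name: default
spec:
  jetstream:
    version: 2.10.17
    persistence:
      volumeSize: 3Gi
    containerTemplate:
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          cpu: "1"
          memory: 2Gi
```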