When remote writing from Prometheus, getting error: expected to write body size of -24158 but got 41378
Describe the bug
In prometheus.yml I am using remote write to send metrics to a source vertex of HTTP type, and I am getting this error:

  ERROR 2024-06-19 21:54:43 {"level":"error","ts":1718814283.906287,"logger":"numaflow.Source-processor","caller":"forward/forward.go:473","msg":"Retrying failed msgs","vertex":"simple-pipeline-in","errors":{"expected to write body size of -24158 but got 41378":1},"stacktrace":"github.com/numaproj/numaflow/pkg/forward.(*InterStepDataForward).writeToBuffer\n\t/home/runner/work/numaflow/numaflow/pkg/forward/forward.go:473\ngithub.com/numaproj/numaflow/pkg/forward.(*InterStepDataForward).writeToBuffers\n\t/home/runner/work/numaflow/numaflow/pkg/forward/forward.go:428\ngithub.com/numaproj/numaflow/pkg/forward.(*InterStepDataForward).forwardAChunk\n\t/home/runner/work/numaflow/numaflow/pkg/forward/forward.go:314\ngithub.com/numaproj/numaflow/pkg/forward.(*InterStepDataForward).Start.func1\n\t/home/runner/work/numaflow/numaflow/pkg/forward/forward.go:143"}
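As a baseline check that takes Prometheus out of the picture, a small raw body can be POSTed to the same HTTP source endpoint. This is a minimal sketch, assuming it is run from a pod inside the cluster (the service name is cluster-local) and that -k is used to skip verification of the self-signed certificate on port 8443:

  # Post a small test payload straight to the HTTP source vertex.
  # Run from inside the cluster; -k skips verification of the self-signed cert.
  curl -k -X POST -d '{"test":"message"}' \
    https://simple-pipeline-in.default.svc.cluster.local:8443/vertices/in

If a single message like this flows through to the out vertex, the failure is more likely tied to the size or shape of the remote-write batches than to the HTTP source endpoint itself.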
To Reproduce
Steps to reproduce the behavior:
- Install Numaflow and Prometheus.
- Create an InterStepBufferService (a minimal example manifest is sketched after these steps).
- Create the pipeline using the following pipeline YAML:
  apiVersion: numaflow.numaproj.io/v1alpha1
  kind: Pipeline
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"numaflow.numaproj.io/v1alpha1","kind":"Pipeline","metadata":{"annotations":{},"name":"simple-pipeline","namespace":"default"},"spec":{"edges":[{"from":"in","to":"cat"},{"from":"cat","to":"out"}],"vertices":[{"name":"in","source":{"generator":{"duration":"1s","rpu":5}}},{"name":"cat","udf":{"builtin":{"name":"cat"}}},{"name":"out","sink":{"log":{}}}]}}
    creationTimestamp: "2024-06-19T15:54:29Z"
    finalizers:
      - pipeline-controller
    generation: 2
    name: simple-pipeline
    namespace: default
    resourceVersion: "131944"
    uid: e25b42c7-c454-46c7-b74b-2cd98cd6b952
  spec:
    edges:
      - from: in
        to: cat
      - from: cat
        to: out
    lifecycle:
      deleteGracePeriodSeconds: 30
      desiredPhase: Running
    limits:
      bufferMaxLength: 30000
      bufferUsageLimit: 80
      readBatchSize: 500
      readTimeout: 1s
    vertices:
      - name: in
        source:
          http:
            service: true
      - name: cat
        scale: {}
        udf:
          builtin:
            name: cat
      - name: out
        scale: {}
        sink:
          log: {}
    watermark:
      disabled: false
      maxDelay: 0s
  status:
    conditions:
      - lastTransitionTime: "2024-06-19T15:54:29Z"
        message: Successful
        reason: Successful
        status: "True"
        type: Configured
      - lastTransitionTime: "2024-06-19T15:54:29Z"
        message: Successful
        reason: Successful
        status: "True"
        type: Deployed
    lastUpdated: "2024-06-19T15:54:29Z"
    phase: Running
    sinkCount: 1
    sourceCount: 1
    udfCount: 1
    vertexCount: 3
- In the Prometheus config prometheus.yml, include the following remote_write section:

  remote_write:
    - name: remote-test
      url: "https://simple-pipeline-in.default.svc.cluster.local:8443/vertices/in"
      remote_timeout: 1m
      queue_config:
        capacity: 10000
        min_shards: 10
        max_shards: 100
        max_samples_per_send: 50
        batch_send_deadline: 10s
        min_backoff: 30ms
        max_backoff: 100ms
      tls_config:
        insecure_skip_verify: true
      write_relabel_configs:
        - action: keep
          regex: cpu_*;true
          source_labels:
            - name
            - nginx
        - action: keep
          regex: cpu_*;true
          source_labels:
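For the InterStepBufferService step above, something along the lines of the Numaflow quick-start example is assumed; the name and JetStream version below are placeholders, not the exact manifest that was applied:

  apiVersion: numaflow.numaproj.io/v1alpha1
  kind: InterStepBufferService
  metadata:
    name: default          # assumed name; the pipeline uses the default ISB service
  spec:
    jetstream:
      version: latest      # assumed version tag; use whatever was actually deployed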
Expected behavior
The source vertex accepts the metrics from Prometheus and forwards them to the cat vertex and then to the out vertex.
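To confirm the end-to-end flow, the log sink output can be tailed. A sketch, assuming the vertex pods carry the numaflow.numaproj.io/vertex-name label and that the main container is named numa (both are assumptions; tailing the simple-pipeline-out-... pod directly also works):

  # Tail the "out" vertex to see messages emitted by the log sink.
  kubectl -n default logs -f -l numaflow.numaproj.io/vertex-name=out -c numa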
Environment:
- Kubernetes: v1.29.2
- Numaflow: quay.io/numaproj/numaflow:v0.7.2
Message from the maintainers:
Impacted by this bug? Give it a 👍. We often sort issues this way to know what to prioritize.
For quick help and support, join our Slack channel.