# [Bug] pulsar-proxy Liveness probe failed: HTTP probe failed with statuscode: 401
### Search before asking
- [X] I searched in the issues and found nothing similar.
### Version
- kubernetes: 1.28.4
- pulsar-helm-chart: 3.2.0
- apachepulsar/pulsar-all: 3.1.2
### Minimal reproduce step

1. `helm install pulsar --values charts/pulsar/values.yaml --set initialize=true --namespace pulsar pulsar-3.2.0.tgz`
2. `kubectl get pod -n pulsar`
3. `kubectl describe` (a concrete form is sketched below)
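The last step as written is elliptical; presumably it was run against one of the failing proxy pods, something like:

```bash
# Presumed concrete form of the reporter's "kubectl describe" step:
# show the events for one of the restarting proxy pods.
kubectl describe pod pulsar-proxy-0 -n pulsar

# The previous container's log may also show the rejected probe requests.
kubectl logs pulsar-proxy-0 -n pulsar --previous
```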
### What did you expect to see?
All pods in the `Running` state.
### What did you see instead?
`kubectl get pod -n pulsar` output:

```
NAME                                     READY   STATUS      RESTARTS        AGE
pulsar-bookie-0                          1/1     Running     0               11m
pulsar-bookie-1                          1/1     Running     0               11m
pulsar-bookie-2                          1/1     Running     0               11m
pulsar-broker-0                          1/1     Running     2 (4m30s ago)   11m
pulsar-broker-1                          1/1     Running     3 (3m42s ago)   11m
pulsar-broker-2                          1/1     Running     3 (3m42s ago)   11m
pulsar-proxy-0                           0/1     Running     1 (1s ago)      11m
pulsar-proxy-1                           0/1     Running     1 (8s ago)      11m
pulsar-proxy-2                           0/1     Running     1 (9s ago)      11m
pulsar-pulsar-init-7n2jz                 0/1     Completed   0               11m
pulsar-pulsar-manager-7887f99f77-cc9q2   1/1     Running     0               11m
pulsar-recovery-0                        1/1     Running     0               11m
pulsar-toolset-0                         1/1     Running     0               11m
pulsar-zookeeper-0                       1/1     Running     0               11m
pulsar-zookeeper-1                       1/1     Running     0               11m
pulsar-zookeeper-2                       1/1     Running     0               11m
```
`kubectl describe` output:

```
Warning  Unhealthy  3s (x7 over 63s)  kubelet  Liveness probe failed: HTTP probe failed with statuscode: 401
Warning  Unhealthy  3s (x8 over 63s)  kubelet  Readiness probe failed: HTTP probe failed with statuscode: 401
```
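The 401 indicates the kubelet's HTTP probe is being rejected by the proxy's authentication layer. A quick way to confirm this from inside a proxy pod, assuming the chart's default probe target of `/status.html` on HTTP port 8080 (both path and port are assumptions here, as is `curl` being present in the image):

```bash
# Replay the kubelet probe by hand from inside a proxy pod.
# Path and port are assumed; verify them against the livenessProbe block in
#   kubectl get pod pulsar-proxy-0 -n pulsar -o yaml
kubectl exec -n pulsar pulsar-proxy-0 -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/status.html
# A 401 here confirms the probe endpoint itself demands an auth token.
```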
Modifications on top of the chart's default values.yaml:

```yaml
namespace: "pulsar"
persistence: true
volumes:
  persistence: true
  local_storage: true
components:
  # zookeeper
  zookeeper: true
  # bookkeeper
  bookkeeper: true
  # bookkeeper - autorecovery
  autorecovery: true
  # broker
  broker: true
  # functions
  functions: true
  # proxy
  proxy: true
  # toolset
  toolset: true
  # pulsar manager
  pulsar_manager: true
# uses chart's appVersion when unspecified
defaultPulsarImageTag: 3.1.2
auth:
  authentication:
    enabled: true
    provider: "jwt"
    jwt:
      # Enable JWT authentication
      # If the token is generated by a secret key, set the usingSecretKey as true.
      # If the token is generated by a private key, set the usingSecretKey as false.
      usingSecretKey: false
  authorization:
    enabled: true
  superUsers:
    # broker to broker communication
    broker: "broker-admin"
    # proxy to broker communication
    proxy: "proxy-admin"
    # pulsar-admin client to broker/proxy communication
    client: "admin"
  # omits the above proxy role from superusers on the proxy
  # and configures it as a proxy role on the broker in addition to the superusers
  useProxyRoles: true
zookeeper:
  podMonitor:
    enabled: false
  nodeSelector:
    node-role.kubernetes.io/pulsar: pulsar
  volumes:
    persistence: true
    data:
      name: data
      size: 100Gi
      local_storage: true
bookkeeper:
  podMonitor:
    enabled: false
  nodeSelector:
    node-role.kubernetes.io/pulsar: pulsar
  volumes:
    persistence: true
    journal:
      name: journal
      size: 100Gi
      local_storage: true
    ledgers:
      name: ledgers
      size: 100Gi
      local_storage: true
autorecovery:
  podMonitor:
    enabled: false
  nodeSelector:
    node-role.kubernetes.io/pulsar: pulsar
broker:
  podMonitor:
    enabled: false
  nodeSelector:
    node-role.kubernetes.io/pulsar: pulsar
functions:
  component: functions-worker
  useBookieAsStateStore: true
proxy:
  podMonitor:
    enabled: false
  nodeSelector:
    node-role.kubernetes.io/pulsar: pulsar
dashboard:
  nodeSelector:
    node-role.kubernetes.io/pulsar: pulsar
toolset:
  nodeSelector:
    node-role.kubernetes.io/pulsar: pulsar
# disable the monitoring components
kube-prometheus-stack:
  enabled: false
pulsar_manager:
  nodeSelector:
    node-role.kubernetes.io/pulsar: pulsar
  admin:
    user: pulsar_manager
    password: zR7yF5fH
  configData:
    REDIRECT_HOST: "http://127.0.0.1"
    REDIRECT_PORT: "9527"
    DRIVER_CLASS_NAME: org.postgresql.Driver
    URL: jdbc:postgresql://postgresql-ha-pgpool:5432/pulsar_manager
    LOG_LEVEL: DEBUG
    JWT_TOKEN: <token>
```
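Worth noting: with `auth.authentication.enabled: true` and `usingSecretKey: false`, the chart expects an asymmetric JWT key pair plus per-role token secrets (`broker-admin`, `proxy-admin`, `admin`) to exist in the namespace before install. A hedged sketch of the usual preparation step, using the helper script shipped in the pulsar-helm-chart repository (flags as documented in its README; verify against your chart version):

```bash
# Creates the JWT key pair and the per-role token secrets that the chart
# mounts when JWT authentication is enabled.
# -n: kubernetes namespace, -k: helm release name, -c: create the namespace.
./scripts/pulsar/prepare_helm_release.sh -n pulsar -k pulsar -c
```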
### Anything else?
_No response_
### Are you willing to submit a PR?
- [X] I'm willing to submit a PR!
Duplicate of https://github.com/apache/pulsar-helm-chart/issues/447.
Are you sure you are using the 3.1.2 image? This was already fixed in 3.0.2 and 3.1.2 by #21428.
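One way to check which image the proxy pods are actually running (illustrative command; pod name taken from the listing above):

```bash
# Print the configured image and the resolved image ID (digest) for the
# proxy pod, to rule out a stale tag or cached image.
kubectl get pod pulsar-proxy-0 -n pulsar \
  -o jsonpath='{.spec.containers[*].image}{"\n"}{.status.containerStatuses[*].imageID}{"\n"}'
```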