[Bug] Can't start Pulsar cluster using the Helm chart from the chart hub
Reproduce
- create a k8s cluster
- `helm repo add streamnative https://charts.streamnative.io`
- `helm repo update`
- `kubectl create namespace pulsar`
- `helm install --set initialize=true function-mesh streamnative/pulsar`
```
NAME READY STATUS RESTARTS AGE
function-mesh-pulsar-alert-manager-0 2/2 Running 0 5m52s
function-mesh-pulsar-bookie-0 0/1 Pending 0 5m52s
function-mesh-pulsar-bookie-1 0/1 Pending 0 5m52s
function-mesh-pulsar-bookie-2 0/1 Pending 0 5m51s
function-mesh-pulsar-bookie-3 0/1 Pending 0 5m50s
function-mesh-pulsar-bookie-init-bzt42 0/1 Completed 0 5m51s
function-mesh-pulsar-broker-0 0/1 Init:1/2 0 5m52s
function-mesh-pulsar-broker-1 0/1 Init:1/2 0 5m52s
function-mesh-pulsar-broker-2 0/1 Init:1/2 0 5m51s
function-mesh-pulsar-grafana-c8f575ff5-nlmvc 1/1 Running 0 5m53s
function-mesh-pulsar-node-exporter-5w42f 1/1 Running 0 5m53s
function-mesh-pulsar-node-exporter-64whq 1/1 Running 0 5m53s
function-mesh-pulsar-node-exporter-fsmv5 1/1 Running 0 5m53s
function-mesh-pulsar-prometheus-0 2/2 Running 0 5m52s
function-mesh-pulsar-proxy-0 0/1 Init:1/2 0 5m51s
function-mesh-pulsar-proxy-1 0/1 Init:1/2 0 5m51s
function-mesh-pulsar-proxy-2 0/1 Init:1/2 0 5m51s
function-mesh-pulsar-pulsar-init-xzdfq 0/1 Completed 0 5m51s
function-mesh-pulsar-pulsar-manager-0 0/1 Init:0/1 0 5m52s
function-mesh-pulsar-pulsar-manager-init-s5sch 0/1 Init:0/1 0 5m51s
function-mesh-pulsar-recovery-0 1/1 Running 0 5m52s
function-mesh-pulsar-toolset-0 1/1 Running 0 5m52s
function-mesh-pulsar-zookeeper-0 1/1 Running 0 5m52s
function-mesh-pulsar-zookeeper-1 1/1 Running 0 5m3s
function-mesh-pulsar-zookeeper-2 1/1 Running 0 4m8s
```
And the bookie (bk) pods stay stuck in Pending.
Hi @wolfstudy, first let me say that I am not affiliated with Pulsar or StreamNative in any way.
Regarding the Helm failure: You just hit the standard Helm timeout of 5 minutes, cf.:
- configure timeout on the cli: https://helm.sh/docs/intro/using_helm/#helpful-options-for-installupgraderollback
You should review if and why the pods are still in the Pending state (`kubectl describe pods ...`, sketched below), cf.:
- https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application-introspection/
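A minimal sketch of how we usually find out why a pod is stuck in Pending (the pod name is taken from your output above, and I assume the release ended up in the `pulsar` namespace — adjust both as needed):

```bash
# The Events section at the bottom usually names the reason
# (insufficient CPU/memory, unbound PVC, node selector/affinity, ...)
kubectl describe pod function-mesh-pulsar-bookie-0 -n pulsar

# Recent events for the whole namespace, oldest first
kubectl get events -n pulsar --sort-by=.metadata.creationTimestamp
```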
See below for more details on our experience. Long story short: We use a max timeout of 1500 seconds (25 min). Let me know if that helps.
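For illustration, raising the timeout on install looks roughly like this; `--timeout` and `--wait` are the standard Helm 3 flags documented in the link above (I also added `--namespace pulsar` since you created that namespace — drop it if you really want the default namespace):

```bash
# Wait up to 25 minutes for the release's resources to become ready
helm install function-mesh streamnative/pulsar \
  --namespace pulsar \
  --set initialize=true \
  --timeout 1500s \
  --wait
```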
Note that the Pulsar chart is quite heavy on resources, as it targets a production-ready cluster by default. You will need some fairly strong nodes (VMs), so in case your cluster auto-scales it will take some time to provision enough nodes (about 2-5 minutes per node).
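A quick, chart-agnostic way to check whether your nodes still have enough allocatable CPU/memory for the bookie requests, and to see (and override) what the chart asks for by default:

```bash
# Per-node allocatable capacity vs. what is already requested
kubectl describe nodes | grep -A 8 "Allocated resources"

# Inspect the chart's default resource requests / replica counts
helm show values streamnative/pulsar | less
```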
Also, the Docker images are quite large, cf.:
- https://hub.docker.com/r/apachepulsar/pulsar-all

So pulling time may contribute to the timeout as well.
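If you suspect image pulls are eating into the timeout, you can gauge it by pulling the image manually on a node; the tag below is only a placeholder, use whatever tag your chart version deploys:

```bash
# Pull once to measure how long it takes, then check the size
docker pull apachepulsar/pulsar-all:latest
docker image ls apachepulsar/pulsar-all
```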
Once all pods can be created (not Pending anymore), the release waits until all pods are ready. Usually the last ones are the bookie/proxy/broker pods, which wait for the ZooKeeper pods before they can identify the cluster.
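During that last phase we simply watch the pods until everything reports Ready, roughly:

```bash
# Stream pod status changes; stop with Ctrl-C once all pods are Running/Ready
kubectl get pods -n pulsar --watch
```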
A common issue we faced is that, depending on the cloud provider you use (we use Azure, for example), the volume claims (PVCs) also take some time to be bound, e.g. ZooKeeper alone wants 3×50 GiB.
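Checking the claims is straightforward; a sketch, again assuming the release lives in the `pulsar` namespace:

```bash
# Shows whether each claim is Bound or still Pending
kubectl get pvc -n pulsar

# The events of a specific claim explain why binding is slow or stuck
# (<claim-name> is a placeholder; copy the real name from the list above)
kubectl describe pvc <claim-name> -n pulsar
```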
Closing legacy issues in charts. Please feel free to reopen the issue if this problem still exists in the current version.