kubeapps
Get error in local environment
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
- Go to '...'
- Click on '....'
- Scroll down to '....'
- See error
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
- Version [e.g. 2.4.4]
- Kubernetes version [e.g. 1.21.10]
- Package version [e.g. Helm 3.2, carvel-imgpkg 0.28.0]
Additional context
Add any other context about the problem here.
There is no way to use it at all. I need to refresh all the time, and suddenly it times out.
Hi @s5364733. Can you watch the output of
kubectl -n kubeapps get pods
while you are experiencing those issues? A 502 Bad Gateway usually means that the frontend (nginx) is unable to forward the requests because the service it's trying to forward to is unavailable. Most likely the kubeapps-apis service is being constantly restarted due to a lack of resources (not enough memory available being the most obvious).
Note that if you are running this locally, you need a decent amount of grunt on your local machine (32GB RAM), and even then, use
site/content/docs/latest/reference/manifests/kubeapps-local-dev-values.yaml
to ensure you only have one of each service running (in prod you'd want multiple, but on a local development environment it's too much for the one machine).
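For reference, a minimal sketch of applying those reduced values (assuming Kubeapps was installed from the Bitnami Helm chart into the kubeapps namespace; adjust the release name and the path to the values file in your checkout):

helm upgrade --install kubeapps bitnami/kubeapps \
  --namespace kubeapps \
  -f site/content/docs/latest/reference/manifests/kubeapps-local-dev-values.yaml

You can then leave kubectl -n kubeapps get pods --watch running to see whether the pods actually reach the ready state.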
in prod you'd want multiple, but on a local development environment, it's too much for the one machine
All services are started normally
Thanks for the extra info.
in prod you'd want multiple, but on a local development environment, it's too much for the one machine
All services are started normally
Not according to both of your screenshots? You've got two kubeapps-internal-kubeappsapis-*
pods and one postgresql pod, all showing the "Running" state, but none of them showing as ready (all show 0/1 ready).
This means that none of those pods is in a ready state (they are failing their readiness checks). It would be worth taking a look at the k8s documentation for how to get more info about readiness checks. My guess, as before (and strengthened by the number of restarts you are seeing), is that there's just not enough resource (memory, EDIT: or CPU) available, so k8s keeps killing pods to start others, etc. (e.g. your postgresql pod has 81 restarts in 23 hours, over 3 per hour).
Note: free -mh
shows you how much free memory you've got on your machine, not necessarily how much you've allocated for use with Docker (this varies based on your system).
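To see what the cluster itself has to work with, a quick sketch (assuming a single minikube node named "minikube", as in your pod description) is to compare the node's allocatable resources with what the pods are already requesting:

# total CPU/memory the node advertises to the scheduler
kubectl describe node minikube | grep -A 6 Allocatable
# how much of that is already requested by running pods
kubectl describe node minikube | grep -A 10 "Allocated resources"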
As mentioned earlier, I would use the local values that I pointed to so that you're not running two pods when you only need one.
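To confirm whether resource pressure is what's killing the pods, a couple of commands worth trying (a sketch; kubectl top needs the metrics-server addon, e.g. minikube addons enable metrics-server, and the pod name is taken from your screenshot):

# per-pod CPU and memory usage
kubectl -n kubeapps top pods
# logs from the previously crashed postgresql container, if there was one
kubectl -n kubeapps logs kubeapps-postgresql-0 --previous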
You might find that the output of kubectl --namespace kubeapps describe pod kubeapps-postgresql-0
gives you more info about why the pod is not ready (see the checks). Or paste it here, with the logs for that pod, out of interest. Cheers.
Name:             kubeapps-postgresql-0
Namespace:        kubeapps
Priority:         0
Service Account:  default
Node:             minikube/192.168.39.33
Start Time:       Tue, 19 Sep 2023 11:03:12 +0800
Labels:           app.kubernetes.io/component=primary
                  app.kubernetes.io/instance=kubeapps
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=postgresql
                  controller-revision-hash=kubeapps-postgresql-76c6bbd8c9
                  helm.sh/chart=postgresql-12.10.0
                  statefulset.kubernetes.io/pod-name=kubeapps-postgresql-0
Annotations:
Status:           Running
IP:               10.244.0.235
Controlled By:    StatefulSet/kubeapps-postgresql
Containers:
  postgresql:
    Container ID:    docker://191c5ac02958972248103e3fdd1ad3a3cde3a8ca74acc08873ae0e21b7f76b76
    Image:           docker.io/bitnami/postgresql:15.4.0-debian-11-r10
    Image ID:        docker-pullable://bitnami/postgresql@sha256:86c140fd5df7eeb3d8ca78ce4503fcaaf0ff7d2e10af17aa424db7e8a5ae8734
    Port:            5432/TCP
    Host Port:       0/TCP
    SeccompProfile:  RuntimeDefault
    State:           Running
      Started:       Wed, 20 Sep 2023 12:16:45 +0800
    Last State:      Terminated
      Reason:        Error
      Exit Code:     137
      Started:       Wed, 20 Sep 2023 12:14:22 +0800
      Finished:      Wed, 20 Sep 2023 12:16:35 +0800
    Ready:           True
    Restart Count:   97
    Requests:
      cpu:     250m
      memory:  256Mi
    Liveness:   exec [/bin/sh -c exec pg_isready -U "postgres" -d "dbname=assets" -h 127.0.0.1 -p 5432] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/sh -c -e exec pg_isready -U "postgres" -d "dbname=assets" -h 127.0.0.1 -p 5432
      [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:                        false
      POSTGRESQL_PORT_NUMBER:               5432
      POSTGRESQL_VOLUME_DIR:                /bitnami/postgresql
      PGDATA:                               /bitnami/postgresql/data
      POSTGRES_PASSWORD:                    <set to the key 'postgres-password' in secret 'kubeapps-postgresql'>  Optional: false
      POSTGRES_DATABASE:                    assets
      POSTGRESQL_ENABLE_LDAP:               no
      POSTGRESQL_ENABLE_TLS:                no
      POSTGRESQL_LOG_HOSTNAME:              false
      POSTGRESQL_LOG_CONNECTIONS:           false
      POSTGRESQL_LOG_DISCONNECTIONS:        false
      POSTGRESQL_PGAUDIT_LOG_CATALOG:       off
      POSTGRESQL_CLIENT_MIN_MESSAGES:       error
      POSTGRESQL_SHARED_PRELOAD_LIBRARIES:  pgaudit
    Mounts:
      /dev/shm from dshm (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9vfpw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  dshm:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  Memory
  data:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
  kube-api-access-9vfpw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From     Message
  Warning  BackOff    33m (x104 over 77m)   kubelet  Back-off restarting failed container postgresql in pod kubeapps-postgresql-0_kubeapps(fa0182c9-e300-4311-88c0-f5ceccfce91b)
  Warning  Unhealthy  23m (x435 over 168m)  kubelet  Readiness probe failed: command "/bin/sh -c -e exec pg_isready -U "postgres" -d "dbname=assets" -h 127.0.0.1 -p 5432\n[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]\n" timed out
  Warning  Unhealthy  18m (x156 over 164m)  kubelet  Liveness probe failed: command "/bin/sh -c exec pg_isready -U "postgres" -d "dbname=assets" -h 127.0.0.1 -p 5432" timed out
  Warning  Unhealthy  13m (x5 over 58m)     kubelet  Readiness probe failed: cannot exec in a stopped state: unknown
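Those events, together with the exit code 137 on the container's last state, suggest the container is being killed (out of memory or repeated liveness-probe failures) rather than hitting a network problem. One way to check what was recorded for the last termination (a sketch using the standard pod status fields):

kubectl -n kubeapps get pod kubeapps-postgresql-0 \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}'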
How can I allocate more memory to Docker? Or do you have a better solution to this problem?
docker info:
Client:
 Version:    24.0.5
 Context:    default
 Debug Mode: false
Server:
 Containers: 9
  Running: 2
  Paused: 0
  Stopped: 7
 Images: 13
 Server Version: 24.0.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: syslog
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version:
 runc version:
 init version:
 Security Options: seccomp (Profile: builtin)
 Kernel Version: 5.15.90.1-microsoft-standard-WSL2
 Operating System: Ubuntu 20.04.6 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 10
 Total Memory: 15.28GiB
 Name: Jackliang
 ID: 26d9706e-e3b4-437d-b407-86cbc2782ff9
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries: 127.0.0.0/8
 Registry Mirrors: https://ung2thfc.mirror.aliyuncs.com/
 Live Restore Enabled: false
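There are two places the memory ceiling can come from in this setup (a sketch based on the output above, not something confirmed in this thread): the minikube node the pod is scheduled on, and the WSL2 VM the Docker daemon runs in. Both limits can be raised, for example:

# recreate the minikube cluster with more memory and CPU (the flags only apply at creation time)
minikube stop
minikube delete
minikube start --memory=8192 --cpus=4

For the WSL2 side, the VM's limits live in %UserProfile%\.wslconfig on the Windows host, e.g.:

[wsl2]
memory=12GB
processors=8

followed by wsl --shutdown and a restart of Docker so the new limits take effect.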
Looks like a network problem?
Can't reproduce it locally. Did you manage to get it sorted out in the end?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.