Dashboard pod status not reported correctly for pods using native sidecar containers
What happened?
Pod status in the dashboard shows "Init: x/y" instead of "Running":
Correct pod status shown in kubectl cli:
What did you expect to happen?
Pod status should show "Running" if all containers (including native sidecar containers) are ready.
How can we reproduce it (as minimally and precisely as possible)?
- Use the example here: https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/#sidecar-example
- Observe the "Init: x/y" status in the dashboard
- Replace the deployment definition with the one below, which converts the sidecar to a standard init container
- Observe the "Running" status in the dashboard
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kube-system
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: alpine:latest
          command: ['sh', '-c', 'while true; do echo "logging" >> /opt/logs.txt; sleep 1; done']
          volumeMounts:
            - name: data
              mountPath: /opt
      initContainers:
        - name: logshipper
          image: alpine:latest
          # restartPolicy: Always
          # command: ['sh', '-c', 'tail -F /opt/logs.txt']
          command: ['sh', '-c', 'sleep 1']
          volumeMounts:
            - name: data
              mountPath: /opt
      volumes:
        - name: data
          emptyDir: {}
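The only relevant difference between the two manifests is the commented-out restartPolicy: Always, which is what makes an init container a native sidecar (Kubernetes 1.28+). Status logic that ignores this field keeps counting the sidecar as a pending init container and reports "Init: x/y" forever. A minimal sketch of sidecar-aware status logic, using simplified stand-in structs rather than the real k8s.io/api/core/v1 types (the helper name isPodRunning is my own, not the dashboard's):

```go
package main

import "fmt"

// Simplified stand-ins for the corev1 pod spec/status fields involved
// (not the real k8s.io/api types).
type Container struct {
	Name          string
	RestartPolicy string // "Always" on an init container marks a native sidecar
}

type ContainerStatus struct {
	Name  string
	Ready bool
}

type Pod struct {
	InitContainers    []Container
	InitStatuses      []ContainerStatus // parallel to InitContainers
	ContainerStatuses []ContainerStatus
}

// isPodRunning treats a native sidecar (init container with
// restartPolicy: Always) like a regular container: it must be Ready,
// but once ready it must not hold the pod in the "Init" phase.
// Ordinary init containers are assumed to have completed by this point.
func isPodRunning(p Pod) bool {
	for i, c := range p.InitContainers {
		if c.RestartPolicy == "Always" && !p.InitStatuses[i].Ready {
			return false // sidecar not up yet
		}
	}
	for _, st := range p.ContainerStatuses {
		if !st.Ready {
			return false
		}
	}
	return true
}

func main() {
	p := Pod{
		InitContainers:    []Container{{Name: "logshipper", RestartPolicy: "Always"}},
		InitStatuses:      []ContainerStatus{{Name: "logshipper", Ready: true}},
		ContainerStatuses: []ContainerStatus{{Name: "myapp", Ready: true}},
	}
	fmt.Println(isPodRunning(p)) // sidecar ready + app ready: prints "true"
}
```

With logic like this, the pod above is reported as Running as soon as both the sidecar and the app container are ready, matching what kubectl shows.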
Anything else we need to know?
No response
What browsers are you seeing the problem on?
Microsoft Edge
Kubernetes Dashboard version
helm.sh/chart: kubernetes-dashboard-7.5.0
Kubernetes version
Client Version: v1.29.4 Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3 Server Version: v1.29.3
Dev environment
No response
Thanks for the detailed report and the YAML to reproduce; that will make this easier to fix. We will take care of it soon.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
wasn't this fixed in https://github.com/kubernetes/dashboard/pull/9483?
Yes, but it doesn't look like it's been released yet, unless I missed it.
Ye, it's been fixed. I have missed this issue, thanks for the info. Fix should be available in the latest release created a couple of days ago.
/close
@floreks: Closing this issue.
In response to this:
Ye, it's been fixed. I have missed this issue, thanks for the info. Fix should be available in the latest release created a couple of days ago.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.