livenessprobe
use nonroot user in Dockerfile
There is a security warning produced by twistlock that this livenessprobe image should use a nonroot user. However, if I change the Dockerfile as follows (https://github.com/andyzhangx/livenessprobe/commit/42fc3281e9343eebbf300a6a42394f8815d0c13c), the liveness probe eventually fails. I am not sure what the right fix is to make this image use a nonroot user:
FROM gcr.io/distroless/static:nonroot
LABEL maintainers="Kubernetes Authors"
LABEL description="CSI Driver liveness probe"
ARG binary=./bin/livenessprobe
COPY ${binary} /livenessprobe
USER nonroot:nonroot
ENTRYPOINT ["/livenessprobe"]
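As a sanity check, here is a tiny Go sketch (not part of the image or the livenessprobe code) that can be compiled in and run inside the container to confirm the USER nonroot:nonroot directive takes effect; on gcr.io/distroless images the nonroot user is uid/gid 65532:

package main

import (
	"fmt"
	"os"
)

func main() {
	// On gcr.io/distroless images the "nonroot" user is uid 65532, gid 65532.
	fmt.Printf("running as uid=%d gid=%d\n", os.Getuid(), os.Getgid())
}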
Events from the failing pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m23s default-scheduler Successfully assigned kube-system/csi-smb-node-z54xp to aks-agentpool-90924120-vmss000006
Normal Pulling 3m23s kubelet Pulling image "andyzhangx/livenessprobe:v2.12.0"
Normal Created 3m22s kubelet Created container node-driver-registrar
Normal Created 3m22s kubelet Created container liveness-probe
Normal Started 3m22s kubelet Started container liveness-probe
Normal Pulled 3m22s kubelet Container image "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1" already present on machine
Normal Pulled 3m22s kubelet Successfully pulled image "andyzhangx/livenessprobe:v2.12.0" in 858.088069ms (858.101669ms including waiting)
Normal Started 3m22s kubelet Started container node-driver-registrar
Warning Unhealthy 23s (x5 over 2m23s) kubelet Liveness probe failed: Get "http://10.224.0.255:29643/healthz": dial tcp 10.224.0.255:29643: connect: connection refused
Normal Killing 23s kubelet Container smb failed liveness probe, will be restarted
Normal Pulled 22s (x2 over 3m22s) kubelet Container image "gcr.io/k8s-staging-sig-storage/smbplugin:canary" already present on machine
Normal Created 22s (x2 over 3m22s) kubelet Created container smb
Normal Started 22s (x2 over 3m22s) kubelet Started container smb
root@andydev:~/go/src/github.com/kubernetes-csi/livenessprobe# k logs csi-smb-node-z54xp -n kube-system liveness-probe
W0206 13:22:14.691040 1 connection.go:234] Still connecting to unix:///csi/csi.sock
W0206 13:22:24.690443 1 connection.go:234] Still connecting to unix:///csi/csi.sock
W0206 13:22:34.691048 1 connection.go:234] Still connecting to unix:///csi/csi.sock
W0206 13:22:44.691010 1 connection.go:234] Still connecting to unix:///csi/csi.sock
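For debugging, a minimal Go sketch (not part of the livenessprobe code, and assuming the same /csi/csi.sock endpoint shown in the logs above) that does what the probe does on startup, namely dial the driver's unix socket. Running it as the nonroot user inside the container helps distinguish a permission problem from a missing socket:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Same endpoint the sidecar reports in the "Still connecting" messages above.
	const socketPath = "/csi/csi.sock"

	conn, err := net.DialTimeout("unix", socketPath, 5*time.Second)
	if err != nil {
		// A permission-denied error here means the nonroot user cannot open the
		// socket; a "no such file or directory" error means the driver has not
		// created it (or it is mounted elsewhere).
		fmt.Fprintf(os.Stderr, "dial %s as uid %d failed: %v\n", socketPath, os.Getuid(), err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("dial %s as uid %d succeeded\n", socketPath, os.Getuid())
}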
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Please check how the CSI driver exposes its socket - I guess the nonroot user cannot access it. I am open to suggestions; I just don't want to make the socket accessible to anyone on the system.
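One hedged option, sketched below in Go (this is not the project's current code; the gid 65532 is an assumption matching the distroless nonroot group): the CSI driver could keep the socket owned by root but hand it to a dedicated group with mode 0660, so only a sidecar running with that supplemental group can connect, while other users on the node stay locked out.

package main

import (
	"log"
	"net"
	"os"
)

func main() {
	const (
		socketPath = "/csi/csi.sock"
		nonrootGID = 65532 // assumption: gid of the distroless "nonroot" group
	)

	// Remove a stale socket left over from a previous run, then listen.
	_ = os.Remove(socketPath)
	lis, err := net.Listen("unix", socketPath)
	if err != nil {
		log.Fatalf("listen on %s: %v", socketPath, err)
	}
	defer lis.Close()

	// Owner stays root; only the nonroot group gets read/write. No world
	// permissions, so the socket is not accessible to anyone on the system.
	if err := os.Chown(socketPath, 0, nonrootGID); err != nil {
		log.Fatalf("chown %s: %v", socketPath, err)
	}
	if err := os.Chmod(socketPath, 0o660); err != nil {
		log.Fatalf("chmod %s: %v", socketPath, err)
	}

	log.Printf("serving CSI socket %s with mode 0660, gid %d", socketPath, nonrootGID)
	// ...register the gRPC CSI services on lis and serve here...
	select {} // block forever in this sketch
}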
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.