autoscaler
[CA] [AWS examples] Add container securityContext
Hello,
Since some security parameters aren't configurable in PodSecurityContext, I introduced a container-level securityContext to add security best practices:
- the container is now immutable (file integrity guaranteed): readOnlyRootFilesystem
- rights the application doesn't need are removed: allowPrivilegeEscalation + capabilities
This follows the principle of least privilege; the configuration is recommended by scanners such as kubescape.
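For reference, a minimal sketch of what such a container-level securityContext can look like (the exact field values here are illustrative, not a verbatim copy of the diff):

securityContext:
  readOnlyRootFilesystem: true       # container filesystem is immutable
  allowPrivilegeEscalation: false    # the process cannot gain more privileges than its parent
  capabilities:
    drop:
      - ALL                          # drop Linux capabilities the application doesn't need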
Tested in my EKS 1.21 cluster with cluster-autoscaler-autodiscover.yaml
using container image v1.22. I don't see anything wrong in the logs.
The committers are authorized under a signed CLA.
- :white_check_mark: Damien Léger (c9ae6c70052b9403cb7aea3adc6d8cc691bcbe57)
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: damienleger
To complete the pull request process, please assign jeffwan after the PR has been reviewed.
You can assign the PR to them by writing /assign @jeffwan in a comment when ready.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
/area provider/aws
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
For PSS/restricted, a seccompProfile of type RuntimeDefault is also required.
https://kubernetes.io/docs/concepts/security/pod-security-standards/
I suggest you run kyverno apply against the manifest to check for any gaps.
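For illustration, a minimal sketch of the missing field (whether it goes at the pod or the container level is up to you; this is an assumption about the shape, not the exact change):

securityContext:
  seccompProfile:
    type: RuntimeDefault   # required by the restricted Pod Security Standard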
@joebowbeer hello, I've implemented your suggestions
@damienleger I applied the PSS/restricted policies to your branch and there are (only!) two failures:
kyverno apply <(kustomize build http://github.com/kyverno/policies//pod-security) -r \
<(wget -qO- https://raw.githubusercontent.com/damienleger/autoscaler/container_security_context/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml)
policy disallow-host-path -> resource kube-system/Deployment/cluster-autoscaler failed:
1. autogen-host-path: validation error: HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset. rule autogen-host-path failed at path /spec/template/spec/volumes/0/hostPath/
policy restrict-volume-types -> resource kube-system/Deployment/cluster-autoscaler failed:
1. autogen-restricted-volumes: Only the following types of volumes may be used: configMap, csi, downwardAPI, emptyDir, ephemeral, persistentVolumeClaim, projected, and secret.
It's still complaining about the ssl-certs volume and mount; I don't know if there is a way around this:
volumeMounts:
  - name: ssl-certs
    mountPath: /etc/ssl/certs/ca-certificates.crt # /etc/ssl/certs/ca-bundle.crt for Amazon Linux Worker Nodes
    readOnly: true
volumes:
  - name: ssl-certs
    hostPath:
      path: "/etc/ssl/certs/ca-bundle.crt"
@joebowbeer I see, nice. About this hostPath volume, the only ways around it I can think of right now are: a) mount an emptyDir volume at /etc/ssl/certs/ and download the CA bundle into it via an initContainer or some script at pod startup, or b) embed the CA bundle inside the docker image.
a) is okay I think but more complicated (a rough sketch is below); b) doesn't feel viable over time (the image can end up embedding deprecated CAs).
I think it's okay to leave the hostPath like this. After all, cluster-autoscaler is a kube-system workload. I'm a kyverno user myself btw, and I have exceptions for kube-system workloads and for workloads that handle monitoring and log shipping; it's not shocking to me.
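For the record, a rough sketch of option a), under illustrative assumptions (the init container name, image, and CA bundle URL are mine, not part of the PR):

volumes:
  - name: ssl-certs
    emptyDir: {}                        # replaces the hostPath volume
initContainers:
  - name: fetch-ca-bundle               # hypothetical init container
    image: curlimages/curl:8.5.0        # any image with curl and sh would do
    command:
      - sh
      - -c
      - curl -sSfo /certs/ca-certificates.crt https://curl.se/ca/cacert.pem
    volumeMounts:
      - name: ssl-certs
        mountPath: /certs
containers:
  - name: cluster-autoscaler
    volumeMounts:
      - name: ssl-certs
        mountPath: /etc/ssl/certs       # the main container only gets a read-only view
        readOnly: true

Fetching the bundle at pod startup avoids baking CAs into the image, at the cost of an extra moving part.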
@damienleger I agree. LGTM
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: damienleger, joebowbeer
Once this PR has been reviewed and has the lgtm label, please assign jaypipes for approval by writing /assign @jaypipes in a comment. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
hello @joebowbeer, is anything expected from me to get this PR merged? I'm confused by the process.
Hi, I don't know any more than the bot comment says.
It looks like @gjtempleton is assigned to review; once reviewed, @jaypipes should be assigned to approve.
I would try posting in #sig-autoscaling on Kubernetes Slack. All the listed owners are there.
Can you take a look?
/assign @jaypipes @gjtempleton
Thanks!
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: damienleger, gjtempleton, joebowbeer
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~cluster-autoscaler/cloudprovider/aws/OWNERS~~ [gjtempleton]
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.