# Advanced Kubernetes Security Workshop
This repository contains the sample code and scripts for Kubernetes Security Best Practices, which is a guided workshop to showcase some of Kubernetes and GKE's best practices with respect to security.
## Setup
- The sample code and scripts use the following environment variables. Set these to your own values:

  ```sh
  PROJECT_ID="..."
  ```
- Configure the project:

  ```sh
  ./bin/00-configure.sh
  ```
- Create a GKE cluster, which will run as the attached service account:

  ```sh
  ./bin/01-create-cluster.sh
  ```
- Show that the GKE nodes are not publicly accessible:

  ```sh
  gcloud compute instances list --project $PROJECT_ID
  ```
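To make the missing external IPs explicit, you can project just the NAT IP column. This is a sketch using standard `gcloud --format` field-path syntax; the column should be empty for private nodes:

```sh
# Show only instance names and external IPs; private nodes have no natIP.
gcloud compute instances list \
  --project $PROJECT_ID \
  --format="table(name, networkInterfaces[].accessConfigs[].natIP)"
```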
- SSH into the bastion host:

  ```sh
  gcloud compute ssh my-bastion \
    --project $PROJECT_ID \
    --zone us-central1-b
  ```
- Install the `kubectl` command line tool:

  ```sh
  sudo apt-get -yqq install kubectl
  ```
- Authenticate to talk to the GKE cluster:

  ```sh
  gcloud container clusters get-credentials my-cluster \
    --region us-central1
  ```
- Explore the cluster:

  ```sh
  kubectl get po -n kube-system
  ```
## Audit Logging
- Enable system-level audit logs:

  ```sh
  curl -sf https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/os-audit/cos-auditd-logging.yaml | kubectl apply -f -
  ```

  Events will show up as "linux-auditd" events in Cloud Ops under "GCE VM Instance".
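Once events start flowing, you can spot-check them from Cloud Shell with `gcloud logging read`. The filter string below is an assumption and may need adjusting to match the exact log names in your project:

```sh
# Hypothetical filter -- fetch a few recent auditd events from GCE instances.
gcloud logging read \
  'resource.type="gce_instance" AND logName:"linux-auditd"' \
  --project $PROJECT_ID \
  --limit 5
```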
- To disable system audit logs:

  ```sh
  kubectl delete ns cos-auditd
  ```
## Network Policy
- Deploy and expose an nginx container:

  ```sh
  kubectl create deployment nginx --image nginx
  kubectl expose deployment nginx --port 80
  ```
- Exec into a shell container and show that nginx is accessible:

  ```sh
  kubectl run busybox --rm -it --image busybox /bin/sh
  # inside the shell:
  wget --spider --timeout 2 nginx
  ```
- Create a network policy that restricts access to the nginx pod:

  ```yaml
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: access-nginx
  spec:
    podSelector:
      matchLabels:
        app: nginx
    ingress:
    - from:
      - podSelector:
          matchLabels:
            can-nginx: "true"
  ```

  Apply this with `kubectl apply -f`.
- Exec into a shell container and show that nginx is no longer accessible:

  ```sh
  kubectl run busybox --rm -it --image busybox /bin/sh
  # inside the shell:
  wget --spider --timeout 2 nginx
  ```
- Start a pod with the label and try again:

  ```sh
  kubectl run busybox --rm -it --labels "can-nginx=true" --image busybox /bin/sh
  # inside the shell:
  wget --spider --timeout 2 nginx
  ```
- Delete the nginx deployment, service, and network policy:

  ```sh
  kubectl delete deployment nginx
  kubectl delete svc nginx
  kubectl delete networkpolicy access-nginx
  ```
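The `access-nginx` policy above only guards pods labeled `app: nginx`. A common companion (a standard Kubernetes pattern, not part of this workshop's scripts) is a namespace-wide default-deny ingress policy, so that new pods are unreachable unless a policy explicitly allows traffic:

```yaml
# Deny all ingress traffic to every pod in the namespace unless another
# NetworkPolicy explicitly allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```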
## Pod Security Policy
- Create a PodSecurityPolicy that prevents pods from running as root:

  ```yaml
  apiVersion: policy/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: restrict-root
  spec:
    privileged: false
    runAsUser:
      rule: MustRunAsNonRoot
    seLinux:
      rule: RunAsAny
    fsGroup:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - '*'
  ```

  Apply this with `kubectl apply -f`.
- Update the cluster to start enforcing the PodSecurityPolicy:

  ```sh
  gcloud beta container clusters update my-cluster \
    --project $PROJECT_ID \
    --region us-central1 \
    --enable-pod-security-policy
  ```

  Note: this process can take many minutes on an existing cluster.
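To see the policy in action once enforcement finishes, try a pod that does not declare a non-root user. With `restrict-root` in place it should be refused or fail to start (the exact error message varies by Kubernetes version); the manifest below is an illustrative sketch:

```yaml
# This pod sets no runAsUser/runAsNonRoot; under the restrict-root policy it
# should be rejected or fail to start, since busybox runs as root by default.
apiVersion: v1
kind: Pod
metadata:
  name: root-demo
spec:
  containers:
  - name: root-demo
    image: busybox
    command: ["sh", "-c", "sleep 1h"]
```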
## Container Security Context
- First, demonstrate that a container will run as root unless otherwise specified:

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo
  spec:
    containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "sleep 1h"]
  ```

  Apply this with `kubectl apply -f`.
- Show that the container is running as root:

  ```sh
  kubectl exec -it demo -- /bin/sh
  # inside the shell:
  ps
  # ...
  id
  # uid=0(root) gid=0(root) groups=10(wheel)
  touch foo # succeeds
  ```

  ```sh
  kubectl delete po demo
  ```
- Create a container with a securityContext:

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo
  spec:
    securityContext:
      runAsUser: 1000
      runAsGroup: 2000
      fsGroup: 3000
    containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "sleep 1h"]
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
  ```

  Apply this with `kubectl apply -f`.
- Show that the container is running as an unprivileged user:

  ```sh
  kubectl exec -it demo -- /bin/sh
  # inside the shell:
  ps
  # ...
  id
  # uid=1000 gid=2000
  touch foo
  # touch: foo: Read-only file system
  ```

  ```sh
  kubectl delete po demo
  ```
- Create a container with apparmor, seccomp, and selinux options:

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo
    annotations:
      seccomp.security.kubernetes.io/demo: runtime/default
      container.apparmor.security.kubernetes.io/demo: runtime/default
  spec:
    securityContext:
      runAsUser: 1000
      runAsGroup: 2000
      fsGroup: 3000
      seLinuxOptions:
        level: "s0:c123,c456"
    containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "sleep 1h"]
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
  ```

  Apply this with `kubectl apply -f`. Delete it when you're done:

  ```sh
  kubectl delete po demo
  ```
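Note that annotation-based seccomp configuration is the legacy mechanism. On Kubernetes 1.19 and later, seccomp is a first-class API field, so the runtime/default seccomp annotation can be expressed directly in the pod or container `securityContext` instead:

```yaml
# Field-based equivalent of the runtime/default seccomp annotation (1.19+).
securityContext:
  seccompProfile:
    type: RuntimeDefault
```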
## Sandbox
- Deploy under gVisor:

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo
  spec:
    runtimeClassName: gvisor
    securityContext:
      runAsUser: 1000
      runAsGroup: 2000
    containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "sleep 1h"]
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
  ```

  Apply this with `kubectl apply -f`.
- Show that the container is running under gVisor:

  ```sh
  kubectl get po demo -o yaml
  # runtimeClassName should be gvisor
  kubectl delete po demo
  ```
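Another quick check: gVisor implements its own user-space kernel, so kernel-facing commands inside the sandbox report gVisor rather than the host. For example, while the demo pod is still running, `dmesg` inside it typically prints gVisor's boot banner instead of the host kernel's ring buffer (output varies by version):

```sh
# Expect gVisor startup messages rather than host kernel logs.
kubectl exec demo -- dmesg
```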
## Workload Identity
Suppose I want to give a Kubernetes service account permissions to talk to a Google Cloud API. I can do this using Workload Identity!
Note that IAM is eventually consistent, so permission changes on the GCP service account will not be immediately available to pods.
- Ensure the `--identity-namespace` flag was passed when creating the cluster.
- Create a Google service account, which will be mapped to a Kubernetes Service Account shortly:

  ```sh
  gcloud iam service-accounts create my-gcp-sa \
    --project $PROJECT_ID
  ```
- Give the Google service account the ability to mint tokens:

  ```sh
  gcloud projects add-iam-policy-binding $PROJECT_ID \
    --role roles/iam.serviceAccountTokenCreator \
    --member "serviceAccount:my-gcp-sa@${PROJECT_ID}.iam.gserviceaccount.com"
  ```
- Give the Google service account viewer permissions (so we can test them later inside a pod):

  ```sh
  gcloud projects add-iam-policy-binding $PROJECT_ID \
    --role roles/viewer \
    --member "serviceAccount:my-gcp-sa@${PROJECT_ID}.iam.gserviceaccount.com"
  ```
- Create a Kubernetes Service Account:

  ```sh
  kubectl create serviceaccount my-k8s-sa
  ```
- Allow the KSA to use the GSA:

  ```sh
  gcloud iam service-accounts add-iam-policy-binding \
    --project $PROJECT_ID \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:${PROJECT_ID}.svc.id.goog[default/my-k8s-sa]" \
    my-gcp-sa@${PROJECT_ID}.iam.gserviceaccount.com
  ```

  ```sh
  kubectl annotate serviceaccount my-k8s-sa \
    iam.gke.io/gcp-service-account=my-gcp-sa@${PROJECT_ID}.iam.gserviceaccount.com
  ```
- Deploy a pod with the attached service account:

  ```sh
  kubectl run -it --rm \
    --image gcr.io/google.com/cloudsdktool/cloud-sdk:slim \
    --serviceaccount my-k8s-sa \
    demo
  # inside the pod:
  gcloud auth list
  gcloud compute instances list
  gcloud compute instances create foo --zone us-central1-b # fails
  ```
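You can also confirm the mapped identity from inside the pod by querying the GCE metadata server, which Workload Identity intercepts. The endpoint below is the standard metadata path; with the mapping in place it should report the GSA email rather than the node's service account:

```sh
# Run inside the demo pod; expect my-gcp-sa@<project>.iam.gserviceaccount.com.
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"
```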