gke-terraform-example
A sample web app deployment on Google Kubernetes Engine
Google Kubernetes Engine full-stack example
An example of deploying a web app on GKE. It consists of:
- GKE cluster with a single node pool
  - VPC-native, private and using container-native load balancing
  - access to the cluster master is limited to a single whitelisted IP: see the `K8S_MASTER_ALLOWED_IP` env variable below
- Cloud SQL Postgres instance with private networking
  - connects to GKE through a private IP, ensuring traffic is never exposed to the public internet
- Cloud Storage and Cloud CDN for serving static assets
- Cloud Load Balancing routing `/api/*` to GKE and the rest to the static assets bucket
  - implemented in a bit of a roundabout way since `ingress-gce` lacks support for backend buckets: we're passing the GKE backend's name in Terraform variables and attaching it to our default URL map (see the sketch after this list)
- Cloud DNS for domain management
  - check `ROOT_DOMAIN_NAME_<ENV>` below
- Terraform-defined infrastructure
  - using `kubectl` directly instead of the Kubernetes Terraform provider, as the latter is missing an Ingress type, among others
- CircleCI pipeline
  - push to any non-master branch triggers `dev` deployment & push to `master` branch triggers `test` deployment
  - `prod` deployment is triggered by an additional approval step in the CircleCI UI
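To make the load-balancing workaround concrete, here is a rough Terraform sketch of attaching a named GKE backend service to a URL map alongside a CDN-backed bucket. All identifiers below (`gke_backend_service_name`, `static_assets`, and so on) are illustrative placeholders, not the repo's actual names:

```hcl
# Illustrative sketch only: names and settings are placeholders, not this repo's actual config.

# Name of the backend service that ingress-gce creates for the GKE Ingress;
# it is looked up by name and passed in as a variable.
variable "gke_backend_service_name" {
  type = string
}

data "google_compute_backend_service" "api" {
  name = var.gke_backend_service_name
}

# Bucket + backend bucket serving the static assets through Cloud CDN
resource "google_storage_bucket" "static_assets" {
  name     = "example-static-assets" # placeholder
  location = "EU"
}

resource "google_compute_backend_bucket" "static_assets" {
  name        = "static-assets"
  bucket_name = google_storage_bucket.static_assets.name
  enable_cdn  = true
}

# URL map: /api/* goes to the GKE backend service, everything else to the bucket
resource "google_compute_url_map" "default" {
  name            = "web"
  default_service = google_compute_backend_bucket.static_assets.self_link

  host_rule {
    hosts        = ["*"]
    path_matcher = "main"
  }

  path_matcher {
    name            = "main"
    default_service = google_compute_backend_bucket.static_assets.self_link

    path_rule {
      paths   = ["/api/*"]
      service = data.google_compute_backend_service.api.self_link
    }
  }
}
```

The GKE backend service only exists once `ingress-gce` has created it, which is why its name is fed in via a variable rather than referenced as a Terraform resource.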

Setup
The following steps need to be completed manually before automation kicks in:
- Create a new Google Cloud project for each environment
- For each Google Cloud project:
  - set up a Cloud Storage bucket for storing remote Terraform state (see the backend sketch after this list)
  - set up an IAM service account to be used by Terraform. Attach the `Editor` and `Compute Network Agent` roles to the created account
- Set environment variables in your CircleCI project (replacing `ENV` with an uppercase `DEV`, `TEST` or `PROD`):
  - `GOOGLE_PROJECT_ID_<ENV>`: env-specific Google project id
  - `GCLOUD_SERVICE_KEY_<ENV>`: env-specific service account key
  - `DB_PASSWORD_<ENV>`: env-specific password for the Postgres user that the application uses
  - `ROOT_DOMAIN_NAME_<ENV>`: env-specific root domain name, e.g. `dev.example.com`
  - `K8S_MASTER_ALLOWED_IP`: IP from which to access the cluster master's public endpoint, i.e. the IP you run `kubectl` from (read more)
    - in CircleCI we temporarily whitelist the test host IP in order to run `kubectl`
- Enable the following Google Cloud APIs:
  - `cloudresourcemanager.googleapis.com`
  - `compute.googleapis.com`
  - `container.googleapis.com`
  - `containerregistry.googleapis.com`
  - `dns.googleapis.com`
  - `servicenetworking.googleapis.com`
  - `sqladmin.googleapis.com`
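For reference, each environment's Terraform configuration can point at its state bucket with a `gcs` backend block roughly like the following; the bucket name and prefix are placeholders, not values taken from this repo:

```hcl
# e.g. somewhere under terraform/dev/ (placeholder values)
terraform {
  backend "gcs" {
    bucket = "my-dev-project-terraform-state" # the state bucket created above
    prefix = "state"                          # optional path prefix inside the bucket
  }
}
```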
You might also want to acquire a domain and update your domain registration to point to Google Cloud DNS name servers.
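Assuming the Cloud DNS zone is created by Terraform (as the feature list above suggests), the name servers to enter at your registrar can be read straight off the managed zone. A minimal sketch with placeholder names:

```hcl
# Placeholder names; dns_name must end with a trailing dot.
resource "google_dns_managed_zone" "root" {
  name     = "root-zone"
  dns_name = "dev.example.com."
}

# Name servers to configure at your domain registrar
output "dns_name_servers" {
  value = google_dns_managed_zone.root.name_servers
}
```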
Manual deployment
You can also sidestep CI and deploy locally:
- Install `terraform`, `gcloud` and `kubectl`
- Log in to Google Cloud: `gcloud auth application-default login`
- Update infra: `cd terraform/dev && terraform init && terraform apply`
- Build and push a Docker image to Google Container Registry:

  ```sh
  cd app
  export PROJECT_ID="$(gcloud config get-value project -q)"
  docker build -t gcr.io/${PROJECT_ID}/gke-app:v1 .
  gcloud docker -- push gcr.io/${PROJECT_ID}/gke-app:v1
  ```

- Authenticate `kubectl`: `gcloud container clusters get-credentials $(terraform output cluster_name) --zone=$(terraform output cluster_zone)`
- Render the Kubernetes config template: `terraform output k8s_rendered_template > k8s.yml`
- Update Kubernetes resources: `kubectl apply -f k8s.yml`
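The `k8s_rendered_template` output used in the last two steps is simply a Terraform-rendered Kubernetes manifest. A hypothetical sketch of producing such an output with `templatefile`; the template path and variable names are placeholders, and the repo's actual implementation may differ:

```hcl
# Hypothetical sketch; variable names and the template path are placeholders.
variable "project_id" { type = string }
variable "db_password" { type = string }
variable "db_private_ip" { type = string }

output "k8s_rendered_template" {
  sensitive = true # the rendered manifest may embed DB credentials
  value = templatefile("${path.module}/templates/k8s.yml.tpl", {
    image       = "gcr.io/${var.project_id}/gke-app:v1"
    db_host     = var.db_private_ip
    db_password = var.db_password
  })
}
```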
See here for how to connect to the Cloud SQL instance with a local `psql` client.
Further work
- Cloud SQL high availability & automated backups
- regional GKE cluster
- GKE autoscaling
- Cloud Armor DDoS protection
- SSL
- tune down service account privileges
- possible CI improvements:
  - add a step to clean up old container images from GCR
  - prompt for extra approval on infra changes in master
  - don't rebuild the Docker image when promoting from `test` to `prod`