aws-sandbox
Experiment with different Infra-as-Code design patterns, tools and the like using AWS environment resources
Project Summary:
This repository serves as an example project for experimenting with different "stacks" using Terramate, following generally accepted design practices.
Table of contents
- Project Summary
- Diagram of what we are building
- Available Technologies/Tools
- Project Notes
- Project Walkthrough
- Choice of Provisioning Methods
- Common Configurations
- Provisioning Method 1: Running from your local system
- Provisioning Method 2: Running within a built Docker container
- Project Exercises
- Using Fairwinds Pluto
- Karpenter Testing
- Migrate GP2 EBS Volume to a GP3 Volume using Snapshots
- Running Infracost and Pluralith
- Cleanup
Diagram of what we are building
(Architecture diagram)
Available Technologies/Tools:
CI Pipeline Related:
- Github Actions
- AquaSec TFsec
- Infracost
Infra-as-Code and Orchestration Related:
- Terraform
- Terraform Cloud
- Terramate
- Pluralith
Kubernetes Related:
- ContainerD
- Helm
- Karpenter
- IRSA using OIDC
- AWS Load Balancer Controller
- VPC CNI
- Amazon EBS CSI Driver
- External Snapshotter Controller
- Prometheus Operator Stack
- metrics-server
- KubeCost
AWS Services:
- VPC
- NAT Instance Gateways
- Identity and Access Management
- EKS - Managed Kubernetes Service
- EC2 Instances
- Launch Templates
- Autoscaling Groups
- Elastic Load Balancers
- KMS - Key Management Service
Project Notes:
Some of the components/services documented in the diagram have yet to be added. (See Available Technologies/Tools above.)
- They will be missing from the provisioned environment until they are added to the project.
When provisioning the "dev" stack (stacks/dev), backend state storage defaults to "remote" (Terraform Cloud).
- You will need to set the tfe_organization global variable in the stacks/config.tm.hcl file to your Organization ID.
- You can also opt to use "local" backend storage by setting the global variable isLocal to true in the stacks/dev/config.tm.hcl file. (A sketch of these globals follows these notes.)
We recommend using a sandbox or trial account (e.g., an A Cloud Guru Playground) when first using the project.
- This protects users from accidentally causing risk or issues in their existing environments/configurations.
- Using a sandbox account also prevents naming collisions with existing resources during provisioning.
There are plenty of opportunities to optimize the configuration in this project. (This was intentional!)
- The project is intended for testing sample infrastructure code, illustrating how you might structure your own project.
Those running an ARM CPU architecture (e.g., Apple's M1) might find it challenging to use this project.
- This is due to the current lack of compiled ARM binaries for some required tools and the lack of native emulation (Rosetta 2 is expected as part of macOS 13 Ventura).
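For reference, here is a minimal sketch of what those backend globals might look like; the exact contents of the config.tm.hcl files in this repository may differ:
# stacks/config.tm.hcl (sketch -- replace the placeholder with your Terraform Cloud Organization ID)
globals {
  tfe_organization = "<YOUR_TFC_ORGANIZATION_ID>"
}
# stacks/dev/config.tm.hcl (sketch -- switches backend state storage from Terraform Cloud to local)
globals {
  isLocal = true
}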
Project Walkthrough:
Choice of Provisioning Methods
- Method 1: Running from your local system (tested on macOS 10.15 Catalina)
- Method 2: Running within a custom Docker image
Binary Prerequisites:
Required for Method 1
- git (v2.x)
- jq (any version)
- make (any version)
- aws-cli (v2.7)
- terramate (v0.1.35+)
- terraform (v1.2.9+)
- kubectl (v1.19+)
Required for Method 2
- docker (v20.10+)
Common Configurations (necessary for either method)
Set your AWS variables on your local system
export AWS_DEFAULT_REGION='us-west-2'
export AWS_ACCESS_KEY_ID='<PASTE_YOUR_ACCESS_KEY_ID_HERE>'
export AWS_SECRET_ACCESS_KEY='<PASTE_YOUR_SECRET_ACCESS_KEY_HERE>'
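Before provisioning, it's worth a quick sanity check that these credentials resolve to the account you expect:
# Confirms which AWS account/identity the CLI is authenticated as
aws sts get-caller-identity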
Provisioning Method 1: Running from your local system
Generate Terraform code and Provision the Terramate Stacks
# Terramate Generate
terramate generate
git add -A
# Terraform Provisioning
cd stacks/local
terramate run -- terraform init
terramate run -- terraform apply
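If you'd like to preview which stacks Terramate will operate on (and in what order) before applying, you can list them first:
# Lists the stacks reachable from the current directory, in execution order
terramate list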
EKS Cluster Configuration:
# Add the EKS cluster config/credentials (change the cluster name if necessary!)
aws eks update-kubeconfig --name ex-eks
# Edit the kube config to connect to the cluster (appends the env block that belongs under the exec section of the "users" entry)
cat <<EOT >> ~/.kube/config
      env:
      - name: AWS_ACCESS_KEY_ID
        value: ${AWS_ACCESS_KEY_ID}
      - name: AWS_SECRET_ACCESS_KEY
        value: ${AWS_SECRET_ACCESS_KEY}
EOT
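Once the kubeconfig is updated, a quick check that the cluster is reachable:
# Should list the EKS worker nodes once authentication works
kubectl get nodes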
Provisioning Method 2: Running within a built Docker container
Build Image and Start Container
make build && make start
Exec into Docker Container Shell
make exec
Generate Terraform code and Provision the Terramate Stacks
# Source Script Functions
source functions.sh
# Example: Changing Directory into the "Local" Stack
cd /project/stacks/local
# Terramate Commands (Generate/Validate/Apply)
tm-apply
Configure the Kubernetes CLI (config/credentials)
eks-creds
Project Exercises:
Using Fairwinds Pluto:
Check for Deprecated or Removed API Versions
pluto detect-helm -o wide -t k8s=v1.25.0
pluto detect-api-resources -o wide -t k8s=v1.25.0
Karpenter Testing:
Scale the Deployment causing Karpenter to Add/Scale-Up Nodes
kubectl scale deployment inflate --replicas 2
Scale the Deployment causing Karpenter to Remove/Scale-Down Nodes
kubectl scale deployment inflate --replicas 0
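To observe Karpenter reacting to the scaling events, you can watch the node list and tail the controller logs (the namespace and label selector below assume a typical Helm install and may differ in this project):
# Watch nodes being added/removed (Ctrl-C to stop)
kubectl get nodes --watch
# Tail the Karpenter controller logs (selector may vary by chart version)
kubectl logs -f -n karpenter -l app.kubernetes.io/name=karpenter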
Migrate GP2 EBS Volume to a GP3 Volume using Snapshots
Create an EC2 Snapshot from an existing Volume (example using KubeCost)
# Returns the Persistent Volume (PV) name backing the KubeCost PVC
PVC_ID=$(kubectl -n kubecost get pv -o json | jq -r '.items[1].metadata.name')
# Note: If the following command doesn't return a value for VOLUME_ID, the volume is likely already managed by the
# EBS CSI driver (the new default gp3 StorageClass). If so, use the "alternate" command below to continue the exercise.
# Use this for gp2 (in-tree) volume types
VOLUME_ID=$(kubectl get pv $PVC_ID -o jsonpath='{.spec.awsElasticBlockStore.volumeID}' | rev | cut -d'/' -f 1 | rev)
# Alternate command for gp3 (CSI-managed) volume types
VOLUME_ID=$(kubectl get pv $PVC_ID -o jsonpath='{.spec.csi.volumeHandle}' | rev | cut -d'/' -f 1 | rev)
# Creates the Snapshot from the Volume / Persistent Volume
SNAPSHOT_RESPONSE=$(aws ec2 create-snapshot --volume-id $VOLUME_ID --tag-specifications 'ResourceType=snapshot,Tags=[{Key="ec2:ResourceTag/ebs.csi.aws.com/cluster",Value="true"}]')
Wait for the Snapshot to Complete (re-run this until the state reports "completed")
aws ec2 describe-snapshots --snapshot-ids $(echo "${SNAPSHOT_RESPONSE}" | jq -r '.SnapshotId')
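Alternatively, the AWS CLI ships a built-in waiter that blocks until the snapshot finishes:
# Polls until the snapshot state is "completed" (the waiter errors out if it takes too long)
aws ec2 wait snapshot-completed --snapshot-ids $(echo "${SNAPSHOT_RESPONSE}" | jq -r '.SnapshotId')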
Create the VolumeSnapshot Resources to Provision a Volume from the Snapshot
cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: imported-aws-snapshot-content # <-- Make sure to use a unique name here
spec:
  volumeSnapshotRef:
    kind: VolumeSnapshot
    name: imported-aws-snapshot
    namespace: kubecost
  source:
    snapshotHandle: $(echo "${SNAPSHOT_RESPONSE}" | jq -r '.SnapshotId')
  driver: ebs.csi.aws.com
  deletionPolicy: Delete
  volumeSnapshotClassName: ebs-csi-aws
EOF
cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: imported-aws-snapshot
  namespace: kubecost
spec:
  volumeSnapshotClassName: ebs-csi-aws
  source:
    volumeSnapshotContentName: imported-aws-snapshot-content # <-- Here is the reference to the Snapshot by name
EOF
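Before creating the PVC, you can verify that the two snapshot objects bound to each other and report ready:
# READYTOUSE should show true once the objects are bound
kubectl -n kubecost get volumesnapshot imported-aws-snapshot
kubectl get volumesnapshotcontent imported-aws-snapshot-content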
Create the Persistent Volume Claim from the newly created VolumeSnapshot
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: imported-aws-snapshot-pvc
  namespace: kubecost
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3
  resources:
    requests:
      storage: 32Gi
  dataSource:
    name: imported-aws-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF
Patch the Deployment with the new Volume Claim
kubectl -n kubecost patch deployment kubecost-cost-analyzer --patch '{"spec": {"template": {"spec": {"volumes": [{"name": "persistent-configs", "persistentVolumeClaim": { "claimName": "imported-aws-snapshot-pvc"}}]}}}}'
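A couple of quick checks that the rollout picked up the restored volume:
# The PVC should report Bound, and the patched Deployment should roll out cleanly
kubectl -n kubecost get pvc imported-aws-snapshot-pvc
kubectl -n kubecost rollout status deployment kubecost-cost-analyzer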
Running Infracost and Pluralith:
Run Infracost for Cost Estimation (Requires an Account)
# Set Infracost Credentials
export INFRACOST_API_KEY="<INFRACOST_API_KEY_HERE>"
export INFRACOST_ENABLE_DASHBOARD=true
# Generate the Cost Usage Report
terramate run -- infracost breakdown --path . --usage-file ./infracost-usage.yml --sync-usage-file
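If you want a machine-readable report (e.g., for CI or diffing between runs), Infracost can also emit JSON:
# Writes the breakdown as JSON instead of printing a table
terramate run -- infracost breakdown --path . --usage-file ./infracost-usage.yml --format json --out-file infracost.json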
Run Pluralith for Generated Diagrams (Requires an Account)
# Make the Infracost binary available to Pluralith (this path may differ on your system)
export PATH=$PATH:/root/.linuxbrew/Cellar/infracost/0.10.13/bin
# Set Pluralith Credentials
export PLURALITH_API_KEY="<PLURALITH_API_KEY_HERE>"
export PLURALITH_PROJECT_ID="<PLURALITH_PROJECT_ID_HERE>"
# Run Pluralith Init & Plan
terramate run -- pluralith init --api-key $PLURALITH_API_KEY --project-id $PLURALITH_PROJECT_ID
terramate run -- pluralith run plan --title "Stack" --show-changes=false --show-costs=true --cost-usage-file=infracost-usage.yml
Cleanup
Destroy Provisioned Infrastructure:
terramate run --reverse -- terraform destroy
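Note that the EC2 snapshot created during the migration exercise is not managed by Terraform, so remove it manually once you're finished:
# Delete the snapshot created earlier (it is not tracked in Terraform state)
aws ec2 delete-snapshot --snapshot-id $(echo "${SNAPSHOT_RESPONSE}" | jq -r '.SnapshotId')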