= Symfony application on GKE
:author: Your Name
:email: your@email
:revnumber: 0.3
:revdate: 2021-10-15
:revremark:
:version-label!:
:sectnums:
:toc:
:toclevels: 3
:imagesdir: docs/architecture-images
:source-highlighter: highlightjs
:highlightjsdir: ../github/highlight

// Unfortunately GitHub doesn't support include statements
ifndef::env-github[]
include::docs/README.config.adoc[]
endif::[]
ifdef::env-github[]
// Copy the included section
:GCP_PROJECT: myproject-123456
:GCP_REGION: us-central1
:GCP_ZONE: us-central1-c
:PROJECT: myproject
:GITHUB_REPO: myproject
:GITHUB_ACCOUNT: myorganization
:WEB_URL: myproject.com
:GKE_CLUSTER: myproject
endif::[]
== Architecture
=== What is this for?
This is a recipe for deploying a Symfony application on the Google Kubernetes Engine. It includes all the configuration files and build scripts.
It is based on the implementation notes of a recent project. I tidied them up so that they could be shared. Hope this helps.
How to use it:

- Download the repo
- Prepare your local and remote infrastructure as explained on this page
- Install the root `composer.json` dependencies
- Customize `sf/composer.json` if needed and run `composer install`
- Adapt the `conf/env` files to your environments
- Adapt the `docs/README.config.adoc` file to your project configuration
- Start developing your Symfony application
=== Principles
Key principles underlying this design:
. Adhere to the 12-factor principles
. Identical application code deployed in local and remote environments
. Near-identical infrastructure code deployed in local and remote environments
. Ready to deploy on GKE (Google Kubernetes Engine)
=== Assumptions
. Code repository in a private GitHub repo (the code is easy to adapt if your code sits somewhere else)
. A web reverse proxy has been deployed in the GKE cluster, behind a service of type LoadBalancer (see https://github.com/ericjacolin/apache-proxy-k8s[this recipe] for deploying an Apache+Letsencrypt web server)
=== Architecture overview
==== Local architecture

image::architecture-local.png[]

Kubernetes:

- Kubernetes cluster on Minikube, with a VirtualBox VM back-end
- Includes its own Docker environment
- The data volumes on the host are mounted on the `/hosthome` VM location, where they are visible to the Kubernetes cluster. From there they can be mounted as persistent volumes onto the container
- Connects to a MySQL database on the host, external to the cluster
The application manages two types of content files:
- Public files, served directly by the web proxy server
- Private files, subject to access control, served by the Symfony application
- File storage abstraction using Flysystem, in `local` mode

==== Remote architecture

image::architecture-remote.png[]
Kubernetes:
- GKE (Google Kubernetes Engine) Autopilot cluster
- GCP Docker image registry
- Permanent storage on Google Cloud Storage buckets, public buckets for public files, private buckets for private files
- Connects to a Google Cloud SQL (MySQL) database
Application content files:
- Public files are served directly by the web proxy server, proxying to Cloud Storage buckets API (public buckets)
- Private files are served by the web application from private buckets
- File storage abstraction using Flysystem, in `gcloud` (Cloud Storage) mode
We use Cloud Shell to build and deploy releases, with PHP Deployer scripts:
- Pull project files from Github repo
- Build Docker images, push images to the GCP Registry
- An init container pulls the Symfony files from GitHub and builds the Symfony application, calling `composer install`
- Update GKE deployment manifest with new image tag
==== Identical code
With this recipe, we have identical Symfony application code and Kubernetes services, deployments and cronjobs definitions across all environments, local as well as remote.
All differences between environments are reflected in project-level `.env` configuration files.
==== Limitations
The architecture is suited to a relatively simple web application with relatively modest traffic and SLAs, or an MVP.
These limitations are deliberate for a simple project and can easily be lifted as needed, as follows.
Here we assume that an Apache server in reverse proxy mode is deployed on a free tier Compute Engine VM. For bigger sites one would typically use HTTPS Load Balancers, which are expensive.

[cols="3*", options="header"]
|===
|Limitation |Rationale |How to extend
|Sessions stored in the container |Single pod so no need to implement session affinity |Implement session affinity in the load balancer or reverse proxy + Alternatively store session information in Google Cloud Memorystore (managed Redis service)
|Symfony logs stored locally |Monolog configured to send emails at a certain alert level |Send Symfony logs to Stackdriver
|Partial CI/CD |Deployment by manual execution of a Deployer script in Cloud Shell + (Github web hooks cannot access Cloud Shell) |Deploy Jenkins on GCP + Use Cloud Source Repository instead of Github
|No test automation in the deployment |Functional tests are executed locally prior to committing |Add test tasks to the Deployer script
|Single pod used for web and batch |Load on web pod can accommodate batch jobs |Deploy a separate pod dedicated to batch jobs
|Symfony Mailer does not yet support multiple asynchronous transports
|Limitation of the current Mailer version; expected to be fixed soon (https://github.com/symfony/symfony/issues/35750[Issue]) +
For low volumes a single transport suffices
|
|===
=== Environments
Environments are defined by two meta-parameters:

- HOST_ENV:
** `local` (developer's laptop)
** `remote` (GCP)
- APP_ENV:
** any name: dev, master, prod, oat, etc.
** avoid reusing the same name in both a local and a remote environment, since Symfony will use override configuration files based on the APP_ENV name; those overrides are likely to differ between a local and a remote deployment
Symfony configuration `.env` files are named using these two meta-parameters, as `.env.{APP_ENV}.{HOST_ENV}`, for example `.env.oat.remote`.

These `.env` files contain all environment parameters needed by Docker, PHP Deployer or Symfony.
They contain all environment-specific parameters except secrets.
Symfony falls back to OS environment variables when it cannot find a variable in the Symfony `.env` file.
In the local environment, the Symfony working directory is mounted externally on the container, thus code changes are visible immediately.
To switch between environments in the local hosting (see the example below):

- Copy the relevant `.env` file from `conf/env` to the Symfony root folder `sf`
- Check out the master or dev branch (or feature branch as the case may be)
- (the `.env` file is built by the build process and not committed to source control)
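For example, to switch the local deployment to the dev environment (a sketch only; the exact file names follow the naming convention described above):

[source,bash,subs=attributes+]
----
# Select the dev environment for local hosting
cp conf/env/.env.dev.local sf/.env
git checkout dev
----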
In the remote (GCP) environments, the build process selects the relevant Symfony `.env` file and ADDs it to the Docker image.

=== Folder structure

The project folder structure is as follows:

[cols="1,2,2", options="header"]
|===
|Folder |Contents |Comments
|<root>
|Project root, git root
|
|{vbar}-- assets
|Assets to build with Webpack Encore
|css, js
|{vbar}-- build
|Deployment built artefacts (on local deployments)
|Is emptied at the beginning of a build process. Gitignored
|{vbar}-- conf
|Project configuration files
|
|{nbsp}{nbsp}{nbsp}{vbar}-- deployer
|PHP Deployer scripts, Deployer hosts configuration
|
|{nbsp}{nbsp}{nbsp}{vbar}-- docker
|Docker image templates
|Web and batch components
|{nbsp}{nbsp}{nbsp}{vbar}-- env
|Environment variables
|Depend on <HOST_ENV> and <APP_ENV>
|{nbsp}{nbsp}{nbsp}{vbar}-- infra
|Container configuration file templates
|Apache, PHP, msmtp +
Docker images include the `dockerize` script, which substitutes environment variables at container build time
|{nbsp}{nbsp}{nbsp}{vbar}-- k8s
|Kubernetes manifest templates: service, deployment, cronjob
|Depend on <HOST_ENV>
|{vbar}-- docs
|Project documentation
|
|{vbar}-- sf
|Symfony project root folder
|The Symfony `.env` file is built at build time from the dynamically selected `.env.<HOST_ENV>.<APP_ENV>` file
|{vbar}-- vendor
|PHP libraries used by Deployer
|Managed by Composer, distinct from the PHP libraries of the Symfony application, which are managed under the `sf` folder
|===
==== Deployer
https://deployer.org/[Deployer] is a simple deployment tool written in PHP. It is open source and free. It contains pre-defined recipes designed for traditional FTP deployments; those are not useful in a Kubernetes context, so we wrote new scripts from scratch.
We use Deployer scripts to:
. Generate service/cronjob manifests (usually done only once)
. Generate deployment manifests (usually done only once)
. Deploy a new container version (done at every release, only remotely)
We run Deployer scripts on:
- local laptop for local environment
- Cloud Shell for GCP environments
The scripts take the following parameters:

- `APP_ENV`
- `TAG`:
** In remote environments, a git tag version is pulled from GitHub and deployed
** In the local environment: `current`. The container only needs rebuilding infrequently, as it mounts the Symfony working directory (masking the ADD directive in the Dockerfile) and thus serves whatever is currently checked out in the working directory. Note that we use `current` rather than `latest`, as `latest` forces a rebuild of the container, which we don't want locally.
Outline of the remote build process:
. Execute an initialisation container which:
.. Checks out the tagged version from Github (into a detached branch)
.. Copies relevant Symfony application files from source
.. Copies the relevant `.env.<APP_ENV>.remote` file to both `build/.env` and `sf/.env`
.. Warms up the Symfony application cache, which is needed by the PHP OPcache directive and must exist at the time the web container starts
. Build the application container:
.. Passing `build/.env` as environment parameters
.. ADD the Symfony application and cache files from the init container
.. COPY the infrastructure configuration templates
.. RUN `dockerize` on the infrastructure configuration templates (see next section)
.. docker push the new image to the GCP container registry
. Build a deployment manifest to `build/deployment.yml`; this manifest contains the new image tag
. Apply the updated Kubernetes deployment manifest
Notes:
- In the web container, the Apache user (www-data) has user:group id 1000:33, whereas in the init container it has user:group id 33:33. This explains the chown commands in the initialisation container
- Another approach would be to use the GCP native Cloud Build service (but this is less portable)
- https://vsupalov.com/build-docker-image-clone-private-repo-ssh-key/[SSH key as secret]
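For orientation, a condensed, hypothetical sketch of what the Deployer tasks execute under the hood (image name, Dockerfile path and tag are assumptions; the actual tasks live in `conf/deployer/`):

[source,bash,subs=attributes+]
----
# Build and push the application image for a tagged release (sketch only)
docker build -f conf/docker/Dockerfile.web -t gcr.io/{GCP_PROJECT}/{PROJECT}-web:0.4 build/
docker push gcr.io/{GCP_PROJECT}/{PROJECT}-web:0.4

# Render the deployment manifest with the new tag, then apply it
kubectl apply -f build/deployment.yml
----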
==== Infrastructure configuration
The following container infrastructure files are templated. Running `dockerize` interpolates placeholders in the templates with variables from the Docker build `.env` file.

[cols="1,2,2", options="header"]
|===
|Template |Target file in container |Contents
|msmtp.logrotate
|/etc/logrotate.d/msmtp
|msmtp logrotate configuration
|msmtprc
|/etc/msmtprc
|msmtp configuration. +
Note that the SMTP password is not stored in clear but obtained from an OS environment
variable.
|php.ini
|/usr/local/etc/php/php.ini
+
/etc/php/7.3/apache2/php.ini
|php.ini for CLI and the Apache PHP module
|ssh_config
|/.ssh/config
|Location of the SSH key to the Github private repo. Used by the webinit initialisation container to build
the Symfony application files
|virtual-host.conf
|/etc/apache2/sites-enabled/virtual-host.conf
|Single virtual host for the web application
|===
Notes:
- For more information on dockerize, see https://github.com/powerman/dockerize[the dockerize project]; an illustrative invocation follows below.
- See also `conf/docker/Dockerfile.PHP.example` for a typical Docker RUN command with commonly used PHP libraries. Adapt as needed.
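For illustration, a `dockerize` template substitution of the kind run in the Dockerfiles might look like this (the template and target paths are assumptions based on the table above):

[source,bash]
----
# Interpolate environment variables into the msmtp and Apache templates
dockerize -template /templates/msmtprc.tpl:/etc/msmtprc \
          -template /templates/virtual-host.conf.tpl:/etc/apache2/sites-enabled/virtual-host.conf
----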
==== Secrets management
We use two types of secrets:
- Kubernetes secrets, mounted onto containers:
** `MAILER_PASSWORD`: SMTP account password
** `API_KEY`: API key used by the cron service
- Symfony application secrets, packed into a single `SYMFONY_DECRYPTION_SECRET` Kubernetes secret:
** `APP_SECRET`: encryption key
** `DB_PASSWORD`: MySQL account password
** `MAILER_PASSWORD`: SMTP account password

The `MAILER_PASSWORD` secret, although used by the Symfony application, is also needed outside the Symfony environment, to send emails via the batch cron container.
With this container build process, secrets only exist as container OS environment variables.
See: https://symfony.com/doc/current/configuration/secrets.html[Managing Symfony secrets]
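As a reminder of the Symfony side, the vault is managed with the standard `secrets:*` console commands (the key names are those listed above):

[source,bash]
----
# Generate the encryption/decryption key pair for the target APP_ENV
php bin/console secrets:generate-keys

# Store individual secrets in the encrypted vault
php bin/console secrets:set APP_SECRET
php bin/console secrets:set DB_PASSWORD
php bin/console secrets:set MAILER_PASSWORD
----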
== Symfony application
=== Proxies
The application must be configured to correctly read the headers forwarded by the web reverse proxy:

.sf/config/packages/framework.yaml
[source,yaml]
----
framework:
    # ...
    trusted_proxies: '%env(TRUSTED_PROXIES)%'
    trusted_headers: ['x-forwarded-for', 'x-forwarded-host']
----

See https://symfony.com/doc/current/deployment/proxies.html[the Symfony documentation] for reference.
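The `TRUSTED_PROXIES` value itself lives in the `.env` files; a plausible value (an assumption, adapt to your proxy's actual addresses) is:

[source,bash]
----
# Trust the loopback address and the cluster-internal address range of the reverse proxy
TRUSTED_PROXIES=127.0.0.1,10.0.0.0/8
----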
=== Batch jobs
A Symfony application is typically used in two modes: online requests and batch jobs.
For this recipe we use a single container to serve both.
We define batch jobs as Kubernetes cronjobs. Those jobs do the following:
- Instantiate a simple Alpine/curl container in the cluster
- The container command sends a curl GET request to the application pod inside the cluster
- The request is handled by a normal Symfony controller
- Complete job and log the job status depending on the HTTP response (200 or 500)
Note that in a traditional, non-containerized Symfony application we would implement Console Commands in the controller, triggered by command line php calls, scheduled by a cron job. We can't do this with Kubernetes, since the Kubernetes cronjob cannot execute remote shell commands on the container and is limited to sending HTTP requests.
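As an illustration of this pattern (not the actual manifests, which are generated from the `conf/k8s` templates by Deployer; the name, URL and schedule are assumptions):

[source,bash]
----
# A cronjob that curls a Symfony controller endpoint every 10 minutes
kubectl create cronjob newsletter-batch \
  --image=curlimages/curl \
  --schedule="*/10 * * * *" \
  -- curl -fsS http://myproject-dev-web/batch/send-newsletters
----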
A simpler alternative is to define cron jobs inside your web proxy container, calling application containers using the same API endpoints.
=== Sending emails
To send emails, we use the following components:
- The Symfony Mailer library to create emails
- The Symfony Messenger to queue emails in the database
- The msmtp MTA (message transfer agent) to send emails
- Kubernetes cronjobs to process Messenger queues
The new Mailer library replaces the deprecated SwiftMailer library and is now the recommended library for new projects.
For transport, the Symfony application does not establish an SMTP connection to the remote SMTP server, but instead sends messages to a local MTA running in the container. We use msmtp as the MTA; msmtp is a popular successor to sendmail, easier to configure, with a sendmail-compatible API. The benefits of using an MTA are:
- The messages are sent to the remote SMTP server by the MTA background process, not PHP scripts
- The MTA handles exceptions well, such as SMTP server unavailable or returning errors.
Here we configure two asynchronous transports:

- `realtime_mailer` for high priority emails (e.g. confirmation after registration)
- `batch_mailer` for low priority emails (e.g. batch newsletter)

These transports are configured in Sendmail mode.
The process of sending an email is the following:
- A Symfony controller action (online or batch) generates an email and indicates the transport (high or low priority)
- The Messenger component puts these messages in the corresponding queue in the database
- The Messenger component processes these two queues in batch mode and sends the emails to the msmtp MTA (which runs in the same container):
** The high priority batch job runs every 2 minutes with a time-out of 100s. If not all queued emails are processed, the next run, starting 20s later, will pick them up
** The low priority batch job runs every 20 minutes
- The MTA sends the emails to the remote SMTP server roughly in the order it received them.
See previous section for how cronjobs are implemented in Kubernetes.
In the Messenger configuration file we define the queues where emails are put.
Here we use the application database to persist the queues (other methods are available, notably Redis). We pass the `queue_name` argument to name each queue.
We also define a dead-letter queue where failed messages will be logged. This will be rare as it is the MTA that is likely to fail while sending emails instead of the Symfony application.
.config/packages/messenger.yaml
[source,yaml]
----
framework:
    messenger:
        # Uncomment this (and the failed transport below) to send failed messages to this transport for later handling.
        failure_transport: failed
        transports:
            # https://symfony.com/doc/current/messenger.html#transport-configuration
            failed: 'doctrine://default?queue_name=failed'
            # sync: 'sync://'
            batch_mailer: 'doctrine://default?queue_name=batch_mailer'
            realtime_mailer: 'doctrine://default?queue_name=realtime_mailer'
        routing:
            # Route your messages to the transports
            'Symfony\Component\Mailer\Messenger\SendEmailMessage': realtime_mailer
----
In the mailer configuration file we define the action to take when actually sending an email.
Usually this is either an SMTP DSN string or Sendmail. Here we use Sendmail. The `native://default` option uses the `sendmail_path` setting of php.ini, itself defined as `/usr/bin/msmtp -t -v`. See the infra configuration files.

.config/packages/mailer.yaml
[source,yaml]
----
framework:
    mailer:
        transports:
            #main: '%env(MAILER_DSN)%'
            realtime_mailer: 'native://default'
----
Mailer is a new library and currently has some limitations that we expect to be fixed soon:
- No support for multiple async transports (https://github.com/symfony/symfony/issues/35750[See])
The msmtp configuration is defined in the `conf/infra/msmtprc.tpl` template file. It contains:

- SMTP account details
- The password, which is not stored in the clear but obtained from a shell command: `"echo $MAILER_PASSWORD"`
- The location of the msmtp log file
=== File storage
For application user file management we use the Flysystem library, which provides filesystem abstraction across a number of storage mechanisms.
- The local environment uses the `local` adapter (local file system)
- The GCP environment uses the `gcloud` adapter (Cloud Storage)

The application code to read/write files is identical in all environments; only the environment configuration changes. The storage mechanics are abstracted from the application, which uses `get` and `put` methods on file paths in a virtual file system.
Typically the application needs to manage:
- Public files, served directly by the web proxy without access control
- Private files, subject to access control, and served by the Symfony application
- Each for local and remote file storage
Hence four adapter configurations:
.config/packages/flysystem.yaml
[source,yaml]
----
flysystem:
    storages:
        storage.private.local:
            adapter: 'local'
            options:
                directory: '%kernel.project_dir%/../local-storage/%env(GCS_PRIVATE_BUCKET)%'
        storage.public.local:
            adapter: 'local'
            options:
                directory: '%kernel.project_dir%/../local-storage/%env(GCS_PUBLIC_BUCKET)%'
        storage.private.gcloud:
            adapter: 'gcloud'
            options:
                client: 'Google\Cloud\Storage\StorageClient' # The service ID of the Google\Cloud\Storage\StorageClient instance
                bucket: '%env(GCS_PRIVATE_BUCKET)%'
                prefix: ''
                api_url: 'https://storage.googleapis.com'
        storage.public.gcloud:
            adapter: 'gcloud'
            options:
                client: 'Google\Cloud\Storage\StorageClient' # The service ID of the Google\Cloud\Storage\StorageClient instance
                bucket: '%env(GCS_PUBLIC_BUCKET)%'
                prefix: ''
                api_url: 'https://storage.googleapis.com'
        # Aliases based on environment variable
        storage.private:
            adapter: 'lazy'
            options:
                source: 'storage.private.%env(STORAGE_ADAPTER)%'
        storage.public:
            adapter: 'lazy'
            options:
                source: 'storage.public.%env(STORAGE_ADAPTER)%'
----
Thus Symfony creates four "real" services (`flysystem.storage.private.local`, etc.) corresponding to these four adapters.
We do not use these services directly, but rather the dynamic "alias" (lazy) services `storage.private` and `storage.public`, which resolve according to the environment.
To use these services in a controller:
[source,php]
----
use League\Flysystem\FilesystemInterface;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Request;

class MyController extends AbstractController
{
    /** @var FilesystemInterface $storagePublic Public storage adapter */
    private $storagePublic;

    /** @var FilesystemInterface $storagePrivate Private storage adapter */
    private $storagePrivate;

    public function __construct(
        FilesystemInterface $storagePublic,
        FilesystemInterface $storagePrivate
    ) {
        $this->storagePublic = $storagePublic;
        $this->storagePrivate = $storagePrivate;
    }

    public function myAction(Request $request)
    {
        // $file_path and $content are assumed to be defined by the action logic
        $this->storagePrivate->put($file_path, $content);
        // ...
    }
}
----
On local hosting:
- External persistent folders on the host are mounted on the Minikube cluster, and in turn mounted as persistent volumes on the container
- The root of the virtual file system is the mount point of the persistent volume in the container
- The application reads/writes to these folders using the `local` adapter
On GKE hosting:
- We use Cloud Storage buckets for storage
- The root of the virtual file system viewed from the application is the bucket
- The application reads/writes to these buckets using the `gcloud` adapter, which uses API calls to Cloud Storage
- The buckets must be configured for ACL access, as the Symfony application uses a GCP service account to access the buckets

The `CDN_URL` environment variable is used by the application to create URLs to public assets that are served directly by the web proxy, outside the application.
== Local environment
In the local environment we instantiate a Kubernetes engine using the Minikube package.
=== Minikube
. Install kubectl
. Install https://kubernetes.io/docs/setup/learning-environment/minikube[minikube] via direct download
. Bind mount the host folders used to persist application user data to `/home`, which is accessible from within the Minikube cluster as `/hosthome`

Increase the default CPU allocation of the cluster VM from 2 CPUs to 4:

[source,bash]
----
minikube delete
minikube config set cpus 4
minikube start
----

.In the local shell:
[source,bash,subs=attributes+]
----
sudo mount --bind /opt/data/storage-buckets /home/storage-buckets \
  && sudo mount --bind /opt/data/projects/myproject /home/myproject \
  && minikube start --driver=virtualbox --cpus 4 \
  && minikube tunnel
----
Then optionally open the Minikube dashboard in a browser; the dashboard is very handy:

.In the local shell:
[source,bash]
----
minikube dashboard
----

On Linux, from within the Minikube cluster, the host does not have a DNS name. This is available on macOS hosts (the name is `host.docker.internal`), and there is a https://github.com/moby/moby/pull/40007[pull request] to add it on Linux. You have to use the host IP address `10.0.2.2` instead.
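For example, the local `.env` files would point Symfony at the host's MySQL server via that address (a hypothetical excerpt; credentials and database name are placeholders):

[source,bash]
----
# Local .env excerpt: the MySQL server runs on the host, outside the Minikube cluster
DATABASE_URL=mysql://myproject_dev:changeme@10.0.2.2:3306/myproject_dev
----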
=== Docker context
Before building containers (`docker build`) ensure you are in the correct context. Minikube has its own Docker engine running inside its VirtualBox VM, distinct from that of the laptop host.

.In the local shell:
[source,bash]
----
# Switch to the Minikube VM context (must be run in each new terminal session)
eval $(minikube docker-env)

# Switch back to the local Docker context
eval $(minikube docker-env -u)
----
Notes:
- I tried to use Minikube with `--vm-driver=none`, so that it would use the host Docker engine, but it didn't work and probably never will
- The Minikube cluster node is visible on the host at http://192.168.99.100:31645/ (the port is randomly assigned at Minikube startup)
=== Kubernetes context
You will be switching between the local and GKE Kubernetes contexts. Ensure you are in the correct context before firing `kubectl apply` commands.
.In the local shell:
[source,bash,subs=attributes+]
----
kubectl config get-contexts
----

Output:

[subs=attributes+]
----
CURRENT   NAME                                           CLUSTER
          gke_{GCP_PROJECT}_{GCP_REGION}_{GKE_CLUSTER}   gke_{GCP_PROJECT}_{GCP_REGION}_{GKE_CLUSTER}
          minikube                                       minikube
----

To switch to another context:

[source,bash,subs=attributes+]
----
kubectl config use-context gke_{GCP_PROJECT}_{GCP_REGION}_{GKE_CLUSTER}
# or
kubectl config use-context minikube
----
== GCP environment
=== GCP project
Set project defaults:
.In the local shell:
[source,bash,subs=attributes+]
----
gcloud config set project {GCP_PROJECT}
gcloud config set compute/region {GCP_REGION}
gcloud config set compute/zone {GCP_ZONE}
----

Generate SSH keys and store them locally in `~/.ssh/`.
Copy the keys to your Cloud Shell `~/.ssh/` folder.
=== Cloud Shell
We use Cloud Shell as a build and deployment environment.

.In the Cloud shell:
[source,bash,subs=attributes+]
----
# Set the project
gcloud config set project {GCP_PROJECT}

# Clone the project repo
git clone git@github.com:{GITHUB_ACCOUNT}/{GITHUB_REPO}.git

# Install the Deployer vendor libraries
cd {PROJECT} && composer install
----
To avoid having to reinstall Deployer at every new session, add the following lines to your Cloud Shell customize_environment file:
.~/customize_environment
[source,bash]
----
#!/bin/sh
curl -LO https://deployer.org/deployer.phar
sudo mv deployer.phar /usr/local/bin/dep
sudo chmod +x /usr/local/bin/dep
----

It is useful to SCP to Cloud Shell. Note that paths must be absolute. Use the `--recurse` flag for recursive copying.

.In the local shell:
[source,bash]
----
# Copy a remote directory from Cloud Shell to your local machine
gcloud alpha cloud-shell scp \
  cloudshell:~/REMOTE-DIR \
  localhost:~/LOCAL-DIR

# Conversely
gcloud alpha cloud-shell scp \
  localhost:~/LOCAL-DIR \
  cloudshell:~/REMOTE-DIR
----
==== Install PHP 7.3
See https://computingforgeeks.com/how-to-install-php-7-3-on-ubuntu-18-04-ubuntu-16-04-debian/[this guide].

.In the Google Cloud shell:
[source,bash]
----
sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt install php7.3 php7.3-cli php7.3-mbstring php7.3-curl php7.3-xml php7.3-zip
sudo update-alternatives --set php /usr/bin/php7.3
----
TO DO: replace the PHP CLI install by a Docker container... nicer and cleaner
==== Github SSH key
See https://help.github.com/en/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent[GitHub's guide].
Create a new SSH key labelled "{PROJECT}-vm" locally. Register it on Github. Also create a corresponding config file:
.In the local ~/.ssh/config file, append:
[source,bash,subs=attributes+]
----
Host github.com
    User git
    Hostname github.com
    PreferredAuthentications publickey
    IdentityFile ~/.ssh/{PROJECT}-vm_rsa
----
==== Composer
Composer is used to manage dependencies for PHP Deployer.
.In the Google Cloud shell:
[source,bash,subs=attributes+]
----
# PHP Deployer dependencies
cd ~/{PROJECT} && composer install

# Symfony dependencies
cd sf && composer install
----
=== Cloud storage
It is a good practice to use bucket names that are domain names associated with your project, as this guarantees global uniqueness.
This requires domain ownership verification:
- In the Google Search Console, add the property `{WEB_URL}`
- In your domain name DNS manager, add a TXT record with the provided text

Create the following buckets:

- `app.{WEB_URL}`: private data, live site
- `app-test.{WEB_URL}`: private data, test site
- `cdn.{WEB_URL}`: public data, live site
- `cdn-test.{WEB_URL}`: public data, test site
Set bucket access control policy to ACLs, since the PHP flysystem API will use a service account to access the Cloud Storage API.
To make buckets public:
.In the local shell:
[source,bash,subs=attributes+]
----
gsutil iam ch allUsers:objectViewer gs://cdn-test.{WEB_URL}
gsutil iam ch allUsers:objectViewer gs://cdn.{WEB_URL}
----

Upload folders/files using the Cloud Storage console (in Chrome) or the `gsutil cp` command.
Example of commands to copy files at the command line from local and remote environments:
.In the local shell:
[source,bash,subs=attributes+]
----
# Local to remote
gsutil cp * gs://cdn.{WEB_URL}/dir

# Remote to local
gsutil cp gs://cdn.{WEB_URL}/dir/* .

# Remote to remote
gsutil cp gs://cdn.{WEB_URL}/dir/* gs://cdn-test.{WEB_URL}/dir
----
=== Cloud database
==== Create the database
On the Gcloud SQL Console, create a DB instance called `{PROJECT}-db` (MySQL 5.7).

Create the database with:

- Character set/collation: `utf8mb4`/`utf8mb4_unicode_ci`
- Connectivity: Private IP

Don't use the collation recommended for MySQL 8.0, `utf8mb4_0900_ai_ci`, as it is not supported on MySQL 5.7.

Note down the DB instance IP address (`10.1.2.3`). You will use it in your `.env` configuration files.
==== Test connectivity from VPC
Temporarily instantiate a Compute Engine VM in the same VPC.
Install the MySQL CLI: `apt-get update && apt-get install -y mysql-client-5.7`

.In the VM shell:
[source,bash,subs=attributes+]
----
# Root user - enter password at prompt
mysql -u root -p -h 10.1.2.3

# Application user - enter password at prompt
mysql -u {PROJECT}_dev -p -h 10.1.2.3
----
Note: you can't connect from the Cloud Shell, as it is outside the project VPC.
==== Create DB accounts
On the Gcloud SQL Console > Users, create MySQL user accounts:

- `{PROJECT}_dev`
- `{PROJECT}_master`

Allow connections from any host (%).

Apply GRANT commands as required by your application. A typical one would be:

.In the MySQL shell:
[source,mysql]
----
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, EXECUTE,
  CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER, LOCK TABLES
  ON <DB>.* TO '<USER>'@'%';
FLUSH PRIVILEGES;
----
==== Application service account
Create a `{PROJECT}-dev` Cloud IAM service account for the application. Add the roles:

- Storage > Storage Object Admin
- Cloud SQL > Cloud SQL Client

The service account is named `{PROJECT}-dev@{GCP_PROJECT}.iam.gserviceaccount.com`.
Create a JSON key associated with this account => `{GCP_PROJECT}-abcdef123456.json`.
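For reference, the equivalent steps with the gcloud CLI might look like this (the section above uses the Console; the role IDs correspond to the roles listed):

[source,bash,subs=attributes+]
----
gcloud iam service-accounts create {PROJECT}-dev

gcloud projects add-iam-policy-binding {GCP_PROJECT} \
  --member="serviceAccount:{PROJECT}-dev@{GCP_PROJECT}.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
gcloud projects add-iam-policy-binding {GCP_PROJECT} \
  --member="serviceAccount:{PROJECT}-dev@{GCP_PROJECT}.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"

gcloud iam service-accounts keys create {GCP_PROJECT}-key.json \
  --iam-account={PROJECT}-dev@{GCP_PROJECT}.iam.gserviceaccount.com
----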
==== Import a database
To import a database, instantiate a temporary VM.
All tables must be in the InnoDB format.
. Export the DB in SQL format with Adminer or phpMyAdmin
. SCP the SQL file to the VM:

.In the local shell:
[source,bash,subs=attributes+]
----
gcloud compute scp ~/{PROJECT}_dev.sql temp-vm:/tmp
----

[start=3]
. Import the DB:

.In the temporary VM shell:
[source,bash,subs=attributes+]
----
mysql -u root -p -h 10.1.2.3 {PROJECT}_dev < /tmp/{PROJECT}_dev.sql
----
==== Artifact Registry
The Container Registry is deprecated. Use the Artifact Registry instead.
- In the Console > Artifact Registry, create a repo named `{GCP_PROJECT}`
- In Cloud Shell, run `gcloud auth configure-docker {GCP_REGION}-docker.pkg.dev`
- Assign the "Artifact Registry Writer" role to your service account
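For illustration, the equivalent CLI steps and the resulting image path might look like this (the repository layout is an assumption):

[source,bash,subs=attributes+]
----
gcloud artifacts repositories create {GCP_PROJECT} \
  --repository-format=docker --location={GCP_REGION}
gcloud auth configure-docker {GCP_REGION}-docker.pkg.dev

# Images are then addressed as {GCP_REGION}-docker.pkg.dev/{GCP_PROJECT}/<repo>/<image>:<tag>
docker tag k8s-cronjob:current {GCP_REGION}-docker.pkg.dev/{GCP_PROJECT}/{GCP_PROJECT}/k8s-cronjob:current
docker push {GCP_REGION}-docker.pkg.dev/{GCP_PROJECT}/{GCP_PROJECT}/k8s-cronjob:current
----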
=== GKE
==== Create the cluster
Create an Autopilot cluster. Workload identity is enabled by default.
==== Bind the service account
Creating the cluster automatically creates:

- A `default` namespace
- A Kubernetes service account (`kubectl get serviceaccount --namespace default`). At this stage this service account controls access only within the cluster.
The Kubernetes service account needs to access Google resources, so we bind it to the application Google service account previously created.
.In the local shell:
[source,bash,subs=attributes+]
----
gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:{GCP_PROJECT}.svc.id.goog[default/default]" \
  {PROJECT}-dev@{GCP_PROJECT}.iam.gserviceaccount.com
----
Add the corresponding annotation to the Kubernetes service account:

.In the local shell:
[source,bash,subs=attributes+]
----
kubectl annotate serviceaccount \
  --namespace default \
  default \
  iam.gke.io/gcp-service-account={PROJECT}-dev@{GCP_PROJECT}.iam.gserviceaccount.com
----
Verify that the service account is configured correctly by running a test container provided by Google:
.In the local shell:
[source,bash,subs=attributes+]
----
kubectl run -it \
  --generator=run-pod/v1 \
  --image google/cloud-sdk \
  --serviceaccount default \
  --namespace default \
  workload-identity-test
----

.In the container shell:
[source,bash]
----
gcloud auth list
----

This should display a single Google service account, the one bound earlier. This is the service account the pod will use to access GCP services.

Once done, delete the `workload-identity-test` pod.
== Deployment
=== Deploy locally
==== Secrets
Create your secrets for each environment in a safe location outside the project directory.
Naming convention: `secrets.<HOST_ENV>.<APP_ENV>.yml`. They take the form:

./secrets/secrets.local.dev.yml
[source,yaml,subs=attributes+]
----
apiVersion: v1
kind: Secret
metadata:
  name: {PROJECT}-{APP_ENV}-sf-secrets
type: Opaque
data:
  # php -r 'echo base64_encode(base64_encode(require "config/secrets/dev/dev.decrypt.private.php"));'
  SYMFONY_DECRYPTION_SECRET: YWJjZGVmZ2h1aWRVQUlQR0RBWkVQVUlJVUdQ
  # echo -n 'secret1' | base64
  API_KEY: c2VjcmV0MQ==
  # echo -n 'secret2' | base64
  MAILER_PASSWORD: c2VjcmV0Mg==
----

Deploy the Kubernetes secrets locally:

.In the local shell:
[source,bash,subs=attributes+]
----
cd dir
kubectl config use-context minikube
kubectl apply -f secrets.local.dev.yml
kubectl apply -f secrets.local.master.yml
----
==== Cronjobs
To execute cron jobs we instantiate a very simple Docker Alpine image with the cUrl library.
.In the local shell:
[source,bash,subs=attributes+]
----
cd {PROJECT}

# Set Docker and Kubernetes contexts to Minikube
eval $(minikube docker-env)

# Build the Docker cronjob image
docker build -f build/Dockerfile.cronjob -t k8s-cronjob:current .
----
==== Services
We build services manifests with PHP Deployer and deploy them using kubectl. This is usually done only once.
.In the local shell:
[source,bash,subs=attributes+]
----
cd {PROJECT}

# Set Docker and Kubernetes contexts to Minikube
kubectl config use-context minikube

# Deploy services (dev environment)
php vendor/bin/dep --file=conf/deployer/deploy.php \
  --hosts=localhost gen-service -o APP_ENV=dev
kubectl apply -f build/service.yml
kubectl apply -f build/cronjob.local.yml

# Deploy services (master environment)
php vendor/bin/dep --file=conf/deployer/deploy.php \
  --hosts=localhost gen-service -o APP_ENV=master
kubectl apply -f build/service.yml
kubectl apply -f build/cronjob.local.yml
----
==== Releases
The web application container mounts the Symfony working directory. There is no need to rebuild the container, unless, say, you need to add a PHP library. Just switch git branches as needed.
The Minikube web application is visible on the host at `192.168.99.100:32745` (adapt the port number; see below for how to retrieve it).
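To find the actual URL without guessing the NodePort, the following works (the service name is an assumption; use the name from your generated `build/service.yml`):

[source,bash,subs=attributes+]
----
# Print the URL of the web service exposed by the Minikube cluster
minikube service {PROJECT}-dev-web --url
----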
=== Deploy on GKE
We use Cloud Shell to build and deploy on GKE.
==== Pull the repo
.In the Cloud shell:
[source,bash,subs=attributes+]
----
git clone git@github.com:{GITHUB_ACCOUNT}/{GITHUB_REPO}.git
cd {PROJECT}
----
==== Secrets
Deploy Kubernetes secrets. Here those are the same as for the local environment.
.In the local shell:
[source,bash,subs=attributes+]
----
cd your-secrets-dir

# Switch to the GKE context
kubectl config use-context gke_{GCP_PROJECT}_{GCP_REGION}_{GKE_CLUSTER}
kubectl apply -f secrets.oat.remote.yml
kubectl apply -f secrets.prod.remote.yml
----
==== Cronjobs
Push the Alpine/cURL image to GCR:

.In the Cloud shell:
[source,bash,subs=attributes+]
----
cd {PROJECT}
docker build -f conf/docker/Dockerfile.cronjob -t k8s-cronjob:current .
docker tag k8s-cronjob:current gcr.io/{GCP_PROJECT}/k8s-cronjob:current
docker push gcr.io/{GCP_PROJECT}/k8s-cronjob:current
----
==== Services
Deploy the services manifests:

.In the Cloud shell:
[source,bash,subs=attributes+]
----
cd {PROJECT}

# Switch to the GKE context
kubectl config use-context gke_{GCP_PROJECT}_{GCP_REGION}_{GKE_CLUSTER}

# Deploy services (OAT on test.)
php vendor/bin/dep --file=conf/deployer/deploy.php \
  --hosts=remote gen-service -o APP_ENV=oat
kubectl apply -f build/service.yml
kubectl apply -f build/cronjob.remote.yml

# Deploy services (Prod on www.)
php vendor/bin/dep --file=conf/deployer/deploy.php \
  --hosts=remote gen-service -o APP_ENV=prod
kubectl apply -f build/service.yml
kubectl apply -f build/cronjob.remote.yml
----
==== Releases
The release process is as follows:
. Git tag a release version
. Push the tag to GitHub
. In the Cloud Shell, git pull
. Execute the Deployer script, which:
.. Builds a new container with a Docker tag identical to the git tag
.. Applies an updated deployment manifest with the new Docker tag
Commands:
.In the local shell:
[source,bash,subs=attributes+]
----
# Tag a commit locally
git tag 0.4

# Push tags
git push origin --tags
----

.In the Cloud shell:
[source,bash,subs=attributes+]
----
# Deploy new version - Test
cd {PROJECT}
php vendor/bin/dep --file=conf/deployer/deploy.php --hosts=remote deploy-remote \
  -o APP_ENV=dev -o TAG=0.4

# Deploy new version - Prod
cd {PROJECT}
php vendor/bin/dep --file=conf/deployer/deploy.php --hosts=remote deploy-remote \
  -o APP_ENV=master -o TAG=0.4
----
== References
Some random resources that I found useful. Not all apply to this recipe.
=== Docker
http://docs.blowb.org/index.html[The Blowb Project - Deploy Integrated Apps Using Docker] (Not used in this article but looks like it has many good ideas)
=== Kubernetes
https://codeburst.io/getting-started-with-kubernetes-deploy-a-docker-container-with-kubernetes-in-5-minutes-eb4be0e96370
https://www.mirantis.com/blog/introduction-to-yaml-creating-a-kubernetes-deployment/
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core
https://vsupalov.com/yaml-kubernetes-examples-docs-spec/
https://dzone.com/articles/kubernetes-cron-jobs
https://stackoverflow.com/questions/14155596/how-to-substitute-shell-variables-in-complex-text-files
https://dzone.com/articles/how-i-switched-my-blog-from-ovh-to-google-containe
https://gravitational.com/blog/troubleshooting-kubernetes-networking
https://estl.tech/configuring-https-to-a-web-service-on-google-kubernetes-engine-2d71849520d
https://blog.container-solutions.com/kubernetes-deployment-strategies
https://medium.com/google-cloud/kubernetes-best-practices-8d5cd03446e2
https://stackoverflow.com/questions/22944631/how-to-get-the-ip-address-of-the-docker-host-from-inside-a-docker-container
=== GKE
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
https://cloud.google.com/solutions/using-gcp-services-from-gke
https://medium.com/redpoint/cost-effective-kubernetes-on-google-cloud-61067185ebe8
https://rominirani.com/google-cloud-platform-factors-to-control-your-costs-5a256ed207f1?gi=e90cc7e943c8
https://medium.com/google-cloud/kubernetes-day-one-30a80b5dcb29
=== Symfony
https://medium.com/@galopintitouan/how-to-build-a-scalable-symfony-application-on-kubernetes-30f23bf304e
https://titouangalopin.com/introducing-the-official-flysystem-bundle/
https://itnext.io/scaling-your-symfony-application-and-preparing-it-for-deployment-on-kubernetes-c102bf246a93
https://medium.com/@joeymasip/how-to-create-an-api-with-symfony-4-and-jwt-b2334a8fbec2
https://www.jakelitwicki.com/2015/05/26/a-standard-gitignore-for-symfony-applications/
=== Mailer
https://backbeat.tech/blog/sending-emails-with-symfony/
https://symfony.com/doc/4.4/mailer.html
https://github.com/cmaessen/docker-php-sendmail/blob/master/Dockerfile