
Favor Docker Compose 3.x+ secrets feature over .env files

Open webyneter opened this issue 7 years ago • 14 comments

This is based on the proposal by @japrogramer:

I know this thread is about making the .env file clearer for database settings, but I don't think there should be a .env file at all for Docker deployments. Hear me out. If a user really needs to set an environment variable, it can be done from a Compose file. Now for sensitive stuff: I believe that doesn't even belong in environment variables. The reason for this belief: too often these settings get logged to file, and then your secrets are scattered everywhere; now you have to worry about sanitizing your logs and, even worse, securing your logs. This is my solution: I have removed the .env file and rely only on Docker secrets. The file I read in as a secret, which is encrypted, is structured like a JSON object so I can easily parse it in Python (an illustrative example follows the commands below). Each container has a different set of secrets, and each sees only what it needs to see. I do not store the secrets in plain text anywhere; instead I archive Docker's swarm directory, which stores the secrets encrypted.

# On a manager node
$ docker secret create my_secret -
$ tee -
# Mind you, my secrets are mounted into my containers from within their respective compose files
$ systemctl stop docker.service
$ tar czpvf swarm-dev.tar.gz /var/lib/docker/swarm/

Then I rsync that file over an encrypted pipe back home, and when I need to restore my secrets I do:

$ systemctl stop docker.service
$ rm -fR /var/lib/docker/swarm
$ tar xzpvf swarm-dev.tar.gz -C /
$ systemctl start docker.service
$ docker swarm init --force-new-cluster
$ tee -
# Now all my secrets are back and available to my swarm, but only to those containers that have been given permission on a per-secret basis.
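
For concreteness, here is a hypothetical illustration of the kind of JSON-structured secret described in the proposal; the file name and keys are invented for the example:

# Illustrative only: a JSON secrets file piped into "docker secret create"
# on a manager node. The file name and keys are hypothetical.
$ cat django_secrets.json
{"DJANGO_SECRET_KEY": "change-me", "DJANGO_ACCOUNT_ALLOW_REGISTRATION": true}
$ docker secret create django_s - < django_secrets.json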

To my mind, there are a few things worth mentioning here.

  • [ ] Both the local.yml and production.yml Docker Compose configs would need to be upgraded to version: 3.x:
    • [ ] as of now, an as-is upgrade is possible, since we don't use any features unsupported in 3.x (take extends, for example, as implied by the Version 3 < 3.3 and Version 3.3 specs);
    • [ ] according to the Compose and Docker compatibility matrix, we would need to switch to
    • [ ] which would require us to
      • upgrade our local host Docker environments;
      • upgrade cookiecutter-django-related remote environments (Travis CI);
      • inform the cookiecutter-django community of this change, documenting the minimal supported local host Docker stack version matrix.
  • [ ] Refactor every use of .env files where Docker is the client's environment of preference;
  • [ ] Ensure consistency and stability for the Docker-powered deployments we support (Heroku + the currently experimental ELB).

@jayfk @luzfcb @japrogramer what are your thoughts?

webyneter avatar Jul 30 '17 15:07 webyneter

@japrogramer have you production-tested this?

# On a manager node
$ docker secret create my_secret -
$ tee -
# Mind you, my secrets are mounted into my containers from within their respective compose files
$ systemctl stop docker.service
$ tar czpvf swarm-dev.tar.gz /var/lib/docker/swarm/

Then I rsync that file over an encrypted pipe back home, and when I need to restore my secrets I do:

$ systemctl stop docker.service
$ rm -fR /var/lib/docker/swarm
$ tar xzpvf swarm-dev.tar.gz -C /
$ systemctl start docker.service
$ docker swarm init --force-new-cluster
$ tee -
# Now all my secrets are back and available to my swarm, but only to those containers that have been given permission on a per-secret basis.

webyneter avatar Jul 30 '17 15:07 webyneter

@webyneter I have tested backing up the swarm state in production. Some things to note, though:

  • In order to capture the state of the swarm, the Docker service must be stopped on a manager node while the services are running. The reason we want to stop the daemon is so that no changes happen in the directory while we are making the backup of the swarm.

  • The tar czpvf command will capture the entire state of the swarm, meaning that we can create versioned backups of the swarm's state.

  • To restore a previous state, the Docker daemon must not be running when we replace the swarm directory with our backup.

  • --force-new-cluster is nice because, if other manager nodes are running when we tell our new manager to recreate the swarm state from our backup, it will force the services to redeploy from the saved state automatically, so clients experience very little downtime. This is useful in other situations too: say a new deploy has some bugs; to go back to the previous deployment, just restore the swarm state. This will work as long as all the appropriate resources (images, etc.) are still available.

    • One warning: if the swarm snapshot captures a bug, then that bug will be reintroduced if the snapshot is restored. Therefore, only snapshots of working states should be made.

I have more to say on this topic. To address the Docker YAML files: using version: '3.2', I have restructured the layout into a base.yml, a dev.yml, and a production.yml. Similar to Python's requirements file layout, all the common settings are stored in base.yml, and only new or different settings are placed in dev.yml or production.yml. I have accomplished this with the command:

# for dev
docker-compose -f base.yml -f dev.yml config > stack.yml
# for production
docker-compose -f base.yml -f production.yml config > stack.yml

and now, to launch the app, the command would be:

docker stack deploy --compose-file=stack.yml website

To give you an idea of how this YAML layout would look, this is my base.yml file:

version: '3.2'
services:
  postgres:
    build: ./compose/postgres
    environment:
      - POSTGRES_USER_FILE=/run/secrets/pg_username
      - POSTGRES_PASSWORD_FILE=/run/secrets/pg_password
    secrets:
      - pg_username
      - pg_password

  django:
    command: /gunicorn.sh
    environment:
      - USE_DOCKER=${DAPI_VAR:-yes}
      - DATABASE_URL=postgres://{username}:{password}@postgres:5432/{username}
      - SECRETS_FILE=/run/secrets/django_s
      - POSTGRES_USER_FILE=/run/secrets/pg_username
      - POSTGRES_PASSWORD_FILE=/run/secrets/pg_password
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    secrets:
      - pg_username
      - pg_password
      - django_s

secrets:
  django_s:
    external: true
  pg_username:
    external: true
  pg_password:
    external: true
and this is what dev.yml looks like:

version: '3.2'

volumes:
  postgres_data_dev: {}
  postgres_backup_dev: {}

services:
  postgres:
    image: apple_postgres
    volumes:
      - postgres_data_dev:/var/lib/postgresql/data
      - postgres_backup_dev:/backups

  django:
    image: apple_django
    build:
      context: .
      dockerfile: ./compose/django/Dockerfile-dev
    command: /start-dev.sh
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    secrets:
      - pg_username
      - pg_password
      - source: django_s
        #target: /app//.env

  node:
    image: apple_node
    #user: $USER:-0
    build:
      context: .
      dockerfile: ./compose/node/Dockerfile-dev
    volumes:
      - .:/app
      - ${PWD}/gulpfile.js:/app/gulpfile.js
      # http://jdlm.info/articles/2016/03/06/lessons-building-node-app-docker.html
      - /app/node_modules
      - /app/vendor
    command: "gulp"
    ports:
      # BrowserSync port.
      - "3000:3000"
      # BrowserSync UI port.
      - "3001:3001"

I would also like to point out that because depends_on is ignored by swarm deploy, I don't use it. Instead, each container listens in its entrypoint script for the container it depends on to become available. I do this with a simple ping of the service name, for example:
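
A minimal sketch of such an entrypoint (the service name, message, and retry interval are illustrative, not copied from my actual scripts):

#!/bin/sh
# Wait until the "postgres" service name resolves and answers, then hand
# off to the real command. Note that ping only proves the container is
# reachable, not that the database is already accepting connections.
until ping -c 1 postgres > /dev/null 2>&1; do
  echo "waiting for postgres..."
  sleep 1
done
exec "$@"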

Because I do all of my development with my images launched to a one-node swarm, when I want to run tests or test coverage, I go to my stack.yml file and change the line in the django service that reads command: /start-dev.sh to read either:

command: pytest
# or
command: bash -c "coverage run manage.py test . && coverage report -m"

and every time I want to run a test, I only have to run this command (I have an alias for it):

docker stack deploy --compose-file=stack.yml website

and in a split terminal window I have this command running:

docker service logs -f website_django

Also, whenever I want to rebuild a specific image for a service, I do something along these lines:

docker-compose -f stack.yml build --no-cache django

japrogramer avatar Jul 31 '17 19:07 japrogramer

Here is how I read in my secrets, which I structure in JSON format, in my config/settings/base.py:

import json

with open(env('SECRETS_FILE')) as sec:
    Secrets = json.load(sec)

Then, for example:

ACCOUNT_ALLOW_REGISTRATION = Secrets.get('DJANGO_ACCOUNT_ALLOW_REGISTRATION', True)

japrogramer avatar Jul 31 '17 20:07 japrogramer

Big news, and proof of the vulnerability: recently, multiple npm packages were caught stealing environment variables. https://iamakulov.com/notes/npm-malicious-packages/

japrogramer avatar Aug 05 '17 01:08 japrogramer

I don't think this should be the default for Cookiecutter Django because it adds complexity and another thing you have to care about.

This is an advanced topic we should mention in the docs for people to take a look at and maybe add a couple of examples.

jayfk avatar Aug 11 '17 09:08 jayfk

@jayfk I agree. Let's leave the issue open for further elaboration. I want to explore this one after I'm done with my ongoing commitments to #1052 and #1205.

webyneter avatar Aug 11 '17 09:08 webyneter

More proof of environment variables being vulnerable to theft, and more: https://www.reddit.com/r/linux/comments/709a4t/pypi_compromised_by_fake_software_packages/ (direct link: http://www.nbu.gov.sk/skcsirt-sa-20170909-pypi/)

japrogramer avatar Sep 16 '17 00:09 japrogramer

I have a question: is it really popular to have Django as something other than an API backend? Why not split the frontend into a different Docker service and leave Django in the back?

orange-tsai avatar Apr 08 '18 03:04 orange-tsai

I have a question, is it really popular to have django as something other than an api backend?

I have an answer: yes.

jayfk avatar Apr 08 '18 06:04 jayfk

The only problem I see with using secrets instead of an env file is: can you even use Docker secrets without swarm? https://serverfault.com/questions/871090/how-to-use-docker-secrets-without-a-swarm-cluster says you can't. Also, what if someone wanted to use Kubernetes instead of swarm? I'm also running into the issue of how to correctly pass environment variables to Travis. Should there be .local, .production, .unittest? Should .local be part of the GH repo?

orange-tsai avatar Jun 05 '18 19:06 orange-tsai

@global2alex I made .envs/.local/* committable to VCS for local environment reproducibility: local envs, to my mind, should be no secret to your fellow teammates.

webyneter avatar Jun 05 '18 19:06 webyneter

@webyneter Yeah, I was thinking that too; I can't find a reason why it wouldn't be OK to share the local envs in a git repo, as long as the production ones are different. Thank you!

orange-tsai avatar Jun 05 '18 21:06 orange-tsai

  - DATABASE_URL=postgres://{username}:{password}@postgres:5432/{username}

@japrogramer

Hi, thanks to you I found this open issue.

I see your docker-compose.yml uses Docker secrets for the postgres and django services but leaves the DATABASE_URL environment variable unencrypted.

Big news, and proof of the vulnerability: recently, multiple npm packages were caught stealing environment variables. https://iamakulov.com/notes/npm-malicious-packages/

Did you find a way to use the DATABASE_URL environment variable with Docker secrets?

What I am thinking right now is:

DATABASE_URL=postgres://"/run/secrets/dbusername":"/run/secrets/dbpass"@postgres:5432/"/run/secrets/dbname"
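
For illustration, another option might be to skip the DATABASE_URL variable entirely and assemble the DSN in Python from the secret files; a sketch, assuming the secret paths used earlier in this thread:

import os

def read_secret(name, default=None):
    # Read a Docker secret mounted under /run/secrets, if present.
    path = os.path.join("/run/secrets", name)
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    return default

# Build the DSN in settings so the full URL never lives in the environment.
pg_user = read_secret("pg_username")
pg_password = read_secret("pg_password")
DATABASE_URL = f"postgres://{pg_user}:{pg_password}@postgres:5432/{pg_user}"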

Thank you.

jasonrichdarmawan avatar May 26 '20 19:05 jasonrichdarmawan

Compose now supports secrets too, and I think it would be a whole lot better to use secrets for things like AWS access keys, database passwords, etc.
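
For example, plain docker-compose (no swarm) can mount file-based secrets into containers under /run/secrets; a minimal sketch, with the service name, image, and file paths as assumptions:

version: '3.8'
services:
  django:
    image: myapp_django
    secrets:
      - pg_password

secrets:
  pg_password:
    file: ./secrets/pg_password.txt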

arnav13081994 avatar Sep 22 '20 10:09 arnav13081994