
Docker deployment

Kobzol opened this issue 1 year ago · 3 comments

We would like to have the option to deploy Kelvin fully inside Docker, ideally with a single command. We want to have the following services running inside Docker, networked together:

  • The Django backend
  • nginx, which will serve the Django backend
  • Postgres (DB)
  • Redis (cache)
  • A set of Django RQ workers

This corresponds to the architecture described in the docs.

Ideally, it should be possible to deploy everything with a single docker-compose.yml file. All configuration (directory/file paths, ports, etc.) should ideally be configurable in the docker-compose file, through environment variables loaded from a .env file. You can find an example of that in the existing docker-compose.yml file.
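As a rough sketch of what that configuration style could look like (service names, variable names, and ports below are illustrative, not the actual Kelvin configuration):

```yaml
# .env (illustrative values)
#   KELVIN_PORT=8000
#   POSTGRES_PASSWORD=changeme

# docker-compose.yml fragment
services:
  web:
    build: .
    ports:
      # host port taken from the .env file, with a default fallback
      - "${KELVIN_PORT:-8000}:8000"
    environment:
      # service hostname "db" resolves via the compose network
      DATABASE_URL: postgres://kelvin:${POSTGRES_PASSWORD}@db:5432/kelvin
```

Compose loads `.env` from the project directory automatically, so no extra flags should be needed.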

Here is a broad TODO list of things (in almost arbitrary order) that we need to do in order to make this possible:

  • [x] Add nginx to docker-compose.yml
    • [x] Make it possible to map a host directory that contains nginx config
    • [x] Make it possible to map a host directory that contains certificates
  • [x] Make it possible to map a host directory containing persistent data for the Redis instance
  • [x] Make sure that all the services in the docker compose file can talk to each other through the network
  • [x] Build the JS frontend in the Kelvin Dockerfile, to make it available in the Docker image
    • [x] Use a multi-stage build to only include the frontend.js, frontend.css files and the dolos directory in the final Docker image
  • [x] Make it possible to map a host directory containing local_settings.py, which will be used to override configuration for the Django backend running inside of Docker
  • [x] Setup nginx so that it serves the Kelvin Django backend
  • [x] Configure a startup script that will run python3 manage.py migrate every time the whole Docker deployment starts
  • [ ] Make it possible to start RQ workers inside Docker
    • [ ] Make it possible to run each worker in multiple instances
    • [ ] Workers use Docker internally; configure Docker-in-Docker. An example can be found [here](https://github.com/mrlvsb/kelvin/blob/1f96ae303fc3c61e76c56e1c076bac0c8940393c/docker-compose.yml), but it needs to be tested whether it works and how it interacts with Docker permissions.
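Put together, the checklist above could map onto a compose file roughly like this (all image names, paths, commands, and the `kelvin.wsgi` module name are assumptions for illustration; the real file will differ):

```yaml
services:
  web:
    build: .                     # frontend built in a multi-stage Dockerfile step
    # run migrations on every startup, then start the app server
    command: sh -c "python3 manage.py migrate && gunicorn kelvin.wsgi"
    volumes:
      - ./local_settings.py:/app/local_settings.py:ro
    depends_on: [db, redis]
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx:/etc/nginx/conf.d:ro   # host-mapped nginx config
      - ./certs:/etc/nginx/certs:ro    # host-mapped certificates
    ports: ["443:443"]
    depends_on: [web]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
  redis:
    image: redis:7
    volumes:
      - ./data/redis:/data             # host-mapped persistent Redis data
```

All services share the default compose network, so they can reach each other by service name (e.g. nginx proxying to `web:8000`).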

If there is a better way to do this, other than docker-compose, we can also try it. But please no Kubernetes :)

Kobzol avatar Sep 19 '24 10:09 Kobzol

As mentioned on VSB Discord, I am taking this task (just noting, so that no one works on this in parallel).

JersyJ avatar Sep 19 '24 15:09 JersyJ

* [ ]  Make it possible to start RQ workers inside Docker
  
  * [ ]  Make it possible to run each worker in multiple instances
  * [ ]  Workers use Docker internally; configure Docker-in-Docker. An example can be found [here](https://github.com/mrlvsb/kelvin/blob/1f96ae303fc3c61e76c56e1c076bac0c8940393c/docker-compose.yml), but it needs to be tested whether it works and how it interacts with Docker permissions.
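On the multiple-instances item, Compose's `deploy.replicas` (or `docker compose up --scale`) might be enough; a minimal sketch, with the service name and queue name assumed:

```yaml
services:
  rq-worker:
    build: .
    command: python3 manage.py rqworker default   # queue name assumed
    deploy:
      replicas: 3        # start three instances of this worker service
    depends_on: [redis]
```

Recent Compose versions honor `deploy.replicas` for plain `docker compose up`, without Swarm.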

I am thinking about the possible solutions:

1. DinD with Sysbox Runtime:

A classic Docker-in-Docker (DinD) approach with a secure implementation.

Pros:

  • Full isolation: Containers can run their own Docker daemon, offering strong sandboxing.

Cons:

  • Docker inside the container has its own local image store. Whenever the containers run in the current setup, we would need to either build all the images within the container, pass them in as a tar file, run a local Docker registry (in docker compose), or publish the images on GHCR (GitHub Container Registry).
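For reference, the Sysbox variant would mean running the worker container under the `sysbox-runc` runtime, which has to be installed on the host beforehand (service name is an assumption):

```yaml
services:
  rq-worker:
    build: .
    runtime: sysbox-runc   # requires Sysbox to be installed on the host
    # the container can then run its own inner Docker daemon without
    # --privileged, but its image store is separate from the host's
```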

2. DooD (Docker out of Docker):

Here, we run the Docker CLI inside the container, but the daemon remains on the host. We would be creating sibling containers, unlike the child containers in solution 1.

Pros:

  • Simpler: containers only need access to the host's Docker socket, so they avoid the overhead of running another Docker daemon. Images and caches can be shared with the host directly.

Cons:

  • Mount paths: if the container running the Docker CLI creates a container with a bind mount, the mount path must be relative to the host (otherwise the Docker daemon on the host won't be able to perform the mount correctly).
  • Potential security risk: if the container has access to the host's Docker socket, it can potentially gain root access to the host. However, this is already the case in the current setup.
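The DooD variant amounts to a bind mount of the host's Docker socket (service name assumed); the mount-path caveat above then applies to every `-v` the worker passes to sibling containers:

```yaml
services:
  rq-worker:
    build: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # any bind-mount source the worker passes when creating sibling
      # containers must be a *host* path, since the host daemon
      # performs the mount, not this container
```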

@Kobzol any opinion?

JersyJ avatar Sep 21 '24 23:09 JersyJ

Sorry, I didn't have time to look into this yet. First we need to get the Docker change merged, then somehow deploy the Docker version on a new server and then we can start looking into DinD.

At a glance, I would probably choose DooD, to avoid complexity with managing the local images or rebuilding them all the time.

Kobzol avatar Oct 04 '24 07:10 Kobzol