
feat: add Dockerfiles for applications

paddatrapper opened this pull request 4 years ago • 25 comments

Description

Adds Dockerfiles and documentation around them

  • [x] Shared
  • [x] API client
  • [x] Analyzer
  • [x] API
  • [x] Legacy
  • [x] Worker
  • [x] Playout
  • [x] Liquidsoap
  • [ ] Documentation
  • [x] CI push to Docker image repos

Fixes: #949 Fixes: #551 Fixes: #624

Do the changes in this PR implement a new feature? Yes, this is a new feature.

I have updated the documentation to reflect these changes: TODO; deployment and dev notes are still needed.

Testing Notes

What I did:

Built and tested Docker images

How you can replicate my testing:

Follow the Docker build documentation (TODO)

paddatrapper avatar Jan 03 '22 07:01 paddatrapper

Building individual Docker images is tricky without #1504, because the build context is each app's own directory. Inter-dependencies (e.g. on api_client) are therefore not supported: they can only be installed from the web, not from the local repo, which makes PyPI the logical choice. It does mean that changes to shared/api_client require a version bump before they can be used in any application, because a build will always pull the PyPI version, so new changes need to be published there first.

For example, this blocks building a Docker image for playout, as it requires the api_client package to be installed in order to run.
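To make this concrete: with each app as its own build context, the app's dockerfile can only reach published releases, something like (the package name and version pin are illustrative):

# The sibling api_client directory is outside this build context, so
# the only option is the published PyPI release (hypothetical name)
RUN pip install --no-cache-dir libretime-api-client==2.0.0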

paddatrapper avatar Jan 12 '22 08:01 paddatrapper

I would recommend importing a wheel file from a build stage over fetching packages from pypi.

I used to release some docker images like this, but by the time the pip package was available on pypi, the published docker image was still using the previous version. Also, this would not allow us to build images that haven't been published yet; for instance, having an image built from master for a testing/demo feature would be infeasible.

What I propose is to create api_client and shared dockerfiles containing only a build stage, run those before we build the apps, and import the wheels from there.
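As a rough sketch of the idea (stage names, image tags, paths and the base image are assumptions, not the final layout):

# api_client dockerfile: a build-only stage that produces a wheel
FROM python:3.10-slim AS builder
COPY . /src
RUN pip install --no-cache-dir build && python -m build --wheel --outdir /wheels /src

# app dockerfile: import the wheel from the pre-built image instead
# of fetching it from pypi
FROM python:3.10-slim
COPY --from=libretime-api-client-build:latest /wheels /tmp/wheels
RUN pip install --no-cache-dir /tmp/wheels/*.whl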

May I push some ideas to this branch?

jooola avatar Jan 12 '22 15:01 jooola

Pushed an extra commit with some ideas; feel free to prune the commit if you don't want it.

jooola avatar Jan 12 '22 17:01 jooola

TODO: update ADD to use main instead of master

paddatrapper avatar Jan 13 '22 16:01 paddatrapper

I rebased this branch.

jooola avatar Jan 22 '22 22:01 jooola

Rebased the branch onto main.

jooola avatar Feb 25 '22 18:02 jooola

@jooola please make autogenerating requirements a separate PR

paddatrapper avatar Feb 28 '22 17:02 paddatrapper

Rebased on the main branch.

@paddatrapper Please tell me whether I should use my own branch if I am messing with your work.

jooola avatar Mar 04 '22 22:03 jooola

No, it's fine. I want to go through and clean this up, hopefully this week, so it is ready for testing.

paddatrapper avatar Mar 07 '22 07:03 paddatrapper

Where do we want to publish docker images? GitHub container repo and Docker Hub?

paddatrapper avatar Apr 11 '22 12:04 paddatrapper

I lean towards ghcr.

jooola avatar Apr 11 '22 13:04 jooola

My thought was to push to both.

paddatrapper avatar Apr 11 '22 13:04 paddatrapper

The web installer currently calls psql (#1642) and libretime-api (see https://github.com/libretime/libretime/pull/1644#issuecomment-1099049028) to run some migrations.

We will need to handle those to make the docker images work.
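A hedged sketch of how that could look once containerized (the service name and the migrate invocation are assumptions):

# Run the database migrations from the API container instead of the
# web installer
docker-compose run --rm api libretime-api migrate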

jooola avatar Apr 14 '22 18:04 jooola

Blocked by #1758

paddatrapper avatar Apr 17 '22 07:04 paddatrapper

I don't necessarily see #1758 as a blocker, but more like the next required step to be able to do advanced deployment.

If the docker images are run behind a reverse proxy, using the public_url together with the proxy should work just fine. We are still limited in more complex contexts, or, as you described in the issue, when the reverse proxy changes the traffic.
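For context, the knob in question is the public URL in the LibreTime config, along these lines (the URL value is illustrative):

# config.yml excerpt: the address the reverse proxy serves LibreTime on
general:
  public_url: https://radio.example.org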

jooola avatar Apr 20 '22 17:04 jooola

It means that the docker setup can't run anywhere except where it has a resolvable domain and correct port-forwarding to the nginx proxy container. It even breaks running it locally, which makes testing and developing this difficult. The intermediate fix for #1758 is fairly simple and allows this to be much more flexible while we implement the permanent fix there.

paddatrapper avatar Apr 20 '22 17:04 paddatrapper

#1758 seems complex in my opinion; I already commented on the fact that we don't know when to use the internal vs the public URL, as the API endpoints are shared for both internal and public calls.

I haven't tried to run the different services using docker-compose yet. I should do that first; maybe then I will have a better understanding.

jooola avatar Apr 20 '22 18:04 jooola

For the documentation, I propose to add tabs everywhere we can find systemd/systemctl commands, to provide a docker equivalent (where meaningful). I think we can assume the docker documentation will target a basic docker-compose setup; advanced users probably know what they are doing if they play with k8s or similar.
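For example, a tab pair could look like this (the compose service name is an assumption):

# systemd
sudo systemctl restart libretime-playout

# docker-compose equivalent
docker-compose restart playout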

jooola avatar Jun 05 '22 17:06 jooola

Rebased this branch; setup should be easier with the new tooling (migrate + storage path config + ...).

jooola avatar Jun 09 '22 14:06 jooola

With a little hack we might be able to merge this before 3.0.0.

Since we can override the public url using env variables, we can override it for the playout/liquidsoap containers by adding a LIBRETIME_GENERAL_PUBLIC_URL env variable with the proper host + port in the docker-compose file.
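A minimal sketch of that override in the compose file (service names and the URL value are illustrative):

services:
  playout:
    environment:
      LIBRETIME_GENERAL_PUBLIC_URL: http://localhost:8080
  liquidsoap:
    environment:
      LIBRETIME_GENERAL_PUBLIC_URL: http://localhost:8080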

The only problem is the generated urls on the legacy side that are pushed to the analyzer, for example. But that's not a huge problem.

@paddatrapper Should we try this?

The docker setup will be marked as experimental, as we should solve this properly in the end.

jooola avatar Jul 21 '22 20:07 jooola

Codecov Report

Merging #1471 (c2cb995) into main (45a131b) will not change coverage. The diff coverage is n/a.

Note: Current head c2cb995 differs from pull request most recent head 5c5459d. Consider uploading reports for the commit 5c5459d to get more accurate results.

@@           Coverage Diff           @@
##             main    #1471   +/-   ##
=======================================
  Coverage   59.46%   59.46%           
=======================================
  Files         186      186           
  Lines        4685     4685           
=======================================
  Hits         2786     2786           
  Misses       1899     1899           
Flag         Coverage Δ
analyzer     46.72% <ø> (ø)
api          93.88% <ø> (ø)
api-client   40.80% <ø> (ø)
playout      20.20% <ø> (ø)
shared       92.74% <ø> (ø)
worker       93.10% <ø> (ø)

Flags with carried forward coverage won't be shown.


codecov[bot] avatar Jul 21 '22 20:07 codecov[bot]

Yes, that is a very good idea.

paddatrapper avatar Jul 22 '22 10:07 paddatrapper

Pushed a few changes:

  • improved the initial docker-compose spec with healthchecks, updated/missing volumes, formatting and cleanup
  • added a single multi-stage dockerfile to build all the different containers (note that some improvements in the multi-stage dockerfile are not in the initial split dockerfiles, so if we drop the multi-stage idea, we should update the initial dockerfiles)
  • added a CI job that builds from this multi-stage dockerfile and tags the containers properly
  • added a temporary fix for liquidsoap not binding to 0.0.0.0 by default; this should not be merged as-is, as we should make it configurable

Let me know if you agree with this multi-stage dockerfile, so we can prune the previous dockerfiles.
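For reference, per-app images would then come out of the one dockerfile by selecting stages, roughly (stage and tag names are assumptions):

# Build each app image from the single multi-stage dockerfile by
# selecting its stage with --target
for app in analyzer api legacy playout worker liquidsoap; do
  docker build --target "$app" --tag "libretime-$app:main" .
done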

In addition, I was wondering what names we should use for the containers, between libretime/libretime-$APP and libretime/$APP.

Having libretime/libretime-$APP would be more explicit when browsing the org packages, but if we push them to docker hub, maybe that's unnecessary. I don't know if this might conflict with future containers that we build (icecast, for example?).

We already use ghcr.io/libretime/libretime-dev for the testing image.

jooola avatar Jul 26 '22 16:07 jooola

We will need both a production and a dev docker-compose stack.

Should the main docker-compose.yml file be the dev or the prod spec? In other words: should we have an extra docker-compose.dev.yml or docker-compose.prod.yml?

I think we will add stuff for development, such as the postgres and rabbitmq ports, or mounting local dirs for live code reloading, so maybe the dev stack should be the extra spec. But not having to specify multiple files every time we run the docker-compose command would make our lives easier.

Though we might be able to use the COMPOSE_FILE env var and a .env file for development, and specify all the compose files in there.

So maybe:

  • docker-compose.yml => prod
  • docker-compose.dev.yml => dev
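Concretely, the development .env could then be as small as this (a sketch assuming the filenames above):

# .env (development): make every docker-compose invocation pick up
# both compose files without repeating -f flags
COMPOSE_FILE=docker-compose.yml:docker-compose.dev.yml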

@paddatrapper Thoughts about this?

jooola avatar Jul 31 '22 20:07 jooola

I think that is the way we should go. So have docker-compose.dev.yml for development.

paddatrapper avatar Aug 01 '22 14:08 paddatrapper

@paddatrapper What do you think of merging this as WIP? While the feature should not be used by the public yet, I see it as very useful for testing and developing.

I am still working on the dev setup, so the pr is not ready as is.

I could set up other ways of testing inside docker though; it should be doable with make and the libretime-dev images.

jooola avatar Aug 13 '22 13:08 jooola

Yeah, I think we can merge this as a WIP. Others can get in on finalising it then too.

paddatrapper avatar Aug 13 '22 19:08 paddatrapper

I was wondering: for production deployment, do we only want to copy the compose file and a few extra files (libretime config + nginx config)? We don't want to clone the whole repo?

wget https://raw.githubusercontent.com/libretime/libretime/main/docker-compose.yml
wget https://raw.githubusercontent.com/libretime/libretime/main/docker/.env.sample
wget https://raw.githubusercontent.com/libretime/libretime/main/docker/nginx.conf
wget https://raw.githubusercontent.com/libretime/libretime/main/installer/config.yml

docker-compose up -d

jooola avatar Sep 06 '22 17:09 jooola

Maybe. I think for prod, people will build their own docker-compose scripts or use other orchestration platforms anyway.

paddatrapper avatar Sep 06 '22 18:09 paddatrapper

@paddatrapper I think we are almost done here. I would like to have your input on the changes.

Do you think it's OK if we ship a pip editable install in production? I don't see any real problem doing so, and it should greatly help the dev setup by simply mounting the source files.

I think we can start by publishing internal/dev/main images to ghcr.io and, once we are confident, publish tagged releases to Docker Hub. I would use ghcr.io for testing and branch images.

At some point we might want to install Make in the python images, so we can run tests inside the docker setup, but that's for future improvements.

As conventions, I tried to do the following for the Python apps (the legacy image is separate, as it is php-fpm); a sketch follows the list:

  • Config is in /etc/libretime
  • Storage is in /srv/libretime
  • Source code is in /src
  • The working directory (which holds runtime files) is /app
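A hypothetical excerpt of what those conventions look like in a Python app's image (the base image and install command are assumptions):

FROM python:3.10-slim
# Source code lives in /src, shipped as a pip editable install
COPY . /src
RUN pip install --no-cache-dir --editable /src
# Config and storage are provided at runtime
VOLUME ["/etc/libretime", "/srv/libretime"]
# Runtime files go in the working directory
WORKDIR /app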

The legacy image has everything under /var/www/html, and the nginx document root is at /var/www/html/public.

We have an .env file used to configure the docker-compose stack; if none is present, every variable has production defaults.

The production setup expects all the required files to sit next to the docker-compose.yml file, while the dev setup gets the required files from the docker/ folder.

See the head of docker-compose.override.yml for the dev setup documentation.

I haven't encountered #2089 yet, and I have a working stream.

What is missing is proper documentation.

An annoying thing is that we have to maintain two config files with only some fields changed.

jooola avatar Sep 07 '22 10:09 jooola