Update production setup
- Updated Python (3.12) & Postgres (15) versions
- Updated CI
- Removed unused container `indico-static`
- Updated/fixed some config files
- Updated readme/added comments
- Added indico plugins to the `getindico/indico` image
- Config is loaded directly from `indico.conf` rather than an env file (specified as a mount volume)
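For illustration, a minimal sketch of what mounting the config directly could look like in the compose file (the service name and container path below are assumptions, not necessarily what this PR uses):

```yaml
# Hypothetical docker-compose.yml excerpt -- service name and config path are
# illustrative assumptions, not copied from this PR.
services:
  indico-web:
    image: getindico/indico
    volumes:
      # Bind-mount indico.conf instead of passing settings through an env file
      - ./indico.conf:/opt/indico/etc/indico.conf:ro
```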
Haven't touched the openshift configs, but they will probably need updating as well.
I feel like just getting rid of them in a separate PR... I don't think anyone is using them anyway.
We should probably bump this to Python 3.12 as well
Indeed, good idea!
Any plans on finalizing and merging this PR? I was looking for a compose setup ready for production, and this repository explicitly states it is not meant for production. A forum post led me to this PR; it seems to work fine as long as you build the image locally and do not use the image provided on Docker Hub.
Nevertheless, a few questions remain:
- For a full Indico backup, I found the recommendation to back up the Postgres database and /opt/indico/archive. This compose file creates several more volumes (e.g. redis, customization), which suggests that these volumes should be backed up as well? The static-files volume appears to be only for mounting into nginx, which I do not need since Caddy is running as a reverse proxy in front.
- How about upgrading? The upgrade documentation mentions running `indico db upgrade` and `indico db --all-plugins upgrade` after installation of the new version (it seems Indico itself could be kept running; only the Celery worker should be stopped). But for the containerized version, the new Indico instance would already have to be started _before_ you could run the upgrade commands inside the container. Could this lead to problems when upgrading a running, new Indico instance, or should the upgrade commands go into `run_indico.sh` before every startup? Running them when nothing needs to change does no harm according to the documentation (with the exception of increasing startup time).
Apologies for the late reply
It's not meant for production in the sense that at CERN we use a different setup (regular VMs), so we can't guarantee that this setup will work out of the box. However, there have been many requests for a dockerized Indico setup, so we wanted to provide this as an example for others to build on.
Static files come from the Indico wheel, so there's no need to back them up. Customization is provided by whoever is running the instance, so it's presumably already backed up in GitHub/GitLab. We only use Redis for caching, so in principle it might not need backing up either.
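For illustration, a minimal backup sketch following that advice might look like this (the `postgres` and `indico-web` service names, the database user/name, and the archive path are assumptions; adjust them to the actual compose file):

```sh
# Hypothetical backup commands -- service names, DB user/name and paths are
# illustrative assumptions, not taken from this PR.

# Dump the Indico database from the Postgres container
docker compose exec -T postgres pg_dump -U indico indico > indico-$(date +%F).sql

# Archive the uploaded files stored under /opt/indico/archive
docker compose exec -T indico-web tar czf - /opt/indico/archive > archive-$(date +%F).tar.gz
```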
As for upgrading, I think it makes sense to include the upgrade commands in `run_indico.sh`. @ThiefMaster what do you think?
I'm not the biggest fan of running DB upgrades automatically on container startup, since it'd probably result in deadlocks when you run the container multiple times.
However, we could probably run it only in a single container (e.g. the one running celery beat, since that one must run only once anyway). For my own production setup I'd still prefer to explicitly execute DB updates, but I guess for a "simple" dockerized setup automating it is what people expect...
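For illustration, a minimal sketch of that single-container approach, assuming the upgrades are run from the entrypoint of the one service that runs celery beat (the script below is hypothetical, not the actual `run_indico.sh` from this PR):

```sh
#!/bin/bash
# Hypothetical entrypoint for the single service running celery beat.
# Running the schema upgrades here, and only here, avoids several containers
# racing to apply the same migrations at startup.
set -e

# Upgrade the core schema and all plugin schemas before starting
indico db upgrade
indico db --all-plugins upgrade

# Hand over to whatever command the compose file specifies (e.g. the celery worker)
exec "$@"
```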
Thanks for getting back and answering the questions. Both manual and automatic DB upgrades would be fine for me; I was just wondering how to do this without starting Indico itself. Starting only the celery beat container, exec'ing into it and performing the manual upgrades there is fine. I did not realize that this is in fact the same source, and that here the upgrade process has to deviate a bit from the official documentation for a dedicated machine.
Another difference between the Dockerfile and the official install documentation: in the containerized setup, far fewer packages are installed into the image at build time. Nevertheless, I have not found any missing functionality yet.
@tomasr8 I've been trying out your PR's setup and it's been working pretty well, but every time I run `docker compose down && docker compose up -d`, the data in the instance gets wiped. I don't know if this is the intended behaviour, but it is unusual for Docker Compose setups with persistent data. Doing `docker compose restart` or `docker compose stop && docker compose start` doesn't cause that issue, though.
Thanks for the report! This was most likely because we were using an anonymous volume for the Postgres data. I switched to a named volume so the data should be persisted now :)
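For illustration, the fix boils down to something like this in the compose file (service and volume names are assumptions):

```yaml
# Hypothetical docker-compose.yml excerpt -- names are illustrative only.
services:
  postgres:
    image: postgres:15
    volumes:
      # A named volume is reattached when the container is recreated, so the
      # data survives `docker compose down && docker compose up -d`.
      # An anonymous volume (declaring only the container path) is not reused
      # by the recreated container, which is why the instance appeared empty.
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:
```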
I'd be glad to contribute towards this, but I lack an understanding of what the OpenShift layer does. Besides, is it basically going through all the steps detailed at https://docs.getindico.io/en/stable/installation/production/deb/nginx/ ? What could be done to add the SSL layer?
Hi @Kehino! You could help us by testing that this setup works for you as is, or if there's something missing :)
As for TLS, we don't include that here because that is going to largely depend on where you deploy Indico. If you go with the same nginx setup as in this PR, you should configure TLS in your nginx.conf as shown in the docs page you linked.
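For illustration, the TLS termination itself goes into `nginx.conf` as the linked docs show; on the compose side that typically also means publishing port 443 and mounting the certificates into the nginx container, roughly like this (all names and paths below are assumptions):

```yaml
# Hypothetical docker-compose.yml excerpt -- names and paths are illustrative
# assumptions, not copied from this PR.
services:
  nginx:
    image: nginx
    ports:
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs/indico.example.com.crt:/etc/ssl/indico.crt:ro
      - ./certs/indico.example.com.key:/etc/ssl/indico.key:ro
```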
The setup worked well locally for me! Any specific functionality to test? E.g. I didn't try the PDF generation from LaTeX.
I'll test a setup with my nginx reverse proxy and SSL.
Awesome! We're also interested to know whether this setup works and is easy to deploy in a production setting, i.e. on OpenShift/k8s, so if you have a way to test that, we'd love to know :)