Docker image: docker startup does not clear application cache
Bug Description
When updating the LinkAce Docker image (with a Docker volume mounted on /app/storage), the container startup process does not clear the application cache.
As a result, the LinkAce version displayed at the bottom of the pages stays at the old value and keeps alerting that an update is needed.
I suspect this stale version display is only the least serious of the issues a stale cache can cause.
How to reproduce
1. Install LinkAce 2.1.9 using Docker with a Docker volume mounted on /app/storage.
2. Browse the site a little as a logged-in user.
3. Update the image to 2.2.0.
4. Browse the site as a logged-in user. The version at the bottom of the pages stays at 2.1.9.
Expected behavior
The Docker image clears the cache at startup with php artisan cache:clear.
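For illustration, a minimal sketch of how such a startup step could look as an entrypoint wrapper; the wrapper itself and the wrapped startup command are assumptions, not the actual internals of the LinkAce image:

#!/bin/sh
# Hypothetical entrypoint wrapper that clears the cache before starting the app.
set -e
cd /app
# Drop cached data (including the cached version string) on every start.
php artisan cache:clear --no-interaction
# Hand over to the image's original startup command.
exec "$@"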
Logs
Screenshots
No response
LinkAce version
v2.2.0
Setup Method
Docker
Operating System
Linux (Ubuntu, CentOS,...)
Client details
No response
I have experimented with different automated actions directly in the container, but came to the conclusion that they create more problems than they solve. Currently, the update process is defined and documented, including the clearing of the cache, which must happen after running any database migrations. Therefore, I won't change the process at the moment.
If anyone is willing to spend time on it and finds a suitable solution, feel free to open a pull request.
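For reference, a minimal sketch of that ordering when updating manually, assuming standard Laravel commands; the exact steps in the official documentation may differ:

# Run database migrations first ...
php artisan migrate --force
# ... then clear the application cache, as described above.
php artisan cache:clear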
Watchtower can run an upgrade script after automatically updating a container. I don't know about Laravel, but most Symfony projects I've seen run migrations and cache warmup at container startup.
I'll look into it when I have the time. A possible approach would be to compare the software version against the version stored in the cache and trigger the cache clearing only when they differ. This would prevent issues when the cache is shared between several containers.
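A rough sketch of that idea at container startup, as a variation that stores a version stamp on the persistent volume instead of inspecting the cache contents; the stamp path and the way the image version is read are assumptions for illustration only:

#!/bin/sh
# Hypothetical startup check: clear the cache only when the image version changed.
set -e
cd /app

# Version shipped with the image (composer.json is just an example source here).
IMAGE_VERSION="$(php -r 'echo @json_decode(@file_get_contents("composer.json"))->version ?? "unknown";')"

# Version remembered from the previous run, stored on the /app/storage volume.
STAMP_FILE='/app/storage/.app-version'
CACHED_VERSION="$(cat "$STAMP_FILE" 2>/dev/null || echo 'none')"

if [ "$IMAGE_VERSION" != "$CACHED_VERSION" ]; then
    # New image detected: clear the cache and remember the new version.
    php artisan cache:clear --no-interaction
    echo "$IMAGE_VERSION" > "$STAMP_FILE"
fi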
Hello,
I've worked on a container post-update script for the LinkAce container.
I bind-mount an executable file with the following contents at /usr/local/bin/linkace-post-update:
#!/usr/bin/env ash
# /usr/local/bin/linkace-post-update
# Upgrade the LinkAce installation after the container image was updated.
set -o errexit
set -o nounset
set -o pipefail

# The artisan commands must run as the application user.
if [ "$(whoami)" != 'www-data' ]; then
    echo "Error: this upgrade script must run as the 'www-data' user." >&2
    exit 1
fi

if ! command -v php >/dev/null 2>&1; then
    echo "Error: php command not found." >&2
    exit 1
fi

if [ ! -e '/app/artisan' ]; then
    echo "Error: artisan command not found." >&2
    exit 1
fi

cd '/app'

# Migrate the database
php artisan migrate --no-interaction --isolated --step --force

# Clear all caches
php artisan optimize:clear --no-interaction
php artisan permission:cache-reset --no-interaction
php artisan settings:clear-cache --no-interaction

# Rebuild all caches
php artisan settings:discover --no-interaction
php artisan optimize --no-interaction
php artisan view:cache --no-interaction
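For completeness, this is how the script can be bind-mounted into the LinkAce service in docker-compose; the host path ./linkace-post-update is an assumption, and the file must be executable on the host:

services:
  linkace:
    volumes:
      - ./linkace-post-update:/usr/local/bin/linkace-post-update:ro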
The options on the migrate command (in particular --isolated) should prevent issues if the LinkAce container is hosted in a cluster.
As far as I can see, there should be no issue with running these commands at container startup.
And I add these labels to the LinkAce service for Watchtower:
services:
  linkace:
    labels:
      com.centurylinklabs.watchtower.enable: "true"
      com.centurylinklabs.watchtower.lifecycle.post-update: "/usr/local/bin/linkace-post-update"
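Note that Watchtower only runs lifecycle hooks such as post-update when they are explicitly enabled on the Watchtower side; a minimal sketch of that service, assuming the standard containrrr/watchtower image:

services:
  watchtower:
    image: containrrr/watchtower:latest
    environment:
      # Lifecycle hooks (pre/post-update commands) are opt-in.
      WATCHTOWER_LIFECYCLE_HOOKS: "true"
      # Only update containers that carry the enable label above.
      WATCHTOWER_LABEL_ENABLE: "true"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock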
Tip: I also add these labels for running cronjobs with Ofelia:
services:
  linkace:
    labels:
      ofelia.enabled: "true"
      ofelia.job-exec.linkace-linkace-cronjob.schedule: '@every 1m'
      ofelia.job-exec.linkace-linkace-cronjob.user: www-data
      ofelia.job-exec.linkace-linkace-cronjob.command: php artisan schedule:run
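These labels only take effect when an Ofelia daemon runs alongside and watches the Docker socket; a minimal sketch, assuming the mcuadros/ofelia image:

services:
  ofelia:
    image: mcuadros/ofelia:latest
    # Docker mode: jobs are discovered from the labels of other containers.
    command: daemon --docker
    depends_on:
      - linkace
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro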
Thank you for working on this. This definitely can be added to LinkAce directly. Will have a look later.