crontab-ui
Docker: run commands on host?
I found a version that runs on pi: donaldrich/crontab-ui
But when I add a simple command like `sudo apt-get update` or `apt-get update`, I get errors saying sudo and apt-get don't exist. My assumption is that it only runs these commands inside the container, which is kind of useless I think; is there a way to run these commands on the host?
You can try this: https://github.com/alseambusher/crontab-ui/issues/128#issuecomment-656668261
I'm working on this too.
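Another pattern that gets mentioned for this (not something crontab-ui does for you; the key path, user name, and host address below are all assumptions) is to have the job inside the container reach back to the host over SSH, so the command actually executes on the host:

```
# crontab entry inside the container: execute the command on the host via SSH
# 172.17.0.1 is the default docker0 gateway on Linux; adjust for your setup
0 4 * * * ssh -i /keys/id_ed25519 hostuser@172.17.0.1 'sudo apt-get update'
```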
- I have crontab-ui saving to my host's `/etc/cron.d`
- I've observed the supervisor in the docker image is running its own `crond`
- My host is not picking up the crontab change; the files don't conform to vixie cron (they're missing a user column). The name of the file I get (root) tells me it's meant to be placed in `/var/spool/cron/crontabs/`, but those files are not meant to be edited directly.
- Also, the permissions on the file don't match what I have on Ubuntu: the group should be `crontab`, and not whatever user crontab-ui is running as. Not that I think it matters; the perms don't grant groups any extra privileges.
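For reference, the missing user column is the main formal difference between the two crontab flavors; the entries below are illustrative, not from my setup:

```
# /etc/cron.d/* (vixie "system" crontab): minute hour dom month dow USER command
0 3 * * * root apt-get update

# /var/spool/cron/crontabs/<user> (per-user crontab): no user column
0 3 * * * apt-get update
```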
So to make this work for the host, do I:
- Mount `/var/spool/cron/crontabs/`?
- Make my own image that disables the crond? Is there an env var?
- Make my own supervisor config and volume it in?
Updates:
Curiously, editing my "root" crontab file seems to work: cron picked it up.
I mounted `/var/spool/cron/crontabs/` in my container, wrote the crontab with crontab-ui, and then stopped the container (to ensure only the host cron runs). It failed, as `/etc/crontabs` didn't exist (for logging), so all output was discarded. After creating that, cron ran my command successfully. I found that the env var `CRON_PATH` controls this path, so I've adjusted it to /var/spool...
In the end, I have:
- The Docker container controls the host crontab for root (the user crontab-ui runs as)
- The host cron runs the crontab and outputs to a directory mounted in crontab-ui
- Crontab-ui can see my logs
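The last two points rely on each job redirecting its output into the mounted directory; a sketch of such an entry (the log path here is an example, matching the data mount below):

```
# host crontab entry; the target directory is the one volume-mounted into crontab-ui
0 4 * * * apt-get update > /mnt/systems/crontabui/data/logs/apt-update.log 2>&1
```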
docker-compose.yml

```yaml
version: '2.3'
services:
  crontabui:
    image: alseambusher/crontab-ui
    environment:
      CRON_PATH: /var/spool/cron/crontabs
      CRON_DB_PATH: /mnt/systems/crontabui/data
    ports:
      - "192.168.1.10:8811:8000"
    volumes:
      - ./supervisord.conf:/etc/supervisord.conf
      # Must be identical in host os
      - /var/spool/cron/crontabs:/var/spool/cron/crontabs
      - /mnt/systems/crontabui/data:/mnt/systems/crontabui/data
```
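To sanity-check that the bind mount really is shared, the same files should be visible from both sides after saving a job; the commands below are a sketch:

```shell
docker-compose up -d
docker-compose exec crontabui ls -l /var/spool/cron/crontabs
ls -l /var/spool/cron/crontabs    # host view: same files should appear
```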
supervisord.conf

```ini
[supervisord]
nodaemon=true

#[program:crontab]
#command=crond -l 2 -f -c %(ENV_CRON_PATH)s
#stderr_logfile = /var/log/crontab-stderr.log
#stdout_logfile = /var/log/crontab-stdout.log

[program:crontabui]
command=node /crontab-ui/app.js
stderr_logfile = /var/log/crontabui-stderr.log
stdout_logfile = /var/log/crontabui-stdout.log
```
@LordMike TY for posting this to help out the Linux noobs (like me, lol).
I'm just now getting back to tackling this again, and tried your code almost verbatim, but this is what I'm getting when I try to start the container:
```
2021-03-29 22:11:46,641 INFO Set uid to user 0 succeeded
2021-03-29 22:11:46,642 INFO supervisord started with pid 1
2021-03-29 22:11:47,656 INFO spawned: 'crontabui' with pid 8
2021-03-29 22:11:48,025 INFO exited: crontabui (exit status 1; not expected)
2021-03-29 22:11:49,030 INFO spawned: 'crontabui' with pid 19
2021-03-29 22:11:49,415 INFO exited: crontabui (exit status 1; not expected)
2021-03-29 22:11:51,437 INFO spawned: 'crontabui' with pid 30
2021-03-29 22:11:51,790 INFO exited: crontabui (exit status 1; not expected)
2021-03-29 22:11:54,825 INFO spawned: 'crontabui' with pid 41
2021-03-29 22:11:55,216 INFO exited: crontabui (exit status 1; not expected)
2021-03-29 22:11:56,219 INFO gave up: crontabui entered FATAL state, too many start retries too quickly
```
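When supervisord loops like this, the actual error is usually in the program's own stderr log, at the path set in supervisord.conf (this assumes the container stays up because supervisord itself keeps running):

```shell
docker-compose exec crontabui cat /var/log/crontabui-stderr.log
```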
Here's my current stuff:
Compose:
```yaml
version: "3.8"
services:
  crontabui:
    image: alseambusher/crontab-ui
    hostname: crontabui
    environment:
      TZ: ${TZ}
      CRON_PATH: /var/spool/cron/crontabs
      CRON_DB_PATH: /opt/docker/configs/Crontab-UI/data
    ports:
      - 8000
    volumes:
      - /opt/docker/configs/Crontab-UI/supervisord.conf:/etc/supervisord.conf:ro
      # Must be identical in host OS
      - /var/spool/cron/crontabs:/var/spool/cron/crontabs:rw
      - /opt/docker/configs/Crontab-UI/data:/mnt/systems/crontabui/data:rw
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.labels.MainDaemon == true
      restart_policy:
        condition: any
    networks:
      - ocm
networks:
  ocm:
    external: true
    name: misc
```
supervisord (note: I added `user=root` as the log suggested):
```ini
[supervisord]
nodaemon=true
user=root

#[program:crontab]
#command=crond -l 2 -f -c %(ENV_CRON_PATH)s
#stderr_logfile = /var/log/crontab-stderr.log
#stdout_logfile = /var/log/crontab-stdout.log

[program:crontabui]
command=node /crontab-ui/app.js
stderr_logfile = /var/log/crontabui-stderr.log
stdout_logfile = /var/log/crontabui-stdout.log
```
Permissions:
I'm running Docker Desktop in WSL2. Do you see anything out of place? This is beyond my Linux abilities.
Does Docker Desktop support docker-compose 3.x versions? I run 2.x for all my stuff, as that was the recommended version for non-clustered setups; 3.x brings a lot of deploy/cluster logic.
To start with, I would ensure that your container even runs: comment out all volumes and the two CRON* environment variables, and see if it will run then. You could also completely clear it out (`docker-compose rm -sf`, plus removing any stray volumes) to ensure that nothing from previous runs affects the current one.
Once it can run and you can access it, gradually reintroduce the variables and volumes, and see when it stops working.
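A stripped-down compose file for that first step might look like this (a sketch; reintroduce pieces one at a time once it runs):

```yaml
version: "3.8"
services:
  crontabui:
    image: alseambusher/crontab-ui
    ports:
      - 8000
    # add environment (CRON_PATH, CRON_DB_PATH) and the volumes back one at a time
```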
I tried the above docker-compose from @LordMike, but have not had any success. I think it may be because the container runs as root while my cron jobs do not, and I have not figured out a way around this. I set the permissions on the user's cron job file to 777, and that did not work. I cannot specify the user in the supervisord.conf file, as that user does not exist in the container. I also tried setting `user: 1000:1000` in the compose file, but then the container does not start due to lack of permissions on container folders.
There does not seem to be a way to manage non-root cron jobs using the docker container, unless I am missing something?
How about some creative volume mappings? Like mounting `/var/spool/cron/crontabs/myuser` (host) to `/var/spool/cron/crontabs/root` (container)?
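In compose terms, that suggestion would be a single volume line (paths assumed from the posts above):

```yaml
volumes:
  # the host user's crontab appears to the container as root's crontab
  - /var/spool/cron/crontabs/myuser:/var/spool/cron/crontabs/root
```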
@LordMike That did the trick. Thanks!