django-q2
How to run a Django app and qcluster in separate Docker containers
Hi, I'm using docker-compose to containerize two services: the main app service, which queues tasks, and the qcluster service, which runs the cluster. I'm using the SQLite ORM broker, shared between the two services.
Docker-compose file:
version: '3.8'

services:
  appseed-app:
    container_name: appseed_app
    restart: always
    env_file: .env
    build: .
    volumes:
      - dbdata:/app
    networks:
      - db_network
      - web_network

  qcluster:
    container_name: qcluster
    restart: always
    env_file: .env
    command: python manage.py qcluster
    build: .
    volumes:
      - dbdata:/app
    networks:
      - db_network
      - web_network

  nginx:
    container_name: nginx
    restart: always
    image: "nginx:latest"
    ports:
      - "5085:5085"
    volumes:
      - ./nginx:/etc/nginx/conf.d
    networks:
      - web_network
    depends_on:
      - appseed-app

networks:
  db_network:
    driver: bridge
  web_network:
    driver: bridge

volumes:
  dbdata: {}
settings.py file:
Q_CLUSTER = {
    'name': 'mycluster',
    'workers': 4,
    'recycle': 50,    # Number of tasks a worker processes before recycling. Useful to release memory on a regular basis. Defaults to 500.
    'timeout': 1000,  # Number of seconds a worker may spend on a task before it is terminated.
    'retry': 1200,    # Must be larger than the time the longest task takes to complete, i.e. timeout must be less than retry, and all tasks must finish within the retry window.
    'queue_limit': 50,
    'label': 'Django Q2',
    'catch_up': False,
    'bulk': 10,
    'orm': 'default',
    'sync': False,
    'ack_failures': True,
    'ack_warning': False,
    'orm_check': True,
}
# Cache for django-q2 --> python manage.py createcachetable
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
        'LOCATION': 'my_cache_table',
    }
}
The problem: even though the cluster is set up in the qcluster container, it doesn't seem to be accessible/visible from the main app service. I'm not sure what I'm missing.
Thanks for your patience.
@AnanyaP-WDW That's correct. Django Q2 runs isolated from the main app (except for the connection to the database). I am not sure what you want to access here.
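To illustrate what that isolation means in practice: the only thing the two containers share is the database, so the app enqueues a task by writing a row through the broker, and the cluster picks it up by polling. A minimal sketch of the app side (the view and task names here are hypothetical, not taken from this issue):

```python
# A hedged sketch, not the reporter's code: the app container enqueues
# a task and later reads its state back, all via the shared database.
# "myapp.tasks.send_report" is a hypothetical task function.
from django.http import JsonResponse
from django_q.tasks import async_task, fetch

def start_report(request):
    # async_task() just inserts a row into the broker's queue table;
    # the qcluster container polls that table and executes the task.
    task_id = async_task("myapp.tasks.send_report", request.user.pk)
    return JsonResponse({"task_id": task_id})

def report_status(request, task_id):
    # fetch() reads the finished task back from the same database,
    # which is why this works across containers.
    task = fetch(task_id)
    if task is None:
        return JsonResponse({"status": "pending"})
    return JsonResponse({"status": "done", "success": task.success})
```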
@GDay Thanks for the response. It seems that the main app cannot access the created cluster, i.e. when I create a task in the main app, I can see the created task but not the cluster. When I access the cluster container, I can see the created cluster but not the task. TL;DR: how can I send tasks to the cluster container?
When you add new tasks, the cluster will pick them up by itself. You don't need to do anything for that. So I am not sure what the issue is here. Can you share how you create tasks?
This is how I'm creating tasks, inside a view:
async_task(run, var_a, var_b, group=job_id, hook=q_result_hook)
The thing is, when I run the main app without Docker, the queue works. But as soon as I split it into separate containers, the cluster refuses to pick up tasks.
Some more information:
- I've checked the network between the containers.
- I've checked that both services have access to the database.
- There is a cache table in the database.
- A valid cluster gets created in the qcluster container.
- A valid task is queued in the main app container.
I'm exploring using Redis instead of SQLite as the broker, or running both processes in one container with a process manager like supervisord.
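For reference, switching the broker to Redis means replacing the 'orm' key in Q_CLUSTER with a 'redis' key. A minimal sketch, assuming a redis service is added to docker-compose and is reachable from both containers at hostname redis:

```python
# settings.py -- sketch only; the hostname "redis" assumes a redis
# service was added to the compose file on the same network.
Q_CLUSTER = {
    'name': 'mycluster',
    'workers': 4,
    'timeout': 1000,
    'retry': 1200,
    'redis': {
        'host': 'redis',
        'port': 6379,
        'db': 0,
    },
}
```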
@AnanyaP-WDW SQLite uses a local file, so two containers will either not use the same database but two disjoint copies, or you use Docker volumes to make them share that file, in which case you rely on SQLite handling concurrent access to the database file properly. Neither option would be favorable to me; I'd personally use PostgreSQL instead, without much added complexity: have one dedicated container for PostgreSQL and have the other containers talk to that single PostgreSQL instance. Does that make sense? PS: It's not with Django Q2, but I'm applying that very idea at both https://github.com/hartwork/wnpp.debian.net/ and https://github.com/hartwork/jawanndenn .
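A sketch of the Django side of that setup, assuming the compose file gains a PostgreSQL service named db on the shared network and credentials come from the existing .env file (the hostname and variable names here are assumptions, not the reporter's setup):

```python
# settings.py -- sketch: point both containers at one PostgreSQL
# container instead of a shared SQLite file.
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('POSTGRES_DB', 'app'),
        'USER': os.environ.get('POSTGRES_USER', 'app'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD', ''),
        'HOST': 'db',  # the PostgreSQL service name in docker-compose
        'PORT': '5432',
    }
}
```

With this, the appseed-app and qcluster containers no longer need a shared volume for the database at all; both simply connect to the db service over the network, and PostgreSQL handles the concurrent access.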