
High CPU Usage

[Open] JoeHitchen opened this issue 1 year ago · 6 comments

I have a fairly standard Django-Q configuration (I believe) but I am running into prohibitively high CPU consumption.

Running a simple addition task scheduled every minute, I'm seeing CPU usage about 6x larger than I get for a Celery setup running the same task schedule (celery worker, celery beat w/ django-celery-beat, redis broker). I'm keen to use Django-Q since I have already worked with Celery and want to learn new tools, but my t2.micro can't handle the difference in CPU load.
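For context, the per-minute schedule described here can be registered with django-q's schedule() helper, roughly as below; the dotted task path myapp.tasks.add and its arguments are placeholders, not the actual project code:

from django_q.models import Schedule
from django_q.tasks import schedule

# Register a recurring task that runs roughly once a minute.
schedule(
    'myapp.tasks.add',               # dotted path to the task function (placeholder)
    2, 3,                            # positional arguments passed to the task
    schedule_type=Schedule.MINUTES,
    minutes=1,                       # interval between runs
    repeats=-1,                      # repeat indefinitely
)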

My configuration is:

Q_CLUSTER = {
    'orm': 'default',
    'timeout': 45,
    'catch_up': False,
}

Are there any settings I should be tweaking to improve the CPU usage?

JoeHitchen avatar Mar 12 '23 14:03 JoeHitchen

Strange, never had this issue.

Are the spikes consistent? In other words, does this happen right from the start or only after a few hours/days? Can you also check what your ram usage looks like (mostly curious if you are maxing that out)?

GDay avatar Mar 12 '23 14:03 GDay

It seems to just be baseline usage, and I've observed it on two different machines. Monitoring with docker stats, most of the time it's around 3% compared to maybe 0.5% for the other stack. There doesn't seem to be much variability: it's mostly steady with occasional dips, but even then rarely below 1.5%.

Memory usage is comparable between the two and not up against machine limits on either (I believe, haven't tested on the second today).

Pulling up the CPU history for the server, you can see the jump in usage from the old baseline when I introduced Django-Q and that it's hitting the steady-state usage maximum. I then performed an emergency switch to Celery because I was expecting some load on the website, and since then usage has mostly returned to its original levels, despite the extra Celery stack.

[CPU usage graph]

I have pushed the image I'm using for the testing right now to joehitchen/fantasy-bumps:tasks

JoeHitchen avatar Mar 12 '23 17:03 JoeHitchen

Any suggestions for what I should try to bring it down?

JoeHitchen avatar Mar 18 '23 19:03 JoeHitchen

If you don't have a strong constraint on task delay, you can increase the poll setting (default 200ms) to reduce the idle load. You can also increase guard_cycle (default 500ms), but the impact will most likely be more limited.

For instance, in our project we use a poll value of 30 seconds.
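Applied to the configuration shared above, that would look roughly like this; guard_cycle is optional and the values here are only examples:

Q_CLUSTER = {
    'orm': 'default',
    'timeout': 45,
    'catch_up': False,
    'poll': 30,         # seconds between broker polls (default 0.2)
    'guard_cycle': 5,   # seconds between guard cycles (default 0.5)
}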

msabatier avatar Mar 18 '23 21:03 msabatier

Thank you! I have monitoring tasks I'd want to run roughly every minute but I certainly don't need polling every 0.2s. I'll give these a go when I get the chance and report back.

JoeHitchen avatar Mar 20 '23 12:03 JoeHitchen

Nice solution @msabatier !

I had the same issue with CPU usage, and a poll value of 30 seconds (more than enough for my use case) solved it.

[Screenshot, 2023-07-13 19:15]

MiKatre avatar Jul 13 '23 17:07 MiKatre