Pruning extra revisions fails [3.7.45]:
What happened?
Hi, we see about 300 errors a day in the queue manager. They relate to different entries, but the error message is always the same.
Not sure what the problem is.
Thanks David
Craft CMS version
3.7.45
PHP version
No response
Operating system and version
No response
Database type and version
No response
Image driver and version
No response
Installed plugins and versions
Plugins in use: Cloudflare 1.1
Try searching through storage/logs/queue.log (and any other queue.log.X files) for that error. Does it show up? If so please post the full stack trace that follows it.
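That search can be done with a one-liner like the following (a sketch, assuming Craft's default storage/logs layout and that you run it from the project root):

```shell
# Search the primary queue log and any rotated copies for the lock error.
# -n prints line numbers; -A 20 prints the 20 lines that follow each match,
# which is where a stack trace would appear. "|| true" keeps the command
# from exiting non-zero when no log files match yet.
grep -n -A 20 "Unable to acquire a lock" storage/logs/queue.log storage/logs/queue.log.* 2>/dev/null || true
```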
Yes it's there:
2022-06-27 17:48:31 [-][-][-][info][craft\queue\QueueLogBehavior::beforeExec] [1976457] Pruning extra revisions (attempt: 1, pid: 21062) - Started
2022-06-27 17:48:34 [-][-][-][error][craft\queue\QueueLogBehavior::afterError] [1976457] Pruning extra revisions (attempt: 1, pid: 21062) - Error (time: 3.084s): Unable to acquire a lock for file "/var/www/craft/storage/logs/cloudflare.log".
Or do you need the whole file? It's 8 MB.
I just need a stack trace. Is there one that follows the log you posted? If not try searching for other instances of the error message.
It always looks just like this:
2022-06-26 12:13:19 [-][-][-][info][craft\queue\QueueLogBehavior::beforeExec] [1971338] Pruning extra revisions (attempt: 1, pid: 15663) - Started
2022-06-26 12:13:22 [-][-][-][error][craft\queue\QueueLogBehavior::afterError] [1971338] Pruning extra revisions (attempt: 1, pid: 15663) - Error (time: 3.083s): Unable to acquire a lock for file "/var/www/craft/storage/logs/cloudflare.log".
Just to let you know: after restarting, everything works again and the errors are gone. So it looks like a timing issue...
@DavidKabelitz do you have multiple queue runners processing the queue or just one? Is this in a load-balanced environment?
@angrybrad yes we are running on kubernetes with multiple pods and also a load balancer
@DavidKabelitz How are the queue runner(s) set up? Are you using Craft's default web-based queue runner, so that each pod ends up being a queue runner? Or is there a dedicated pod just for daemonizing Craft to process the queue?
Yes, it's the default web-based queue runner. In that case we've solved it: we removed the web-based queue runner, because we already had scheduled cron jobs running:
su www-data -c "/usr/local/bin/php /var/www/craft/craft queue/run"
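For anyone replicating this, a crontab entry wrapping that command might look like the following. This is a sketch: the every-minute schedule and the root crontab are assumptions, while the command itself is the one quoted above.

```
# Hypothetical root crontab entry: every minute, process any pending
# queue jobs as www-data using the command from this thread.
* * * * * su www-data -c "/usr/local/bin/php /var/www/craft/craft queue/run"
```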
So for a better, cleaner setup, would a dedicated pod for the queue be the solution?
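For reference, one common pattern for that is a dedicated Deployment with a single replica that daemonizes the queue via `craft queue/listen`. The sketch below is hypothetical, not official guidance: the image name, labels, and paths are placeholders, and only the PHP/craft paths come from this thread.

```yaml
# Hypothetical Kubernetes Deployment running Craft's queue daemon in its
# own pod, separate from the web pods. queue/listen waits for new jobs
# and runs them as they arrive, so no cron schedule is needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: craft-queue-runner
spec:
  replicas: 1                 # a single runner avoids lock contention
  selector:
    matchLabels:
      app: craft-queue-runner
  template:
    metadata:
      labels:
        app: craft-queue-runner
    spec:
      containers:
        - name: queue
          image: example.com/craft-app:latest   # same image as the web pods
          command:
            - /usr/local/bin/php
            - /var/www/craft/craft
            - queue/listen
```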