
Pruning extra revisions fails [3.7.45]

DavidKabelitz opened this issue 3 years ago • 10 comments

What happened?

Hi, we're seeing about 300 errors a day in the queue manager. They relate to different entries each time, but the error message is always the same.

Not sure what the problem is.

Thanks David

[Screenshot attached: 2022-06-27, 13:40:04]

Craft CMS version

3.7.45

PHP version

No response

Operating system and version

No response

Database type and version

No response

Image driver and version

No response

Installed plugins and versions

Plugins in use: Cloudflare 1.1

DavidKabelitz avatar Jun 27 '22 11:06 DavidKabelitz

Try searching through storage/logs/queue.log (and any other queue.log.X files) for that error. Does it show up? If so please post the full stack trace that follows it.
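For anyone searching their own logs the same way, `grep` with trailing context is one option. This is only a sketch: the demo below writes a sample log line (taken from this thread) plus a hypothetical stack-trace line to a temp file, so the command is self-contained; on a real install you'd point it at `storage/logs/queue.log*` instead.

```shell
# Demo of the search on a sample log file (real logs live under
# storage/logs/ in a Craft install; paths here are illustrative).
mkdir -p /tmp/queue-log-demo
cat > /tmp/queue-log-demo/queue.log <<'EOF'
2022-06-27 17:48:34 [-][-][-][error][craft\queue\QueueLogBehavior::afterError] [1976457] Pruning extra revisions (attempt: 1, pid: 21062) - Error (time: 3.084s): Unable to acquire a lock for file "/var/www/craft/storage/logs/cloudflare.log".
#0 /var/www/craft/vendor/example/Example.php(120): hypothetical stack frame
EOF
# -A 5 prints five lines of context after each match, so any stack trace
# that follows the error line is included in the output.
grep -A 5 "Unable to acquire a lock" /tmp/queue-log-demo/queue.log
```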

brandonkelly avatar Jun 27 '22 19:06 brandonkelly

Yes, it's there:

2022-06-27 17:48:31 [-][-][-][info][craft\queue\QueueLogBehavior::beforeExec] [1976457] Pruning extra revisions (attempt: 1, pid: 21062) - Started
2022-06-27 17:48:34 [-][-][-][error][craft\queue\QueueLogBehavior::afterError] [1976457] Pruning extra revisions (attempt: 1, pid: 21062) - Error (time: 3.084s): Unable to acquire a lock for file "/var/www/craft/storage/logs/cloudflare.log".

DavidKabelitz avatar Jun 28 '22 13:06 DavidKabelitz

Or do you need the whole file? It's 8 MB.

DavidKabelitz avatar Jun 28 '22 13:06 DavidKabelitz

I just need a stack trace. Is there one that follows the log you posted? If not try searching for other instances of the error message.

brandonkelly avatar Jun 28 '22 15:06 brandonkelly

It's always just like that:

2022-06-26 12:13:19 [-][-][-][info][craft\queue\QueueLogBehavior::beforeExec] [1971338] Pruning extra revisions (attempt: 1, pid: 15663) - Started
2022-06-26 12:13:22 [-][-][-][error][craft\queue\QueueLogBehavior::afterError] [1971338] Pruning extra revisions (attempt: 1, pid: 15663) - Error (time: 3.083s): Unable to acquire a lock for file "/var/www/craft/storage/logs/cloudflare.log".

DavidKabelitz avatar Jun 28 '22 15:06 DavidKabelitz

Just to let you know: after restarting, it works again and all of these errors are gone. So it looks like a timing issue...

DavidKabelitz avatar Jun 28 '22 15:06 DavidKabelitz

@DavidKabelitz do you have multiple queue runners processing the queue or just one? Is this in a load-balanced environment?

angrybrad avatar Jun 28 '22 17:06 angrybrad

@angrybrad Yes, we're running on Kubernetes with multiple pods and a load balancer.

DavidKabelitz avatar Jun 29 '22 12:06 DavidKabelitz

@DavidKabelitz How are the queue runner(s) set up? Are you using Craft's default web-based queue runner, so each pod ends up being a queue runner? Or is there a dedicated pod just for daemonizing Craft to process the queue?

angrybrad avatar Jun 29 '22 19:06 angrybrad

Yes, it's the default web-based queue runner. In the end we solved it by removing the web-based queue runner, because we already had scheduled cron jobs running:

su www-data -c "/usr/local/bin/php /var/www/craft/craft queue/run"
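For readers wanting to replicate this, the command above would typically live in a crontab entry along the lines of the sketch below (the once-a-minute schedule and the paths are assumptions, not confirmed from this thread):

```shell
# Illustrative crontab entry: run any pending queue jobs once a minute as
# the web user. Paths match the command quoted in this thread.
* * * * * su www-data -c "/usr/local/bin/php /var/www/craft/craft queue/run" >/dev/null 2>&1
```

Disabling Craft's web-based runner alongside this is done with the `runQueueAutomatically` setting in `config/general.php`; a dedicated worker could instead daemonize Craft's long-running `php craft queue/listen` command.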

So for a better, cleaner setup, would a dedicated pod for the queue be the solution?

DavidKabelitz avatar Jun 30 '22 16:06 DavidKabelitz