
Memory Issue?

Open · ackerchez opened this issue 6 years ago · 3 comments

Hey all, when I install this plugin on my Laravel 5.5 AWS Elastic Beanstalk worker, I see that even the simplest job hitting the queue maxes out 100% of the memory on my worker environment. Does anyone know why that would happen?

When I say the simplest job, I mean a job that gets dispatched through SQS and uses the worker to log in the DB that it was touched.
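
Roughly something like this, simplified (class and table names here are illustrative, not my actual code):

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\DB;

class TouchLogJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function handle()
    {
        // The only work the job does: record in the DB that it ran.
        DB::table('job_touches')->insert(['touched_at' => now()]);
    }
}

Dispatched with TouchLogJob::dispatch() onto the SQS queue.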

Thanks!

ackerchez avatar Feb 27 '18 11:02 ackerchez

Have you tried destroying all variables at the end of your jobs? Simply unset them:

unset($this->yourVarX);
unset($this->yourVarY);
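
In context, at the end of handle(), that looks something like this (sketch; class and property names are placeholders):

<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;

class YourJob implements ShouldQueue
{
    public $yourVarX;
    public $yourVarY;

    public function handle()
    {
        // ... your job logic using $this->yourVarX / $this->yourVarY ...

        // Drop the references so PHP can reclaim the memory before
        // the worker moves on to the next job.
        unset($this->yourVarX);
        unset($this->yourVarY);
    }
}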

tajhulfaijin avatar Apr 16 '18 09:04 tajhulfaijin

Hello,

I have a similar issue with my beanstalk worker. I have tried many times to pinpoint an issue in my code, our business logic, the queries, etc., but I can't figure out why my worker always climbs to 100% memory.

  • After a server reboot, everything runs fine for anywhere from half an hour to a few days. But at some point, my worker hits 100% memory and the server crashes "silently": no tasks are processed anymore, and SSH starts lagging until it stops responding entirely. I have to fully restart the EC2 instance.
  • The strange thing is that, from start to "crash", the tasks sent to SQS are identical: each operates on a different row of our DB, but runs the same logic.
  • Right after a reboot, I can send hundreds of those tasks and the worker processes them correctly, using 20-30% of memory at most.

Here is the htop output at the time of the crash: [screenshot: htop]

  • we see FOREGROUND processes eating up all the memory. CPU usage is low, which is consistent with SQS showing that no tasks are being processed

[screenshot: SQS console, messages in flight]

  • strangely, SQS says 6 messages are being processed, but according to my app log (the jobs log every row they process, for debugging purposes), nothing is actually happening. I suspect hanging jobs are part of the issue, but why would they eat up all the memory, and why would they work correctly after a server reboot, at least for a while?

[screenshot: worker process list]

  • I also don't know why there are more than 4 FOREGROUND processes, as my configuration allows 4 HTTP connections at most

I properly unset variables in all my queued jobs, as you advised, unfortunately without any effect on the issue.
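
For reference, memory can be logged after every job via Laravel's queue events, so I can at least see whether usage creeps up across jobs or spikes inside one (rough sketch; register it in a service provider's boot() method):

<?php

use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Queue;

// Log current and peak memory after each processed job.
Queue::after(function ($event) {
    Log::info('Job processed', [
        'job'         => $event->job->resolveName(),
        'memory'      => memory_get_usage(true),      // current allocation
        'peak_memory' => memory_get_peak_usage(true), // high-water mark
    ]);
});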

Do you have any idea what else I could try or test to pinpoint the issue?

Thanks a lot.

mtx-z avatar Jan 24 '22 16:01 mtx-z

My issue was not related to SQS or this package. The memory consumption was caused by a Laravel CloudWatch log driver with a large default batch size (see https://github.com/maxbanton/cwh/issues/7#issuecomment-1025236597), which fills up memory on long-running processes.
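
For anyone hitting the same thing, the fix is to pass a much smaller batch size when building the handler. A minimal sketch of a custom-channel factory, assuming maxbanton/cwh's CloudWatch handler (whose fifth constructor argument is the batch size, defaulting to 10000) and illustrative config keys:

<?php

namespace App\Logging;

use Aws\CloudWatchLogs\CloudWatchLogsClient;
use Maxbanton\Cwh\Handler\CloudWatch;
use Monolog\Logger;

class CloudWatchLoggerFactory
{
    public function __invoke(array $config)
    {
        $client = new CloudWatchLogsClient($config['sdk']);

        // A small batch size flushes log entries to CloudWatch
        // frequently instead of buffering thousands of them in
        // memory on a long-running worker process.
        $handler = new CloudWatch(
            $client,
            $config['group'],   // log group name
            $config['stream'],  // log stream name
            14,                 // retention (days)
            25                  // batch size -- the default is 10000
        );

        return new Logger('cloudwatch', [$handler]);
    }
}

Wired up in config/logging.php as a channel with 'driver' => 'custom' and 'via' => CloudWatchLoggerFactory::class.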

mtx-z avatar Jan 31 '22 18:01 mtx-z

Marking as stale. If any further package-specific issues arise that point to these problems, let us know by opening a new issue.

fylzero avatar Jul 09 '23 04:07 fylzero