laravel-aws-worker

Laravel Job `failed` and `Queue::failing()` support?

Open zellkz opened this issue 7 years ago • 8 comments

How does this package handle Laravel Job's failed() method and Queue::failing()? https://laravel.com/docs/5.5/queues#dealing-with-failed-jobs

Is this not supported? Do I need to create my own failed-job handling instead of using Laravel's built-in failed-job logic?

zellkz avatar Sep 17 '18 20:09 zellkz

Any update on this? Wondering about the same thing - can jobs be sent to Laravel's failed-jobs table instead of being moved to a "dead" queue on AWS?

aromka avatar Oct 02 '19 19:10 aromka

The README should also be updated, since FailedJobException no longer exists - it was removed at some point (because it was unused).

d3radicated avatar Mar 18 '20 01:03 d3radicated

When I used the queues auto-generated by Beanstalk, a dead letter SQS queue was created and failed jobs were pushed there. But when I use an already-created SQS queue, I don't have any dead letter queue, so I can't debug failed jobs: I only get job-timeout messages in my Laravel logs...

So with custom SQS queues, is the only way to manage this to handle failures ourselves, with custom logic in each of our jobs?

Thanks

mtx-z avatar May 06 '20 14:05 mtx-z

@mtx-z this is controlled by AWS EB, not by this package. The AWS daemon hits our endpoint with the payload and, if it doesn't get a valid (200) response, moves the job to the dead letter queue. There is no way we can control this, I think. Can't you just manually select a dead letter queue?

dusterio avatar May 06 '20 23:05 dusterio

@zellkz @aromka failed jobs are already handled by Elastic Beanstalk: if a request returns a non-OK (non-200) code, EB puts the job in the dead letter queue.

Moving jobs to a local, Laravel-specific dead queue sounds a bit like a hack? Wouldn't it be better to make `php artisan queue:retry` work with Amazon's dead letter queue instead and fully utilise what they provide? The dead letter queue is an analog of Laravel's `failed_jobs` table, after all.
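A minimal sketch of what such a `queue:retry`-style redrive against Amazon's dead letter queue could look like, assuming a boto3-style SQS client (only the standard `receive_message`/`send_message`/`delete_message` calls); the function name and queue URLs are hypothetical, this is not part of this package:

```python
def redrive_messages(sqs, dlq_url: str, source_url: str, limit: int = 10) -> int:
    """Move up to `limit` messages from the dead letter queue back onto the
    source queue, i.e. retry them. `sqs` is a boto3 SQS client (or any object
    exposing the same receive/send/delete methods). Returns messages moved."""
    moved = 0
    while moved < limit:
        resp = sqs.receive_message(QueueUrl=dlq_url, MaxNumberOfMessages=1)
        messages = resp.get("Messages", [])
        if not messages:
            break  # DLQ drained, nothing left to retry
        for msg in messages:
            # Re-enqueue the original payload, then remove it from the DLQ.
            sqs.send_message(QueueUrl=source_url, MessageBody=msg["Body"])
            sqs.delete_message(QueueUrl=dlq_url, ReceiptHandle=msg["ReceiptHandle"])
            moved += 1
    return moved

# Usage (requires AWS credentials; queue URLs are placeholders):
#   import boto3
#   redrive_messages(boto3.client("sqs"),
#                    "https://sqs.eu-west-1.amazonaws.com/123456789012/queue-2",
#                    "https://sqs.eu-west-1.amazonaws.com/123456789012/queue-1")
```

Since the client is passed in, the logic can be exercised against a stub without touching AWS.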

dusterio avatar May 06 '20 23:05 dusterio

@dusterio with more testing, I was able to:

  • use a custom SQS queue (queue 1, not generated by Beanstalk)
  • manually create another dead letter queue (queue 2)
  • configure queue 1's dead letter settings to push messages to queue 2 on failure
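For reference, attaching a dead letter queue to an existing SQS queue is done through the queue's `RedrivePolicy` attribute. A sketch with boto3 (the queue URL and ARN are placeholders, and `build_redrive_policy` is just an illustrative helper name):

```python
import json


def build_redrive_policy(dlq_arn: str, max_receive_count: int = 5) -> str:
    """Build the JSON string SQS expects for the RedrivePolicy attribute.
    After max_receive_count failed receives, SQS moves the message to the DLQ."""
    return json.dumps({
        "deadLetterTargetArn": dlq_arn,
        "maxReceiveCount": str(max_receive_count),
    })


def attach_dead_letter_queue(source_queue_url: str, dlq_arn: str) -> None:
    """Point an existing queue's redrive policy at a dead letter queue."""
    import boto3  # deferred so the pure helper above works without the SDK
    boto3.client("sqs").set_queue_attributes(
        QueueUrl=source_queue_url,
        Attributes={"RedrivePolicy": build_redrive_policy(dlq_arn)},
    )

# Usage (requires AWS credentials; substitute your queue 1 URL and queue 2 ARN):
#   attach_dead_letter_queue(
#       "https://sqs.eu-west-1.amazonaws.com/123456789012/queue-1",
#       "arn:aws:sqs:eu-west-1:123456789012:queue-2")
```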

But with this configuration, I'm not able to run `php artisan queue:failed` to work on the failed messages, because it's SQS that POSTs messages to us. I would need to set queue 2 to POST to the same endpoint on my application, but in that case the job would immediately fail again (message goes to queue 1, fails, is pushed to queue 2, which POSTs to our app, where it fails again for the same reason, theoretically). What I would need is:

  • to be able to retry the failed job when I want
  • and therefore: to be able to decide when the dead letter queue (queue 2) will POST jobs to my application
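The inspection half of this (a rough analog of `php artisan queue:failed`) could be a non-destructive peek at the dead letter queue, again assuming a boto3-style SQS client; the function name is illustrative and nothing here is part of this package:

```python
def list_failed_jobs(sqs, dlq_url: str, max_messages: int = 10) -> list:
    """Peek at the bodies of messages sitting in the dead letter queue.
    Nothing is deleted: each message reappears in the DLQ once its
    visibility timeout expires, so this is a read-only listing."""
    resp = sqs.receive_message(
        QueueUrl=dlq_url,
        MaxNumberOfMessages=min(max_messages, 10),  # SQS caps one receive at 10
        VisibilityTimeout=30,  # hide messages briefly while we inspect them
    )
    return [m["Body"] for m in resp.get("Messages", [])]
```

A retry would then be a deliberate move of a chosen message back to queue 1, rather than queue 2 POSTing to the worker endpoint on its own schedule.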

When working with the classic failed_jobs table, I decide when Laravel should retry the failed job: once I've fixed the issue ^^.

Maybe I'm missing something? Thx

mtx-z avatar May 07 '20 11:05 mtx-z

Any update or suggestions on this?

canast02 avatar Nov 04 '21 10:11 canast02

Any updates or suggestions?

un-code-dev avatar Nov 12 '21 15:11 un-code-dev

Any updates or suggestions?

mcandylab avatar Jun 18 '23 17:06 mcandylab

I would suggest switching to Laravel Horizon + Redis (ElastiCache). You can actually monitor what's going on with your queues, you don't need separate worker servers (just scale your regular AWS fleet if needed), and you don't need any third-party libs. It works like magic on AWS EB with no issues, and if you're already using Redis for caching, switching is even easier.

aromka avatar Jun 18 '23 22:06 aromka