Handling exceptions raised when sending exceptions to Sentry
We had a serious pileup of exceptions today. Sentry started returning 429 (Too Many Requests) errors, which of course triggered exceptions in Guzzle, which were then put in the queue for Raven to send to Sentry, with the original exception's data stored inside. When those failed to send with more 429 errors, each new 429 was queued with the old 429 stored inside. This snowballed until Sentry started returning 413 (Request Entity Too Large) errors, which were in turn put in the queue, the payloads growing exponentially until the Redis store ran out of space, causing yet more errors.
I don't know whether this would be considered a bug in Raven or not, but it seems to me that it would be best to ignore (perhaps specific, perhaps all) client errors that come from attempting to send data to Sentry.
Do you have any suggestions on how I can do this?
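Something along these lines is what I have in mind — a minimal sketch, assuming Guzzle 4+ (where a 4xx response throws `GuzzleHttp\Exception\ClientException`); `sendToSentry` and the `$send` callable are made up for illustration:

```php
<?php

use GuzzleHttp\Exception\ClientException;

// Hypothetical wrapper (names made up for illustration): drop the
// event on any 4xx response from Sentry instead of re-queueing it,
// so a failed send can never create a new, larger queue entry.
function sendToSentry(callable $send, array $event)
{
    try {
        $send($event);
    } catch (ClientException $e) {
        // 4xx from Sentry (429 Too Many Requests, 413 Request Entity
        // Too Large, etc.): retrying the same payload will not help,
        // and queueing this exception is what caused the snowball.
        // Discard the event and move on.
    }
}
```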
This can be solved by adding the job back into the queue with a delay. That way the errors are eventually added to Sentry. #17 adds this for queue jobs.
I wanted to be a little smarter about it, since you can quickly end up with a loop that just fills the queue, but it's good enough for now.
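Roughly the shape of it — a sketch only, with illustrative names rather than the actual code from #17, using Laravel's `$job->release()` / `$job->attempts()` queue API:

```php
<?php

// Sketch of the delayed-retry approach (Laravel 4 style queue handler;
// names are illustrative, not the actual code from #17).
class SendToSentryJob
{
    private $client;

    // $client is assumed to expose a send() method that throws on
    // any HTTP error from the Sentry API.
    public function __construct($client)
    {
        $this->client = $client;
    }

    public function fire($job, array $data)
    {
        try {
            // $data holds the captured event payload.
            $this->client->send($data);
            $job->delete();
        } catch (Exception $e) {
            if ($job->attempts() < 5) {
                // Back off for 60 seconds instead of hammering a
                // rate-limited API with ever-larger payloads.
                $job->release(60);
            } else {
                // Give up after a few tries so the queue cannot fill
                // with undeliverable events.
                $job->delete();
            }
        }
    }
}
```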
Looking at Raven's own client, it seems it simply discards the error if Sentry returns an error from the API:
https://github.com/getsentry/raven-php/blob/master/lib/Raven/Client.php#L620 https://github.com/getsentry/raven-php/blob/master/bin/raven#L66
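i.e. something like this pattern (paraphrased for illustration, not the actual raven-php source):

```php
<?php

// Paraphrase of the discard behaviour in the linked raven-php code:
// a failed send is recorded on the client rather than thrown or
// re-queued, so sending can never raise a new exception.
class DiscardingClient
{
    private $lastError;

    public function send(array $event)
    {
        try {
            $this->post($event);
            $this->lastError = null;
        } catch (Exception $e) {
            // Remember the failure so callers can check it
            // (getLastError-style), then drop the event.
            $this->lastError = $e->getMessage();
        }
    }

    public function getLastError()
    {
        return $this->lastError;
    }

    private function post(array $event)
    {
        // Transport details omitted; assume this throws on any
        // HTTP error from the Sentry API.
    }
}
```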
I took the relevant bit from your pull request, @pulkitjalan, and put it here: https://github.com/rcrowe/Raven/pull/23 -- hopefully whatever is holding your pull request back isn't also holding this part of it back.
+1 I experienced the exact same problem tremby is describing. Please accept his PR.
+1 I've also hit the same issue.
+1 same issue
I have zero idea why I didn't merge this :cry: I'll sort it
Sorry, any update on this? I am still using the fork from tremby.