
Reserving job fails when haproxy client timeout is exceeded

Open lashd opened this issue 12 years ago • 0 comments

Hi, I am looking into setting up beanstalkd and Backburner in an HA context.

To protect against a beanstalkd instance going down I have multiple instances fronted by haproxy.

My test is using the Simple worker.

Whilst executing the work_one_job method in worker.rb, Backburner attempts to reserve a job without a timeout. When there are no jobs on a particular tube, the connection is kept open until one arrives.
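
For context, here is roughly what a bounded reserve looks like against beanstalkd. This is only a sketch using the beaneater gem's `tubes.reserve(timeout)` (API as in its 1.x releases), not Backburner's actual internals; the address and tube name are made up for illustration.

```ruby
require 'beaneater'

# Connect and watch a tube (address and tube name are illustrative only).
beanstalk = Beaneater.new('localhost:11300')
beanstalk.tubes.watch!('backburner.worker.queue.demo')

begin
  # Reserve with an explicit timeout (in seconds) instead of blocking
  # indefinitely, so the call returns before an intermediate proxy can
  # drop the idle connection.
  job = beanstalk.tubes.reserve(5)
  puts "got job #{job.id}: #{job.body}"
  job.delete
rescue Beaneater::TimedOutError
  # No job arrived within the timeout; a worker loop would simply
  # reserve again here.
end
```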

When connecting to beanstalkd directly, the connection is held open for as long as it takes for a job to arrive. However, haproxy terminates the connection once its configured client timeout is exceeded.
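
For reference, the terminations come from haproxy's idle timeouts. With a TCP frontend along these lines (all values and addresses are only examples), an idle reserve is cut off once the client/server timeout expires:

```
defaults
    mode    tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

listen beanstalkd
    bind *:11300
    balance roundrobin
    server bs1 10.0.0.1:11300 check
    server bs2 10.0.0.2:11300 check
```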

At this point the exception is caught, but a further exception is thrown because the job variable is nil. Unfortunately the fix is not a one-liner: the TCPSocket connection held in a class variable is also broken, so subsequent retries would fail as well.
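
One possible shape for a fix, offered only as a sketch of the direction rather than a patch: treat the cached connection as disposable, and on a dropped socket discard it and reconnect before retrying. The helper names, error list, and retry policy below are hypothetical.

```ruby
require 'beaneater'

MAX_RETRIES = 3

# Build a fresh connection and watch the tube we care about
# (address, tube name, and helper are hypothetical).
def build_connection
  conn = Beaneater.new('localhost:11300')
  conn.tubes.watch!('backburner.worker.queue.demo')
  conn
end

# Reserve a job, reconnecting if the cached connection has been torn down
# (for example by a proxy enforcing an idle timeout).
def reserve_with_retry
  attempts = 0
  begin
    @connection ||= build_connection
    @connection.tubes.reserve(5)
  rescue Beaneater::TimedOutError
    nil # no job available within the timeout; the caller can loop
  rescue Beaneater::NotConnected, Errno::ECONNRESET, Errno::EPIPE, IOError
    # The socket was dropped: throw away the stale connection so the next
    # attempt reconnects instead of reusing a dead TCPSocket.
    @connection.close rescue nil
    @connection = nil
    attempts += 1
    retry if attempts < MAX_RETRIES
    raise
  end
end
```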

What do you think the solution should be to make connections to beanstalkd more resilient when operating in this context? Should these terminations be handled? I would like to contribute to the project, but I'd like to know your thoughts before I spend too much time going in the wrong direction.

Thanks for your help, lashd

Backburner looks great by the way.

lashd · Jan 26 '13 23:01