R12 (Exit timeout) - in newrelic/3.1 branch
It looks like nginx is not exiting within the shutdown window, which leads to this error (see the sketch after the log excerpt):
2013-02-21T23:13:39+00:00 heroku[web.1]: Error R12 (Exit timeout) -> At least one process failed to exit within 10 seconds of SIGTERM
2013-02-21T23:13:39+00:00 heroku[web.1]: Stopping remaining processes with SIGKILL
2013-02-21T23:14:32+00:00 heroku[web.1]: Process exited with status 137
2013-02-21T23:14:33+00:00 heroku[web.1]: State changed from up to down
2013-02-21T23:15:20+00:00 heroku[router]: at=error code=H21 desc="Backend connection refused" method=GET path=/ host=*********de fwd="************" dyno=web.1 queue=0ms wait=3ms connect=4ms service= status=503 bytes=
...
2013-02-21T23:19:23+00:00 heroku[router]: at=error code=H21 desc="Backend connection refused" method=GET path=/ping ...
(later)
2013-02-21T23:22:33+00:00 heroku[web.1]: State changed from up to down
2013-02-21T23:22:33+00:00 heroku[web.1]: State changed from up to down
2013-02-21T23:22:33+00:00 heroku[web.1]: State changed from down to starting
2013-02-21T23:23:44+00:00 heroku[router]: at=error code=H20 desc="App boot timeout" method=GET path=/ping...
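For context: Heroku sends SIGTERM to every process in the dyno and SIGKILLs anything still alive 10 seconds later. If the buildpack's boot script starts nginx and php-fpm as background children without forwarding that signal, you get exactly this R12. Below is a minimal sketch of the kind of trap handling I would expect - it is an illustration, not the actual buildpack code, and the flags and process handling are assumptions:

```bash
#!/usr/bin/env bash
# Sketch of a dyno boot script that forwards SIGTERM to its children.

shutdown() {
  # nginx treats TERM as a fast shutdown; php-fpm also exits on TERM.
  kill -TERM "$NGINX_PID" "$FPM_PID" 2>/dev/null
  wait        # reap both children before exiting
  exit 0
}
trap shutdown TERM INT

php-fpm --nodaemonize &     # run php-fpm without detaching
FPM_PID=$!
nginx -g 'daemon off;' &    # likewise keep nginx in the foreground
NGINX_PID=$!

wait                        # block here; the trap fires on SIGTERM
```

If the trap is missing (or the children double-fork into daemons), SIGTERM hits only the parent shell and nginx lingers until the SIGKILL.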
I had to "heroku ps:restart" the app by hand to solve the problem ;-( I'll keep an eye on this issue - maybe it's something more serious, because I recently switched to this buildpack and never had issues like this before with the default php/apache buildpack.
What happened that made nginx shut down in the first place?
The dyno is not used very much. NewRelic was sending its pings, so I don't think the (single) dyno was idling.
My first idea was that the dyno was cycled: "Dynos are cycled at least once per day, or whenever the dyno manifold detects a fault in the underlying hardware and needs to move your dyno to a new physical location. Both of these things happen transparently and automatically on a regular basis and are logged to your application logs."
But the last cycle logged was hours before the R12 (2013-02-21 01:02:24+00:00 heroku web.1 - - Cycling), and there was no "Cycling" event in the log just before the R12.
Before the fateful "Error R12" I saw this:
2013-02-21 23:12:35+00:00 heroku router - - at=info method=GET path=/ping host=****** fwd="*********us-west-2.compute.amazonaws.com, **********/NX" dyno=web.1 queue=0 wait=0ms connect=1ms service=2ms status=200 bytes=14
2013-02-21 23:12:36+00:00 app web.1 - - GET /ping 200 - 0.574 512
2013-02-21 23:12:43+00:00 heroku web.1 - - State changed from up to down
2013-02-21 23:12:46+00:00 heroku web.1 - - State changed from up to down
I don't know what caused the "State changed from up to down".
A more complete log can be found in this secret gist: https://gist.github.com/larsschenk/9f1edd60c6ae9d6a0e62 (will be deleted in a few hours)
I've tested this a couple of times and can only reproduce it on the newrelic/3.1 branch. Does the same issue happen in the develop branch too?
I had the problem on two different dynos. I thought it actually happened on the develop branch as well, but the problem did not return for a couple of days, so I'm not sure it really was the develop branch - maybe I had switched back from newrelic/3.1 to develop but the slug compiler never ran (so the newrelic/3.1 branch was still running instead of the develop branch).
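One way to sanity-check this (though it only shows what the next compile will use, not what was baked into the current slug) is via the standard Heroku CLI; the app name below is a placeholder:

```bash
# What will the *next* compile use?
heroku config:get BUILDPACK_URL --app my-app

# Was the most recent release an actual deploy or just a config change?
heroku releases --app my-app | head
```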
Just to be clear, changing the buildpack branch takes more than adjusting BUILDPACK_URL. Changing the variable alone does nothing other than restart the app.
For the slug compiler to kick in, it requires: 1) the BUILDPACK_URL to be updated; 2) the app's git repo to be updated (with new commits, i.e. HEAD pointing to a different commit); and 3) those updates pushed to the Heroku app's master branch.
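Concretely, the sequence might look like this (the repo URL is an example, and the empty commit is just one way to force HEAD to move so the push isn't a no-op):

```bash
# 1) Point BUILDPACK_URL at the desired branch via the #fragment.
heroku config:set BUILDPACK_URL=https://github.com/iphoting/heroku-buildpack-php-tyler.git#develop

# 2) Move HEAD so the push is not a no-op.
git commit --allow-empty -m "recompile slug against develop buildpack"

# 3) Push to the Heroku app's master branch to trigger the slug compiler.
git push heroku master
```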
That being said, I shall give the develop branch a few more tests with a more complex PHP app over the weekend, but I don't foresee any problems.