drupal-with-nginx
php-fpm crashing every few days
Good Morning,
I understand this is probably not related to this config, but you seem to be a wealth of information on nginx / php-fpm and Drupal. Perhaps you have seen this problem before...
Every few days, at approximately the same times (08:04 and 20:04), the php-fpm children on all of my web servers increase dramatically until max_children is reached. The servers then come under heavy load, and syslog shows kernel messages from the OOM killer. php-fpm then appears to crash/restart all processes, and the site loads fine again for another few days.
This cycle repeats every few days at the same times, and it happens whether I set max_children to 10, 50, 100, etc. Whatever the value, processes keep spawning until php-fpm crashes. Some kind of memory leak or infinite loop?
Site traffic is also minimal at these times, and the site can handle 200% more traffic at other peak times without problems.
Versions: Ubuntu 11.10 (3.0.0-12-server x86_64), nginx 1.0.5-1, PHP 5.3 (5.3.6-13ubuntu3.3), Drupal 6.
Any ideas would be well appreciated.
Regards,
Alun.
The OOM killer is a kernel mechanism that recovers memory from what is thought to be a runaway process.
Check this: http://lwn.net/Articles/317814/
This could be a php-fpm config issue or a bug.
Try editing your upstream and removing keepalive. Edit your nginx servers and comment out fastcgi_keep_conn.
This fixed it for me.
You might want to look here. This is probably due to the high number of cached connections in the FCGI upstream keepalive. Lower the value to 1 and see how it goes.
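For reference, the shape of the config being discussed is roughly this (a sketch; the upstream name and socket path are illustrative, not necessarily what this repo ships):

```nginx
## Upstream to the php-fpm pool. The keepalive directive controls how many
## idle FastCGI connections each nginx worker caches to php-fpm.
upstream phpcgi {
    server unix:/var/run/php-fpm.sock;
    ## Keep only one idle connection per worker; remove the line entirely
    ## to disable upstream keepalive altogether.
    keepalive 1;
}

location ~ \.php$ {
    ## Comment this out if you disable keepalive above; it only makes
    ## sense together with cached upstream connections.
    #fastcgi_keep_conn on;
    fastcgi_pass phpcgi;
}
```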
I have checked my nginx setup and fastcgi_keep_conn is commented out :(
`#fastcgi_keep_conn on;`
Would it still be worth changing `#keepalive 5;` to `keepalive 1;`?
I think there's something in your site that triggers this. Have you checked the logs (the php-fpm logs, I mean)?
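Since the spikes happen at fixed times, it may also be worth enabling php-fpm's slow log to catch whichever request is hanging. A sketch of the relevant pool settings (pool name, paths, and values here are assumptions; adjust for your setup):

```ini
; In the pool config (e.g. /etc/php5/fpm/pool.d/www.conf)

; Log a PHP backtrace for any request running longer than 10s.
slowlog = /var/log/php-fpm/www-slow.log
request_slowlog_timeout = 10s

; Kill runaway requests before they pile up and trigger the OOM killer.
request_terminate_timeout = 120s

; Recycle each worker after N requests so a slow leak cannot grow unbounded.
pm.max_requests = 500
```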
Hi there, thanks again for helping me with the other issue. I believe I'm having the same problem. I have one pool and my keepalive set to 1. I can run your config with no issues on port 80, but if I start Varnish and have Nginx listen on a backend port, it starts spawning a ton of pool processes. I changed the keepalive to 1 and commented out fastcgi_keep_conn, but it still spawns a lot of processes and starts giving me "connection reset by peer" errors.
Any experience with Varnish in front of your configuration? Is it even worth it to have Varnish?
Interesting! I am also using Varnish in front of Nginx!
Are you having the same issue?
I can't help you there. What I can help you with is getting Nginx to work as a LB/reverse proxy replacing Varnish, with advantages IMHO. Unless you use very abstruse stuff like compressed ESI.
I certainly would appreciate that. How would this setup have advantages over Varnish, in your opinion?
I would also be very interested to learn.
Ok. Post your VCL file somewhere, so that I can suggest a reverse proxy config for Nginx.
Here's my default.vcl contents:
```vcl
backend default {
  .host = "127.0.0.1";
  .port = "8001";
  .connect_timeout = 600s;
  .first_byte_timeout = 300s;
  .between_bytes_timeout = 10s;
}

sub vcl_recv {
  if (req.request != "GET" &&
      req.request != "HEAD" &&
      req.request != "PUT" &&
      req.request != "POST" &&
      req.request != "TRACE" &&
      req.request != "OPTIONS" &&
      req.request != "DELETE") {
    /* Non-RFC2616 or CONNECT which is weird. */
    return (pipe);
  }

  if (req.http.X-Forwarded-For) {
    // Append the client IP
    set req.http.X-Real-Forwarded-For = req.http.X-Forwarded-For + ", " + regsub(client.ip, ":.*", "");
    unset req.http.X-Forwarded-For;
  } else {
    // Simply use the client IP
    set req.http.X-Real-Forwarded-For = regsub(client.ip, ":.*", "");
  }

  if (req.request != "GET" && req.request != "HEAD") {
    /* We only deal with GET and HEAD by default */
    return (pass);
  }

  // Remove has_js and Google Analytics cookies.
  set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|__utma_a2a|has_js)=[^;]*", "");

  // To users: if you have additional cookies being set by your system (e.g.
  // from a javascript analytics file or similar) you will need to add VCL
  // at this point to strip these cookies from the req object, otherwise
  // Varnish will not cache the response. This is safe for cookies that your
  // backend (Drupal) doesn't process.
  //
  // Again, the common example is an analytics or other Javascript add-on.
  // You should do this here, before the other cookie stuff, or by adding
  // to the regular-expression above.

  if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|js|css)\??.*$") {
    unset req.http.Cookie;
  }

  // Remove a ";" prefix, if present.
  set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
  // Remove empty cookies.
  if (req.http.Cookie ~ "^\s*$") {
    unset req.http.Cookie;
  }

  if (req.http.Authorization || req.http.Cookie) {
    /* Not cacheable by default */
    return (pass);
  }

  // Skip the Varnish cache for install, update, and cron
  if (req.url ~ "install\.php|update\.php|cron\.php|sitemap\.xml|robots\.txt|phpmyadmin") {
    return (pass);
  }

  // Normalize the Accept-Encoding header
  // as per: http://varnish-cache.org/wiki/FAQ/Compression
  if (req.http.Accept-Encoding) {
    if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
      # No point in compressing these
      remove req.http.Accept-Encoding;
    } elsif (req.http.Accept-Encoding ~ "gzip") {
      set req.http.Accept-Encoding = "gzip";
    } else {
      # Unknown or deflate algorithm
      remove req.http.Accept-Encoding;
    }
  }

  // Let's have a little grace
  set req.grace = 10s;

  return (lookup);
}

sub vcl_hash {
  if (req.http.Cookie) {
    hash_data(req.http.Cookie);
  }
}

// Strip any cookies before an image/js/css is inserted into cache.
sub vcl_fetch {
  if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|js|css)\??.*$") {
    set beresp.ttl = 7200s;
    set beresp.grace = 10m;
    set beresp.http.expires = beresp.ttl;
    set beresp.http.age = "0";
    unset beresp.http.set-cookie;
  }
}

sub vcl_deliver {
  if (obj.hits > 0) {
    set resp.http.X-Cache = "HIT";
    set resp.http.X-Cache-Hits = obj.hits;
  } else {
    set resp.http.X-Cache = "MISS";
  }
  return (deliver);
}

sub vcl_error {
  // Let's deliver a friendlier error page.
  // You can customize this as you wish.
  set obj.http.Content-Type = "text/html; charset=utf-8";
  synthetic {"
    Page Could Not Be Loaded
    We're very sorry, but the page could not be loaded properly. This should
    be fixed very soon, and we apologize for any inconvenience.
    Debug Info:
    Status: "} + obj.status + {"
    Response: "} + obj.response + {"
    XID: "} + req.xid + {"
  "};
  return (deliver);
}
```
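For comparison, the static-asset part of the VCL above can be covered on the nginx side with something along these lines (a sketch, not the repo's exact config):

```nginx
## Serve static assets directly with a 2h expiry, roughly matching the
## 7200s TTL in the VCL above. nginx serves these files from disk, so
## there is no Set-Cookie header to strip in the first place.
location ~* \.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|js|css)$ {
    expires 2h;
    add_header Cache-Control "public";
    access_log off;
}
```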
I'm hardly a Varnish connoisseur. But most of this stuff is already done by the Nginx config (80% out of the box):
- No compression of images.
- HEAD and GET are the only allowed methods in proxy_cache.
- Sending the real IP in a header.
You're caching static assets for 2h (7200s) right?
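In nginx terms, the points above map to directives like these (a sketch; the upstream and cache zone names are made up for illustration):

```nginx
location / {
    proxy_pass http://drupal_backend;
    ## Only GET and HEAD responses are cached (this is also the default).
    proxy_cache_methods GET HEAD;
    proxy_cache assets;
    ## Pass the real client IP to the backend in a header.
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```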
Yes, due to this being a development box. In production it would need max settings.
Does nginx store static files in RAM like Varnish does?
No, you don't need to. It's quite optimized in terms of IO. If you want that, you can put it on a tmpfs partition.
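If you do want the cache in RAM, a tmpfs mount under the cache path is enough. A sketch (paths, sizes, and the zone name are just examples):

```nginx
## System side, an /etc/fstab entry such as:
##   tmpfs /var/cache/nginx tmpfs defaults,size=256m 0 0
## Then point the proxy cache at the tmpfs-backed directory:
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=assets:10m
                 max_size=200m inactive=2h;
```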
BTW, are you receptive to testing the two setups and benchmark each approach?
I may be able to test also; the Varnish config is very similar to mine, though we are using Varnish for load-balancing the web servers too. But I am sure it's easy enough to do with nginx as well.
Absolutely.
Ok. This evening (I'm on CET) I'll create a `varnish-cracker` branch and push the first attempt at getting it right.
Perfect.
This will be extremely interesting. I have a project that I will be migrating to a new server, and if this works out, I will gladly replace Varnish.
I will be setting up a production environment soon, and how this works out will dictate how I go forward as well.
Ok, the first commit is in. I have to test it. Usually I set up different upstreams for videos, CSS/JS, images, etc.; this is a simplification. It's the load balancer (or mere reverse proxy, if you have a single upstream) that caches all the stuff. For the moment there's caching of CSS/JS and images.
Still missing: testing and setting up the real IP header, as well as verifying all the other headers. Note that the server that forwards to fpm now runs bound to the loopback as a security measure. It runs on port 8081. If you're not load balancing, just delete all superfluous upstreams in backends_web.conf.
Here's the branch: https://github.com/perusio/drupal-with-nginx/tree/varnish-cracker
Hopefully I'll finish it tomorrow. You can try it if you feel adventurous. Take a peek on it to see where things are going.
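The shape of that setup, in sketch form (port 8081 is from the comment above; the upstream name is illustrative, not necessarily what the branch uses):

```nginx
## backends_web.conf: the pool of Drupal backends the front proxy
## balances over. With a single entry this is a plain reverse proxy.
upstream drupal_web {
    server 127.0.0.1:8081;
}

## The backend server that talks to php-fpm listens on the loopback
## only, so it is not reachable from outside the host.
server {
    listen 127.0.0.1:8081;
    ## ...the usual fastcgi_pass to php-fpm goes here...
}
```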
A little bit more patience :( I hope to have a mo' better Varnish without Varnish by the weekend. Let's see.
Much appreciated!!
The first working version is in. There are several ways to do "Varnish" in Nginx. I've chosen to front the site with a proxy that intercepts all static calls and uses a cache. It also uses a proxy cache, thus allowing for two levels of caching. You can just comment out the `include sites-available/microcache_long_proxy.conf;` line on the `/` and `/imagecache/` locations if you don't want it. In fact, in that case it's simpler to run a dedicated server just for static assets, to which the main server proxies. The setup now available is mostly useful to people who need the load-balancing part. No touching/obeying of headers from the upstream is in place; do tell me if you need such a thing.
Note that the logic of having a proxy cache + FastCGI cache is to chain the caches; this is most useful in setups with load balancing: one longer cache on the proxy and a shorter one on the FastCGI side.
proxy_cache(T) -> fastcgi_cache(t), where T > t, T and t being the respective cache validities.
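As a concrete sketch of the chained validities (zone names and times are illustrative only):

```nginx
## On the load balancer: the longer proxy cache, T = 1h.
proxy_cache proxy_zone;
proxy_cache_valid 200 301 1h;

## On the backend: the shorter FastCGI microcache, t = 10s.
fastcgi_cache fcgi_zone;
fastcgi_cache_valid 200 301 10s;
```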
Try it out and report back.
BTW I've added ETag support. It requires Nginx >= 1.3.3.
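The ETag part maps to the `etag` directive that appeared in nginx 1.3.3 (it defaults to on there; shown explicitly in this sketch):

```nginx
location ~* \.(css|js|jpg|png|gif)$ {
    ## Emit an ETag so conditional requests can get 304 Not Modified.
    etag on;
    expires 2h;
}
```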
@Fidelix @paradoxni @hmoen @moehac @priyadarshan Anyone ventured with this? I.e., using Nginx in lieu of Varnish? Feedback is needed to progress :) Thanks.
I will be testing this weekend. Trying to put out fires on a project :)