413 Request Entity Too Large
Checklist

- Have you pulled and found the error with the jc21/nginx-proxy-manager:latest docker image? Yes
- Are you sure you're not using someone else's docker image? Yes
- If having problems with Let's Encrypt, have you made absolutely sure your site is accessible from outside of your network? Yes, but not related
Describe the bug

File uploads trigger 413 Request Entity Too Large. Setting client_max_body_size 100m; in the advanced config has no effect and the problem persists.
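For reference, this is the directive as entered in the proxy host's Advanced tab (100m is illustrative; 0 would disable the size check entirely):

```nginx
# Edit Proxy Host -> Advanced -> Custom Nginx Configuration
# nginx defaults to 1m, so anything bigger gets a 413 unless this is raised
client_max_body_size 100m;
```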
To Reproduce

Steps to reproduce the behavior:

- Upload a large file on a website going through the proxy; get a 413
- Set an appropriate client_max_body_size and try to upload again
- See a 413
Expected behavior

client_max_body_size would take effect and allow the upload, as it does with regular nginx.
I'm having the same issue.
Same issue when trying to upload an ISO to Proxmox.
Hi, I have found a solution: you can enter client_max_body_size in the custom file server_proxy.conf, and then it works. You can read here where and how to create the file.
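For anyone hunting for that file: on a stock Docker install the custom include is typically /data/nginx/custom/server_proxy.conf inside the container (per NPM's custom-config convention; adjust for your volume mapping), and it gets included in every proxy host's server block. A minimal sketch:

```nginx
# /data/nginx/custom/server_proxy.conf
# Included in every proxy host's server block; 0 removes the body-size limit
client_max_body_size 0;
```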
At the Edit Proxy Host window, click Custom locations, click the gear button, and set the following attributes to an appropriate value:
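In case it helps, this is roughly what would go in that per-location config box (the value is illustrative):

```nginx
# Edit Proxy Host -> Custom locations -> gear icon
# Applies only to this location block (e.g. an upload endpoint)
client_max_body_size 100m;
```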

I went into the NPM docker files and added the server_proxy.conf file with this line: "client_max_body_size 0;"
It still doesn't work. I also tried adding it to seafhttp like the picture above, and that didn't work either. I still get a 413 error.
I have the same issue.
I have now tried this in all sorts of ways, and it never works. Nothing seems to change NPM's behavior. I thought it was something to configure on Seafile's side, but I tried it with different WebDAV servers and all of them return 413. There is no way to fix this; it seems like something is hardcoded in NPM.
Nothing works here. I'll have to switch to HAProxy, because I have a client waiting.
At the Edit Proxy Host window, click Custom locations, click the gear button, and set the following attributes to an appropriate value

This does not work at all.
Having the same issue.
I found out the issue. It's not nginx, it's Cloudflare: they have a 100 MB limit. Nothing you can do.
I don't have Cloudflare. I have different clients with their own hosting, so it's nginx.
Oh OK! Then please let us know if you come up with anything. I've tried everything I could think of.
It works when set in the Advanced tab.

What version do you have? Because it doesn't work for me. I have the latest one.
I installed it in Docker; the image is jc21/nginx-proxy-manager:latest.
Didn't work for me.
None of the above solutions worked for me.
```js
app.use(BodyParser.json({ limit: '50mb' }));
app.use(BodyParser.urlencoded({ limit: '50mb', extended: true }));
```
This solved my problem. Koa reported the same error, which led me to mistake it for nginx.
Could you please let me know where I can do this?
Could you explain the steps to reproduce?
```js
import express from "express";
import BodyParser from "body-parser";

const app = express();

// Raise body-parser's request size limits (default is 100kb)
app.use(BodyParser.json({ limit: '50mb' }));
app.use(BodyParser.urlencoded({ limit: '50mb', extended: true }));
```
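Note that this raises the limit on the application side (body-parser's default is only 100kb); it is separate from nginx's client_max_body_size, and every layer between the client and the app has to allow the payload.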
It works when set in the Advanced tab
This absolutely worked for me. Make sure you don't have any other servers between the client and your application (CF, another nginx server between NPManager and your app for hosting static files, etc.) and that your application itself is configured to allow large requests.
Worked for me. I realised I had another proxy server in front of NPM, so I updated client_max_body_size for that server too.
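To illustrate the point about chained proxies (all names and ports here are hypothetical): every nginx hop enforces its own limit, so the smallest value in the chain wins unless each one is raised:

```nginx
# Hypothetical front proxy sitting in front of NPM
server {
    listen 443 ssl;
    server_name files.example.com;

    # Must match (or exceed) the client_max_body_size set in NPM,
    # otherwise this hop returns the 413 before NPM ever sees the upload
    client_max_body_size 100m;

    location / {
        proxy_pass https://npm-host:443;
    }
}
```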
It works!
I pushed a giant (13 GiB) Docker container image to Harbor via nginx-proxy-manager as a reverse proxy. It worked.
This has never worked for me.
I have the same issue, setting client_max_body_size 0; in the advanced tab does not help. How can I debug this?
@jeffryjdelarosa @ymoona are you guys sure that there's nothing else between your client and the destination, including the destination itself, that would be blocking too large requests?
If you don't know, and don't have an easy way of telling, you can try to use the HTTP TRACE method combined with the Max-Forwards header to get requests from the intermediate servers - that of course depends on the intermediate servers honouring TRACE and properly decrementing/reflecting requests back based on the header value.
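As a concrete probe (assuming curl, and that the hops honour TRACE): `curl -v -X TRACE -H 'Max-Forwards: 0' https://your-host/` should get the first hop to reflect the request back; incrementing Max-Forwards to 1, 2, and so on walks the chain one server at a time.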
No, there's nothing else; it's my own server. I've tested it without the proxy, over VPN directly to the IP, and nginx still blocks the large request. What's more, some requests take longer and nginx stops or cancels them.