nginx-proxy-manager
Basically, the application is broken.
I just wasted more than one hour trying to update my proxies.
I kept getting the same error again and again: "Could not delete file". This is absolutely basic functionality that a proxy manager should offer, right? I would expect a proxy manager to be able to edit proxies.
But it simply can't do this basic thing. That, added to the even worse handling of SSL certificates, led me to the conclusion that I need a better solution.
So to anyone reading this: look for other solutions (Caddy, Traefik, whatever). Do not waste time here.
And to the developers, please do the world a favor and archive this project.
[1/24/2024] [5:07:50 PM] [Nginx ] › ⬤ debug Deleting file: /data/nginx/proxy_host/14.conf
[1/24/2024] [5:07:50 PM] [Global ] › ⬤ debug CMD: /usr/sbin/nginx -t -g "error_log off;"
[1/24/2024] [5:07:50 PM] [Nginx ] › ⬤ debug Deleting file: /data/nginx/proxy_host/14.conf
[1/24/2024] [5:07:50 PM] [Nginx ] › ⬤ debug Could not delete file: {
"errno": -2,
"code": "ENOENT",
"syscall": "unlink",
"path": "/data/nginx/proxy_host/14.conf"
}
[1/24/2024] [5:07:50 PM] [Nginx ] › ⬤ debug Deleting file: /data/nginx/proxy_host/14.conf.err
[1/24/2024] [5:07:50 PM] [Nginx ] › ⬤ debug Could not delete file: {
"errno": -2,
"code": "ENOENT",
"syscall": "unlink",
"path": "/data/nginx/proxy_host/14.conf.err"
}
[1/24/2024] [5:07:50 PM] [Global ] › ⬤ debug CMD: /usr/sbin/nginx -t -g "error_log off;"
[1/24/2024] [5:07:51 PM] [Nginx ] › ℹ info Reloading Nginx
[1/24/2024] [5:07:51 PM] [Global ] › ⬤ debug CMD: /usr/sbin/nginx -s reload
[1/24/2024] [5:07:57 PM] [Global ] › ⬤ debug CMD: /usr/sbin/nginx -t -g "error_log off;"
[1/24/2024] [5:07:57 PM] [Nginx ] › ⬤ debug Deleting file: /data/nginx/proxy_host/14.conf
[1/24/2024] [5:07:57 PM] [Nginx ] › ⬤ debug Could not delete file: {
"errno": -2,
"code": "ENOENT",
"syscall": "unlink",
"path": "/data/nginx/proxy_host/14.conf"
}
Checklist
- Have you pulled and found the error with the jc21/nginx-proxy-manager:latest docker image?
- Yes: image: jc21/nginx-proxy-manager:2.11.1
- Are you sure you're not using someone else's docker image?
- Yes
- Have you searched for similar issues (both open and closed)?
- Yes. There are open ones, and it seems that the team does not care.
Describe the bug
Try to edit a proxy. It is impossible. Check the logs; see the result above.
Nginx Proxy Manager Version
image: jc21/nginx-proxy-manager:2.11.1
To Reproduce
Steps to reproduce the behavior:
- Go to '...'
- Click on '....'
- Scroll down to '....'
- See error
Expected behavior
Maybe it should work?
Screenshots
Operating System
Additional context
Same experience, downgraded to 2.9.22. Seems to work better, but I'm considering migrating away as NPM requires more maintenance than plain NGINX :/
I wish I could say I only wasted one hour on this. Even on 2.9.22 I get the same error. Maybe it's because I'm using the ARM version.
I can't even add new hosts now. It started when I mistakenly requested a certificate before changing the DNS entry, which led to an error. From then on, whatever I do, even when using completely different domains and trying to add them, it all fails with "Internal error" and the message shown above.
2.10.4 doesn't seem to have this issue. Perhaps try reverting to that version in the meantime.
I looked at the code: the error is logged (I would simply add a missing "if" to check whether the file to be deleted actually exists), but this does not cause any other issue. It's not aborting because of that; it just logs the error.
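To illustrate what I mean, here is a minimal sketch of that missing guard, assuming a Node-style fs call (the actual helper and file layout in the NPM code will differ):

```js
const fs = require('fs');

// Hypothetical helper: only attempt the unlink if the file is actually
// there, so a missing config file is not reported as an error.
function deleteConfigFile(filePath) {
  if (fs.existsSync(filePath)) {
    fs.unlinkSync(filePath);
  }
}

deleteConfigFile('/data/nginx/proxy_host/14.conf');
```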
The problem I faced is that this tool is extremely optimistic, in the sense that it does not actually care about any errors returned by Let's Encrypt and doesn't give any more details about them. For me the real reason was this: "Some challenges have failed.", which came after this error. It also logged "Saving debug log to /tmp/letsencrypt-log/letsencrypt.log". When looking at that log I found out that, for whatever reason, Let's Encrypt used the old server IP and not the new one, even though the NS was not changed. I had changed the IP in both NS, and when querying them with "dig" the correct new IP was returned; in WHOIS those NS had been there for years. But that's another issue.
So for me the main problem is that this tool does this:
- always creates the host entry
- when Let's Encrypt fails it only shows "Internal error", which is actually wrong, because it is an external error caused by Let's Encrypt. The tool would have to parse the Let's Encrypt result and show the corresponding message to the user, but it appears that was never implemented; that's what I call extremely optimistic (see the sketch after this list)
- when you try too often, Let's Encrypt itself locks you out for one hour for the given host, which is also something you will only see in the log
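To make the second point concrete, here is a rough sketch of what surfacing the certbot output could look like. I'm assuming the certificate request shells out to certbot (as the letsencrypt log path above suggests); the function and argument names are made up for illustration.

```js
const { execFile } = require('child_process');

// Hypothetical wrapper: instead of collapsing every certbot failure into
// a generic "Internal error", pass the relevant part of certbot's own
// output back to the caller so the UI can show it.
function requestCertificate(certbotArgs, callback) {
  execFile('certbot', certbotArgs, (err, stdout, stderr) => {
    if (err) {
      const output = stdout + '\n' + stderr;
      if (output.includes('Some challenges have failed')) {
        return callback(new Error(
          "Let's Encrypt could not validate the domain (challenge failed). " +
          'See /tmp/letsencrypt-log/letsencrypt.log for details.'
        ));
      }
      return callback(new Error('certbot failed:\n' + stderr));
    }
    return callback(null, stdout);
  });
}
```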
@jucajuca Do you also get the error "Some challenges have failed" after this error? If you do, you should look at the letsencrypt.log.
I feel your pain. Lately I did some research for a replacement and I think this one looks promising. I'll just leave the links in case it might be an option for you guys.
Website: https://zoraxy.arozos.com/
Github: https://github.com/tobychui/zoraxy
@blackoutland
I somewhat agree with your conclusions. I got sidetracked by the error too, although at least something popped up.
The logs for almost everything are empty; docker compose logs gave me small hints.
In my case I made a configuration error myself, but it still let me request certs and set up the host. Only when looking at the cert itself did I see that DNS had failed. There's nothing in the logs anywhere, and you don't get a warning. My bad, I had set the docker NPM container port to 8080 on the host, and that's a no-no ;).
Anyway, it works now, but yeah, the logging and letting the user know can obviously be better.
Having the same issue, but my letsencrypt renewal went through smoothly. The error is as uninformative as it can get, because it tries to delete a file that doesn't even exist in my container (it is /data/nginx/proxy_host/3.conf in my case).
Although I do like the UI, I am also thinking of switching back to either plain NGINX or jwilder-nginx-proxy, as I am running everything dockerized.
Update: I could solve it by deleting the problematic entry and recreating it. From my (user's) point of view nothing changed, but it seems that something got corrupted under the hood.
Still, it is very sad, because when debugging is almost impossible, you can only hope to never encounter bugs.
It has worked well for me so far, but I too face the same problem. I'd hate to migrate, though.
Hello, I have the same error as the original post, but it seems to only affect hosts that have custom locations in use. After deleting the custom locations the proxy host works fine again.
I cannot work without custom locations. For me it is a key feature. I need to proxy the /uploads folder to another service. Is there any workaround to make this work without custom locations?
I cannot work without custom locations. For me it is a key feature. I need to proxy the /uploads folder to another service. Is there any workaround to make this work without custom locations?
Put uploads on a different subdomain? Or otherwise use plain nginx (https://hub.docker.com/_/nginx) and create your own nginx proxy manager. You can re-use some of the configs inside the nginx-proxy-manager container. There's no magic involved here; it's just a nice graphical UI over a vanilla nginx.
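If you go the plain nginx route, the custom location you described is only a few lines of config. A sketch, with all hostnames and ports as placeholders:

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder domain

    # everything else goes to the main application
    location / {
        proxy_pass http://app:3000;
    }

    # proxy only /uploads to a second service
    location /uploads {
        proxy_pass http://uploads-service:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```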
As a workaround, I downgraded to version 2.10.4 with docker-compose. I hope there are no corruptions in the backend data ??
As a workaround, I downgraded to version 2.10.4 with docker-compose. I hope there are no corruptions in the backend data ??
Remember this is usually a directly internet-facing service. I wouldn't do that (at least not for too long) for security reasons. But that's just personal preference I guess.
Remember this is usually a directly internet-facing service. I wouldn't do that (at least not for too long) for security reasons. But that's just personal preference I guess.
Thx. Currently, I have no other choice. Just knowing that backend data is not corrupted is OK. I will move to Caddy in the meantime.
Remember this is usually a directly internet-facing service. I wouldn't do that (at least not for too long) for security reasons. But that's just personal preference I guess.
Thx. Currently, I have no other choice. Just knowing that backend data is not corrupted is OK. I will move to Caddy in the meantime.
In that case I can understand. Regarding your question about the backend data, I really cannot answer as I'm not a dev on this project, but I wouldn't expect it to be corrupted. I did the exact same rollback to 2.10.4 by only changing the latest tag to 2.10.4 and then running docker compose up -d. It worked fine for the week I used it like that. After that I migrated to plain nginx.
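For anyone wanting to do the same rollback: it really is just pinning the image tag in docker-compose.yml and running docker compose up -d again. The service name, ports and volumes below are from a typical setup, so adjust them to yours:

```yaml
services:
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:2.10.4   # pinned instead of :latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "81:81"
      - "443:443"
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```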
@jc21 can you archive the application? it is not maintained and is not working anymore.
can you archive the application? it is not maintained and is not working anymore.
Just out of curiosity: why? Is it unmaintained? The last commits were two weeks ago, and it is quite popular software. There are 100M+ pulls on Docker Hub and the last update was 2 days ago.
@bkilinc have you noticed the last 50+ issues? Many point to the same problem. One issue says that basically the last X versions are not working. The application is simply not working, and evidently the new commits are not fixing the issues. So yes, you can commit and commit; that does not mean it is a working or quality application. It just means that someone is writing some sort of code. It could be an update to the README.
Pulls... I can easily pull 10 images a day. A k8s cluster will pull hundreds if not thousands a day...
I strongly recommend looking for other solutions. I also worked with traefik and never experienced such horrible issues.
I strongly recommend looking for other solutions. I also worked with traefik and never experienced such horrible issues.
Thx. I was just asking. I will place my bet on Caddy. I don't trust anything with a GUI, especially for basic services.
Facing the same issue. Any fixes?
Rolling back to 2.10.4 worked for me
Same for me, the error is shown but it's working on 2.10.4
@bkilinc have you noticed the last 50+ issues? Many point to the same problem. One issue says that basically the last X versions are not working. The application is simply not working, and evidently the new commits are not fixing the issues. So yes, you can commit and commit; that does not mean it is a working or quality application. It just means that someone is writing some sort of code. It could be an update to the README.
Pulls... I can easily pull 10 images a day. A k8s cluster will pull hundreds if not thousands a day...
I strongly recommend looking for other solutions. I also worked with traefik and never experienced such horrible issues.
+1
@jc21 can you archive the application? it is not maintained and is not working anymore.
I get where you're coming from, but the latest release is just from 2 months ago and the latest commits date back only 4 weeks. I think this project is just a victim of its own success, which is not manageable by a single person.
@jc21 can you archive the application? it is not maintained and is not working anymore.
I get where you're coming from, but the latest release is just from 2 months ago and the latest commits date back only 4 weeks. I think this project is just a victim of its own success, which is not manageable by a single person.
Despite that, it quite literally does not work on a single machine that any possible target user would want to put it on. Moreover, this is still the case 3 whole months after this application-breaking issue was opened. Worse yet, despite the developers 'maintaining' the project, they haven't responded to any of these issues. A project without support for the outstanding issues that make it unusable is dead. A project that is unusable and has no plans to fix itself in the near future is dead.
I'm migrating from an earlier version, currently version 2.11.1, and I have the same problem as the owner: I can't delete files when creating new records. Maybe the data volume has some argument logic wrong, or there is a problem with a check somewhere.
I'm migrating from an earlier version, currently version 2.11.1, and I have the same problem as the owner: I can't delete files when creating new records.
Don't be angry. I encountered the same problem as you, downgraded to version 2.10.4, and it is back to normal. NPM is a very good product, and open source; it is normal for software to have some bugs. If there is a problem somewhere, we can find it together and fix it.
Well this is awkward. This isn't my first open source package and it won't be my last, but it always baffles me how the public love to criticise a project that is given to them for free and out of the goodness of their hearts. And before anyone talks about the donations, you can see from the donations page that they are few and very far between.
Many thanks to those coming to my defense though, I really appreciate that :)
@jucajuca No, I'm not going to archive this because you and a small percentage of users are having an issue. Yes, I do maintain it as much as I can, given that I too have a life to live. I've been overseas and without a computer for April.
@Freekers you are absolutely correct. This is only maintained by me. I rely on pull requests from the community for things I cannot test, mainly DNS providers. I had help for a while from someone I've never met, but they too have their own life to live. Sadly no-one has offered since.
@CorneliusCornbread No, I'm not going to respond to all of the issues. I receive a LOT of GitHub emails every day; I was not put on this earth to fix everyone's problems all on my own. As for the project being unusable, I cannot disagree enough. I deploy this project at 4 different homes and on multiple architectures. I eat my own dog food.
But hey this project and the limited developer effort you're getting isn't for everyone.
As for the initial deletion issue itself, can someone tell me if they are deploying the project with PUID/PGID set or running as root? Also, some steps to reproduce would be nice.
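For reference, "with PUID/PGID set" means environment variables along these lines in your compose file (the values are just examples):

```yaml
services:
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    environment:
      PUID: 1000   # example host user id
      PGID: 1000   # example host group id
```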