Original IP is not passed to containers ...
The issue affects macOS, Windows, and Linux. We'd like to see it on the roadmap somewhere.
https://github.com/docker/for-mac/issues/180
See also moby/moby#15086
@aguedeney thanks for raising this! I see that @stephen-turner has linked the Moby issue. We will track it here as well :)
Is everyone here using HTTP? We have had some discussion about explicit HTTP support, for Compose at least, in which case having layer 7 routers pass X-Forwarded-For is an option, versus TCP-level changes.
Nginx or Traefik proxies for Docker are loyal and reliable companions of any Dockerized HTTP server. You can find many ready-to-use examples of Compose files using Google. The Docker images are on Docker Hub.
I achieved a working solution with docker-compose by using the 'X-Real-IP' header provided by the nginx-proxy container's default configuration. Obviously this is a workaround; just thought I'd put it out there.
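For anyone searching for the same workaround, here is a minimal sketch of the relevant nginx directives (the upstream name app and its port are assumptions, not taken from this thread):

```nginx
server {
    listen 80;

    location / {
        # forward requests to the application container (hypothetical name/port)
        proxy_pass http://app:8080;

        # pass the client address nginx saw on to the upstream container
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
```

Of course, this only helps if the proxy itself sees the real client address (for example because it runs on the host network or terminates the published port directly); the application then has to read the header instead of the socket address.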
It is still binding to the Docker socket, which is not the best solution. Can anyone comment on how providers like Traefik get around the security issues with mounting the socket (if they do)?
Every Kubernetes node runs kube-proxy, has the built-in SELinux firewall, and is a member of the Kubernetes cluster VPN. Traffic between Kubernetes nodes is filtered by Istio. Nginx and Traefik Ingresses have no direct access to the Docker socket. Kubernetes security is tested and proven.
Docker containers and pods never expose Unix domain sockets. All Docker and Podman networking is TCP/IP networking; there is no way to EXPOSE a Unix socket in a Dockerfile. Only the Docker engine may expose an API socket, and user requests never use the Docker API; it is used only by orchestration engines. When Docker / Podman run on SELinux nodes, the API sockets are protected very well by the native SELinux security.

Docker can't extract and use HTTP headers from Docker API calls because the default Docker API host is a Unix socket. The Docker API host is configured in daemon.json. It uses at least TLS 1.1 via port 2376; port 2375 can be used without TLS, but with warnings in the docker info output. On the client side, security-related settings are configured in the Docker context and Kubernetes context. It is possible to configure other hosts as API sockets in daemon.json, but only Unix or TCP sockets, not HTTP.
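For reference, a minimal daemon.json sketch of the API host configuration described above (the bind address and certificate paths are illustrative, not prescriptive):

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/ca.pem",
  "tlscert": "/etc/docker/server-cert.pem",
  "tlskey": "/etc/docker/server-key.pem"
}
```

On systemd-based installs, a -H flag in the service unit can conflict with hosts in daemon.json, so only one of the two should define the API endpoints.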
In host mode it is passed (see the compose sketch further down). But host mode is only available on Linux.
A special situation is when the Docker host is IPv4 and IPv6 (which is quite normal today) and the containers are IPv4 only.
Then clients connecting over IPv4 reach the host over IPv4 (host mode / Linux only), and the correct client source address is seen in the container.
But if the client connects over IPv6, it is routed through the docker_gwbridge over IPv4 and the container can only see the IP address of the bridge. MS would say this is a feature :smile:
So it would be nice if we could enable IPv6 in Swarm containers, with support in Compose >= 3.
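For the host-mode route mentioned above, a minimal docker-compose sketch (service name and image are placeholders; this applies to plain docker-compose on Linux, not to swarm stack deploys):

```yaml
services:
  web:
    image: nginx:alpine
    # share the host's network namespace: the client source IP is preserved,
    # but published 'ports:' mappings are ignored in this mode
    network_mode: host
```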
It seems something IPv6/libnetwork related is coming in the next release, v20.10... But yes, there should be some information for the many open IPv6/source-IP related and duplicate-closed issues on moby/moby, with their hacks and special-case workarounds. It could fill a book. At least more complete documentation at https://docs.docker.com/config/daemon/ipv6/ would help.
MS would say this is a feature :+1:
M$ would be happy and state in their advertising: "We finally introduced that 'privacy' thing". Since the real IP isn't shown, it's a good side effect (who needs the real IP... no real compliance problems...)
Real talk: it would be awesome if it were possible to work around this by using host networking and wiring it to other containers, but as far as I know that wiring is only possible in bridge mode, and the Docker docs I've really tried to read (and understand) seem to state that too. TL;DR: it sucks!
We need a fix... and a date, ASAP. When can we expect the new Docker (Engine?) version? And when does it include the "privacy on" switch, or better, the "turn off non-compliance mode"? ;-)
Some use cases that I constantly have a problem with:
- Filtering network traffic based on the IP (corporate (internal) network or public (internet))
- Restricting public access to different apps behind a reverse proxy
- Identifying (micro)services
- We use a workflow engine that gets triggered by other microservices. It would be great to identify the callers by IP to distinguish which microservice triggered the action - very useful for debugging!
With rootless Docker, the source IP is also not passed to the container (e.g. when trying to log access to a reverse proxy). This is the default behaviour; however, it would be great if this were better documented - it took a lot of searching to figure out what was happening.
Hope this can be implemented soon.
This is badly affecting our sticky services deployed with Docker. Now we need to add extra configuration and placement constraints for those services, since host network mode is not suitable for production deployment.
We just came across this issue as well, and it's a serious issue for us as we need the client IP for security reasons.
This is a real issue: we need the client IP for security reasons, but in bridge mode it is not possible. Hoping for a solution in bridge mode soon.
To solve the problem for now, set up a reverse proxy outside the Swarm that keeps track of the IP and can forward it as a header to the service if necessary.
We have a gigantic application for document capture and ECM. We need this functionality to validate SNMP data from multifunction devices; for security reasons, the licenses are based on information in the MIB, such as serial number and MAC address. I came across this problem, and the worst part is that it is not documented - I spent days trying to work around it somehow. Unfortunately, researching more deeply, this problem dates back more than 4 years and to this day has not been fixed. I believe they do not care about this.
Pardon my lack of networking knowledge (a weakness I am actively working to fix), but why does the X-Forwarded-For header get changed between HAProxy and my container? option forwardfor should set it correctly, since HAProxy's logs show the correct IP address.
I thought the path to my container was taken care of by iptables routing/forwarding the data to the correct location. And since headers are part of HTTP, wouldn't iptables just ignore them?
My specific case, in case I'm missing something obvious: Ubuntu 20.04 running HAProxy and Docker normally (no Kubernetes, or HAProxy in a container). HAProxy sends traffic to a docker-compose based php:7-apache app. In the app logs I just see the Docker network's gateway and the container's Docker IP, while in /var/log/haproxy.log I see my client IP just fine.
(Edit: Just wondered to myself if the forums would be a better place for this post, but my last few posts there didn't get any replies, and my question is directly related to this issue.)
@jerrac Yes, it would be - rather than asking in an existing issue targeting a different problem.
For your problem... Apache doesn't log the X-Forwarded-For address by default. I think you have to modify your Apache config. But let us stop here.
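For completeness, a minimal sketch of an Apache log format that records the forwarded header (log path and format name are assumptions; adjust to your vhost):

```apache
# log the X-Forwarded-For header alongside the usual combined fields
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{X-Forwarded-For}i\"" combined_xff
CustomLog ${APACHE_LOG_DIR}/access.log combined_xff
```

Alternatively, mod_remoteip can replace the client address Apache reports with the value of a trusted forwarded header.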
I have the problem "original IP is not passed to containers" with Docker Desktop on Windows. The problem made my SMTP server function as an open relay, since it thought that all connections were "local" (due to the Mailu default config). On Linux (CentOS) the bridge network configuration still shows the correct source address (no need to use host mode there, which is good). I will probably run Docker on Linux under system-v instead if this is not solved anytime soon...
I just had this issue with NetFlow UDP packets. It was working properly for a long while, then I moved the container host to a new subnet and it continued to work until I moved the docker-compose instructions into a consolidated file with other containers. For some reason, at that point, one of the two source IPs that send to this service (the primary router) showed up as the Docker gateway IP. The other (the secondary router) was still correct.
I restarted the NetFlow service on the primary router and the container started seeing the correct source IP again, but the secondary router's source IP now became the Docker gateway IP inside the container. I restarted the NetFlow service on the secondary router and both now show up correctly in the container again.
@DarioFra
I have the problem "original IP is not passed to containers" with Docker Desktop on Windows. The problem made my SMTP server function as an open relay, since it thought that all connections were "local" (due to the Mailu default config).
It helps to include the host IP when binding the SMTP port from the container. For example, I have 2 entries under the ports: section:

    ports:
      - '127.0.0.1:25:25'
      - '<external-IP>:25:25'

When there is a connection to the port on the external IP, the source address is preserved properly and authentication is required. When there is local delivery, it is sent through 127.0.0.1:25 and the source IP is masqueraded to Docker's bridge interface IP (which can be distinguished, and mail can be accepted without authentication if needed).
While waiting for an official solution to this issue, is there any acceptable workaround? I've read in many places that having an nginx reverse proxy is one possible solution. Any recommended course of action from Docker?
Disable the userland proxy.
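For context, that is a daemon-level option rather than a per-container one; a minimal sketch, assuming a Linux host, /etc/docker/daemon.json, and a daemon restart afterwards:

```json
{
  "userland-proxy": false
}
```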
To underline the "acceptable"
Not to mention, it doesn't seem to work (at least on Windows Docker Desktop) with Nginx.
It also doesn't work on non-Linux OSes.
Docker is buggy on non-*nix OSes anyway. Also, there's no reason to use Windows :P
My issue was specifically with Docker for Windows; I should have mentioned that in my original comment. So if it doesn't work there, it's not much of a solution in my case (and no, I can't just choose Linux - I don't control the deployment machine).
Oh my freaking god... I'm not often one to freak out, but c'mon, what is the problem here? I originally came here from Pi-hole, searching for a serious solution, and there is not a single fart coming from the devs? Gawd... Moving to good ol' VMs then, if something this basic doesn't work properly.