
Auto-assignment of an already-used port gives no error in the CLI

Open · Carael opened this issue 3 years ago · 1 comment

Description: I start a Docker container without passing a host port. I expect Docker to automatically assign a free port and run the container. That is the case most of the time. After some time, though, the automatically assigned port (which is incremented by 1 with every run) happens to be unavailable.

The current behavior is that when I run, e.g., docker run -d -p 27017 mongo, Docker starts the container without any error or warning:

73d28c33cfb7 mongo:latest "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:49169->27017/tcp squa_mongodb_637498526118689048_fd3c7e4adda39eec651287ad30

When I then try to connect to the container, I get a connection error. The error message can be found in the Docker log file:

[14:52:37.339][GoBackendProcess ][Error ] msg="unable to expose port TCP 0.0.0.0:49169 -> 127.0.0.1:49169: listen tcp 0.0.0.0:49169: bind: An attempt was made to access a socket in a way forbidden by its access permissions."

When I instead run the command with an explicit host port, docker run -d -p 49169:27017 mongo, it already fails during creation with the same message in the log.
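One way to make this silent failure visible is to check whether the auto-assigned host port is actually reachable right after the container starts. A sketch from a PowerShell prompt, using the container ID and port from the example above:

  # Show which host port Docker mapped to the container's 27017
  docker port 73d28c33cfb7 27017
  # -> 0.0.0.0:49169

  # Test whether anything is actually listening on that host port
  Test-NetConnection -ComputerName localhost -Port 49169
  # In the failing case TcpTestSucceeded is False, even though docker ps
  # reports the container and its port mapping as up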

Steps to reproduce the issue:

  1. Make sure the next port that will be automatically assigned by Docker is already taken/unavailable (one way to do this on Windows is sketched right after this list)
  2. Run docker run -d -p 27017 IMAGE_NAME
  3. Try to connect to the exposed port
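For step 1, one way to make a specific port unavailable on Windows without any process actually listening on it is to add it to the TCP excluded port ranges. This is only a sketch and needs an elevated PowerShell prompt; port 49169 is just the example from the report above, and Docker's auto-assignment counter has to reach that port before the silent failure shows up:

  # Reserve the port so that binding to it is forbidden, mimicking the bind error quoted above
  netsh int ipv4 add excludedportrange protocol=tcp startport=49169 numberofports=1

  # Run the container without an explicit host port; once the auto-assigned
  # counter reaches 49169, the container still starts with no error or warning
  docker run -d -p 27017 mongo

  # Remove the reservation again when done
  netsh int ipv4 delete excludedportrange protocol=tcp startport=49169 numberofports=1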

I'm aware that determining a free port may not be trivial, but I think the current behavior is incorrect. The correct behavior would be to either:

  • fail at creation time and inform me that the port is taken, or
  • try the next port until a free one is found, giving up when a timeout is reached.

I'm also wondering how the auto port range is set and when it resets. In my case the workaround was to completely uninstall Docker and (this is important) delete everything Docker-related from ProgramData/Program Files/AppData. Only then was the counter reset.

Just to be clear: we use this behavior in Squadron to spin up containers with resources for integration tests. It happened on a developer PC that was running a large number of unit tests, so hundreds if not thousands of containers were spun up before this occurred. But either way, it's an issue.

Output of docker version:

Client: Docker Engine - Community
 Cloud integration: 1.0.7
 Version:           20.10.2
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        2291f61
 Built:             Mon Dec 28 16:14:16 2020
 OS/Arch:           windows/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.2
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8891c58
  Built:            Mon Dec 28 16:15:28 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Output of docker info:

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
  scan: Docker Scan (Docker Inc., v0.5.0)

Server:
 Containers: 7
  Running: 3
  Paused: 0
  Stopped: 4
 Images: 16
 Server Version: 20.10.2
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc version: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.19.104-microsoft-standard
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 24.84GiB
 Name: docker-desktop
 ID: 27ZQ:PHKD:H2HX:M6O2:HVZD:6BFD:XUUQ:JXZF:BZ3S:UUGL:YPNS:NJ3T
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No blkio weight support
WARNING: No blkio weight_device support
WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

Additional environment details (AWS, VirtualBox, physical, etc.): OS Name: Microsoft Windows 10 Pro, Version: 10.0.19042, Build: 19042

Carael · Feb 25 '21 15:02

@Carael I ran into this problem. I originally thought it had something to do with our dev tooling (Visual Studio 2022). The details of my story can be found here: https://github.com/microsoft/DockerTools/issues/326

A highlight from my issue is that VS 2022 uses the -P flag, which makes matters worse because you don't know something went wrong until a browser opens. Specifying a port explicitly early on surfaces the error bind: An attempt was made to access a socket in a way forbidden by its access permissions. mentioned previously, and causes the build to fail before the browser auto-launches.
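For illustration, the difference between the two forms (the image and port number are just the ones from the original report):

  # Publish all exposed ports to auto-assigned host ports (what VS 2022 does);
  # if the chosen host port is in a forbidden range, the run still "succeeds" silently
  docker run -d -P mongo

  # Publish an explicit host port; if that port is forbidden, creation fails
  # immediately with the bind error quoted earlier
  docker run -d -p 49169:27017 mongo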

Another highlight: I noticed there is a port range, roughly 49152 - 49869 (not a 100% solid range), in which ports report back with this error. When I ran netstat -aon, I did not see any of the ports in that range that I was trying to use held by any process. So it makes me think that "in use" might not be the same as "forbidden by its access permissions"; purely conjecture on my part.
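On Windows, TCP ports can also be administratively reserved (Hyper-V/WinNAT does this, for example), so that binding to them is forbidden even though no process is listening, which would explain why netstat shows nothing. That cause is an assumption on my part, but the reservations themselves can be listed:

  # List TCP port ranges that are reserved and therefore cannot be bound,
  # even though netstat -aon shows no process using them
  netsh interface ipv4 show excludedportrange protocol=tcp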

reecebradley · Aug 09 '22 16:08