
`docker compose` with remote context resolves incorrect `.` or `${PWD}`

Open mangkoran opened this issue 3 years ago • 2 comments

Description

I want to deploy a Compose project to a remote server via SSH. My docker-compose.yml mounts some volumes. After running docker --context <some-remote-context> compose up, I noticed that the container was not running at all. I initially hit this in my own project, which has a medium-sized compose file, but I can reproduce it with the minimal compose file below. Steps to reproduce the issue:

  1. Create a docker context entry with a remote host SSH endpoint.
  2. Create a dummy file on the remote host to mount into the test container:

echo "some-file-content" >> some-file # or some-file.tmp to make it obvious it is not a directory

  3. Create a docker-compose file with the following content:

version: "3"

services:
  some-container:
    image: alpine:3
    command: ["tail", "-F", "stop"]
    volumes:
      - ./some-file:/some-file:ro

  4. Bring the stack up:

docker --context <some-remote-context> compose up

Describe the results you received: The container fails to run because the volume entry resolves to a directory on the local host, not on the remote host.

For example: say our current local PWD is /home/some-user, and our SSH remote host user is some-remote-user, whose PWD is /home/some-remote-user. If we run docker --context ... compose up, the entry ./some-file:/some-file:ro resolves to /home/some-user/some-file, not /home/some-remote-user/some-file. Although the path is resolved on the local host, the compose command is actually sent to the remote daemon, where that path is invalid: since /home/some-user/some-file does not exist on the remote host, the daemon creates it there as a new empty directory, while the file we actually want to mount is /home/some-remote-user/some-file.
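The mechanics above can be sketched without a remote host at all. This is only an illustration of my understanding of the behavior, using the example paths from this report; Compose appears to expand a relative volume source against the local working directory before the request ever reaches the daemon selected by --context:

```shell
# Illustrative paths only (no docker or remote host needed).
LOCAL_PWD=/home/some-user            # cwd on the machine running `docker compose`
REMOTE_PWD=/home/some-remote-user    # where the file actually lives remotely

# What the client appears to send to the daemon for "./some-file:/some-file:ro":
resolved="${LOCAL_PWD}/some-file"
echo "bind source sent to daemon: ${resolved}"

# The remote daemon, seeing a nonexistent path, creates it as an empty
# directory instead of mounting "${REMOTE_PWD}/some-file".
```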

Describe the results you expected: a volume source of . or ${PWD} should resolve to the corresponding directory on the remote host.
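In the meantime, a possible workaround (an assumption on my part, not an official fix) is to avoid relative sources entirely and hard-code the absolute path as it exists on the remote host, so no client-side resolution takes place:

```yaml
version: "3"

services:
  some-container:
    image: alpine:3
    command: ["tail", "-F", "stop"]
    volumes:
      # Absolute path on the *remote* host; sidesteps resolution of "."
      # against the local working directory.
      - /home/some-remote-user/some-file:/some-file:ro
```

This of course couples the compose file to one host's directory layout, so it is only workable when the remote paths are known in advance.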

Additional information you deem important (e.g. issue happens only occasionally): I am actually not sure whether this is the intended behavior. It also occurred to me that, with a remote endpoint, the volume mount could conceivably be meant to mount a local directory/file into the remote host.

Output of docker compose version: Both local and remote host:

❯ docker compose version
Docker Compose version 2.2.2

Output of docker info: Local host:

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc., v0.7.1-docker)
WARNING: Plugin "/usr/libexec/docker/cli-plugins/docker-app" is not valid: failed to fetch metadata: fork/exec /usr/libexec/docker/cli-plugins/docker-app: no such file or directory
WARNING: Plugin "/usr/local/lib/docker/cli-plugins/docker-compose" is not valid: failed to fetch metadata: fork/exec /usr/local/lib/docker/cli-plugins/docker-compose: no such file or directory
WARNING: Plugin "/usr/local/lib/docker/cli-plugins/docker-scan" is not valid: failed to fetch metadata: fork/exec /usr/local/lib/docker/cli-plugins/docker-scan: no such file or directory

Server:
 Containers: 1
  Running: 0
  Paused: 0
  Stopped: 1
 Images: 91
 Server Version: 20.10.12
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 1e5ef943eb76627a6d3b6de8cd1ef6537f393a71.m
 runc version: v1.0.3-0-gf46b6ba2
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.15.6-xanmod2-microsoft-standard-WSL2
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 3.825GiB
 Name: 8atagor
 ID: BIRV:YCTW:DRJL:VJLN:RL4J:NHNY:BUTA:3B7P:NAEJ:SDHW:23OS:X5DB
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: mangkoran
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support

Remote host:

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.7.1-docker)
  compose: Docker Compose (Docker Inc., v2.2.2)
  scan: Docker Scan (Docker Inc., v0.12.0)

Server:
 Containers: 4
  Running: 3
  Paused: 0
  Stopped: 1
 Images: 4
 Server Version: 20.10.12
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc version: v1.0.2-0-g52b36a2
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.13.0-1007-gcp
 Operating System: Ubuntu 21.10
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 968.1MiB
 Name: ppiedufest-1
 ID: I3W4:642F:LR63:D57S:H5HM:Z6IQ:3BGC:AMIR:4OWD:OUTN:GTIC:6FPC
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Additional environment details:

mangkoran avatar Jan 03 '22 11:01 mangkoran

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Jul 10 '22 12:07 stale[bot]

This issue has been automatically closed because it had not recent activity during the stale period.

stale[bot] avatar Jul 31 '22 23:07 stale[bot]
