for-aws
Unable to mount EBS volume.
Expected behavior
I expect the volume to be mounted inside the container when it is defined either on the CLI or through a compose file.
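For reference, the compose-file variant I tried looks roughly like this (a sketch; the service name, image, and size are illustrative, not the exact file):

```yaml
version: "3.3"
services:
  app:
    image: alpine
    command: sleep 3600
    volumes:
      - test:/x
volumes:
  test:
    driver: "cloudstor:aws"
    driver_opts:
      size: "1"
```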
Actual behavior
A variety of errors occur when mounting the volume.
$ docker volume create -d cloudstor:aws -o size=1 test
$ docker volume ls -fname=test
DRIVER VOLUME NAME
cloudstor:aws test
$ docker run -ti --rm -v test:/x alpine /bin/sh
docker: Error response from daemon: VolumeDriver.Mount: error mounting volume: failed to open device to probe ext4: open /dev/xvdf: no such file or directory.
When attempting to start a Docker swarm service that refers to a volume, the volume is also created successfully but cannot be mounted in the resulting service.
Sometimes the error is different: it does not complain about the device name but about the filesystem (ext4).
Information
docker 18.06.1-ce, docker-compose 1.21.2, docker4x/cloudstor 18.06.1-ce-aws1
plugin installed as follows:
$ docker plugin install --alias cloudstor:aws --grant-all-permissions docker4x/cloudstor:18.06.1-ce-aws1 AWS_REGION=eu-west-1 CLOUD_PLATFORM=AWS AWS_STACK_ID=eu-west-1 EFS_SUPPORTED=0
$ docker plugin ls
ID NAME DESCRIPTION ENABLED
fc5f08324f3b cloudstor:aws cloud storage plugin for Docker true
IAM policy created as required (conforming to the CloudFormation template). I even went one step further and set the permissions to allow ec2:*.
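The over-broad policy I ended up testing with looks roughly like this (a sketch of the "allow ec2:*" statement, not the stack's original policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    }
  ]
}
```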
The problem seems to stem from the fact that AWS names devices differently on newer (NVMe-based) instance types: the volume is attached to the instance as '/dev/xvdf', whereas inside the OS the device appears as /dev/nvme2n1.
$ docker run -ti --rm -v test:/tmp alpine /bin/sh
docker: Error response from daemon: VolumeDriver.Mount: error mounting volume: failed to open device to probe ext4: open /dev/xvdf: no such file or directory.
$ ln -sf /dev/nvme2n1 /dev/xvdf
$ docker run -ti --rm -v test:/tmp alpine df -h /tmp
Filesystem Size Used Available Use% Mounted on
/dev/xvdf 975.9M 2.5M 906.2M 0% /tmp
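That symlink has to be recreated for every attached volume, so here is a sketch that automates it. It assumes nvme-cli is installed and that the EBS-assigned device name sits at bytes 3072-3103 of the NVMe identify-controller data (a commonly cited layout for EBS NVMe devices, not an official API); treat it as a workaround idea, not a tested fix:

```shell
#!/bin/sh
# Workaround sketch for NVMe instances: create the /dev/xvd* names that
# cloudstor expects, pointing at the real NVMe block devices.

# normalize_name: the stored name may or may not carry the /dev/ prefix
normalize_name() {
  case "$1" in
    /dev/*) printf '%s' "$1" ;;
    *)      printf '/dev/%s' "$1" ;;
  esac
}

for dev in /dev/nvme*n1; do
  [ -b "$dev" ] || continue   # skip when the glob matched nothing
  # read the EBS device name from the vendor-specific identify data
  # (assumed byte range; adjust if your nvme-cli output differs)
  raw=$(nvme id-ctrl --raw-binary "$dev" 2>/dev/null | cut -c3073-3104 | tr -d ' ')
  [ -n "$raw" ] || continue
  link=$(normalize_name "$raw")
  # only create the symlink when the expected name is missing
  [ -e "$link" ] || ln -s "$dev" "$link"
done
```

Running something like this from a boot script (or a udev rule) would restore the device names the plugin probes for.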
It's the same problem as #184.