docker-volume-sshfs
Error response from daemon: VolumeDriver.Mount: exit status 1%!(EXTRA []interface {}=[]).
I'm able to create the volume successfully, but when I try to utilize it, I get the following error:
create:
docker volume create -d vieux/sshfs -o [email protected]:/mnt/docker/vieux_sshfs/nexus3/nexus-data \
-o IdentityFile=/root/.ssh/id_rsa.pub \
-o transform_symlinks \
-o follow_symlinks \
-o allow_other \
-o reconnect \
-o StrictHostKeyChecking=no \
-o kernel_cache \
-o cache=yes \
-o auto_cache \
-o big_writes \
-o compression=no \
sshvolume_nexus3
sshvolume_nexus3
[root@docker-test ~]# docker run -d -p 8081:8081 --name nexus -v sshvolume_nexus3:/nexus-data sonatype/nexus3
docker: Error response from daemon: VolumeDriver.Mount: exit status 1%!(EXTRA []interface {}=[]).
See 'docker run --help'.
this is my first foray into vieux/sshfs, so I'm not entirely sure this is a bug.
Maybe it's caused by the remote path not existing.
Try creating the remote path first, then mount the volume.
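A minimal sketch of that suggestion. The user, host, and path below are placeholders for your own setup, and the commands are guarded behind APPLY=yes so nothing runs until you opt in:

```shell
# Placeholders -- substitute your own remote user, host, and path.
REMOTE="root@192.168.1.10"
REMOTE_PATH="/mnt/docker/nexus-data"
SSHCMD="$REMOTE:$REMOTE_PATH"

if [ "${APPLY:-no}" = yes ]; then   # set APPLY=yes to actually run
  # 1. Make sure the directory exists on the remote side first
  ssh "$REMOTE" "mkdir -p '$REMOTE_PATH'"
  # 2. Only then create the sshfs-backed volume
  docker volume create -d vieux/sshfs -o sshcmd="$SSHCMD" myvolume
fi
```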
I already double checked that both the remote and locks dirs exist.
On Jan 5, 2018 3:14 AM, "Athurg Feng" [email protected] wrote:
> Maybe it caused by the remote path is not exists. Try to create the remote path first, then mount the volume.
@swvajanyatek - did you get this figured out? I am having exactly the same error. I am also new to this plugin and docker.
@jfinlins - unfortunately, no. i moved forward with sshfs from inside the container.
the same issue.
Getting the same issue
seems like the plugin doesn't work... ((( got the same issue...
@swvajanyatek, try changing id_rsa.pub to id_rsa. When SSHing into a server, you use the private key; the contents of the public key go into the authorized_keys file.
You can also update your settings to automatically include your key by running docker plugin set vieux/sshfs sshkey.source=/home/<user>/.ssh/
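Put differently, the create command from the top of the thread would use the private half of the key pair. A hedged sketch (the host, path, and volume name are placeholders, and the docker call is guarded behind APPLY=yes):

```shell
# IdentityFile must point at the PRIVATE key; the .pub half belongs in
# the remote host's ~/.ssh/authorized_keys instead.
KEY="/root/.ssh/id_rsa"                      # note: no .pub suffix
SSHCMD="root@192.168.1.10:/mnt/docker/data"  # placeholder target

if [ "${APPLY:-no}" = yes ]; then            # set APPLY=yes to actually run
  docker volume create -d vieux/sshfs \
    -o sshcmd="$SSHCMD" \
    -o IdentityFile="$KEY" \
    sshvolume_nexus3
fi
```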
I just started playing around with sshfs today on a 3-node local cluster using docker-machine. Everything is working as advertised, although I haven't tried Swarm Mode yet.
Same issue here.
Using sshfs inside the container works with /dev/fuse mounted and the --privileged option.
docker run -it -v sshvolume:/tmp busybox pwd
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
d070b8ef96fc: Pull complete
Digest: sha256:2107a35b58593c58ec5f4e8f2c4a70d195321078aebfadfbfb223a2ff4a4ed21
Status: Downloaded newer image for busybox:latest
docker: Error response from daemon: VolumeDriver.Mount: exit status 1%!(EXTRA []interface {}=[]).
Same issue (
I have the same issue here: docker run -it -v sshvolume:/mnt/here busybox ls /mnt/here
running on latest Debian
Same here with Docker version 18.02.0-ce, build fc4de44
Same problem here with 17.12.1-ce.
Same issue for me with Docker version 17.12.1-ce, build 7390fc6
Same here (17.04.0-ce, build 4845c56). Installing the plugin with DEBUG=1 doesn't change anything. Paths are ok, "normal" sshfs is working
Any solutions or at least ideas how to debug this?
Found a solution:
- update 2018-05-02: The error message was better here; for my system it said "connection reset by peer", which on this specific system means the password wasn't found (or some other kind of login problem)
- I normally use an sshkey.
- The first time, I had forgotten to set the sshkey (as stated in the Readme here on GitHub)
- I thought I could set it afterwards with "docker plugin set", but it seems that didn't work.
- I deleted this plugin and installed it with "docker plugin install vieux/sshfs sshkey.source=/home/<user>/.ssh/", as stated on the front page
- That solved my problem; perhaps you are having something similar
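For reference, the remove-and-reinstall sequence described above looks roughly like this. One detail worth knowing: docker plugin set only works while the plugin is disabled, which may be why setting it afterwards appeared not to work. The docker calls are guarded behind APPLY=yes so the sketch is safe to paste:

```shell
SSH_DIR="$HOME/.ssh/"   # directory holding the key(s) the plugin should copy

if [ "${APPLY:-no}" = yes ]; then   # set APPLY=yes to actually run
  docker plugin disable vieux/sshfs
  docker plugin rm vieux/sshfs
  docker plugin install vieux/sshfs sshkey.source="$SSH_DIR"
fi
```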
For me it was a problem of understanding the needed paths correctly. I could clarify this thanks to #58, but only partially:
- sshkey.source, a plugin setting, is where keys are taken from, to be copied into the containers. This directory will contain the keys of the hosts/filesystems that you want to mount (it would be a good idea to set authorized_keys too under this directory, putting the public keys of the same hosts in it).
- docker volume create will work with -o IdentityFile=/root/.ssh/sshserver_rsa if sshkey.source contains sshserver_rsa. I cannot understand why (I don't see any /root/.ssh when I connect to the container that is mounting the sshfs volume correctly).
- This is also relevant if you want to connect to the filesystem of your hosting OS (I'm testing this for the first time, so that's a good target): you need to set up the server IP correctly (in the case of macOS, its IP works even if it isn't public, but something like 192.168.*.*; your case, it seems, might be different).
I think this should be clarified in the README.
I see:
docker run -it -v sshvolume:/testfolder busybox ls /testfolder
docker: Error response from daemon: error while mounting volume '/mnt/volumes/099665408449fffc8b87e51dd9f93d85': VolumeDriver.Mount: sshfs command execute failed: exit status 1 (read: Connection reset by peer
if I use a key from a non-root user. Docker runs the volume plugin as root, and I have a permission problem with the secret key.
$ ls -luha /home/dev/.ssh/id_docker_to_dev_service
-rw------- 1 dev dev 3,2K jun 13 12:04 /home/dev/.ssh/id_docker_to_dev_service
Solution (works for me)
Use SSH as the root user. Create a new key pair and use it:
docker plugin install vieux/sshfs DEBUG=1 sshkey.source=/root/.ssh/
sudo su
# ssh-keygen -t rsa -b 4096 -C "root@localmachine to dev@service"
Enter file in which to save the key (/root/.ssh/id_rsa): /root/.ssh/id_root_to_dev_service
# ssh-copy-id -i /root/.ssh/id_root_to_dev_service dev@remote-ip-here-for-access-within-password
exit
docker volume create -d vieux/sshfs --name sshvolume -o sshcmd=dev@remote-ip-here:/remote-folder-on-service -o IdentityFile=/root/.ssh/id_root_to_dev_service
docker run -it -v sshvolume:/testfolder busybox ls /testfolder
@bscheshirwork Hello, I also ran into this problem, but I have it under the root user. Is there any way to solve it? Or where is the problem?
@houxiyao try to check the ssh connection from root to the destination without the docker sshfs plugin (try connecting with plain ssh).
Also check the sshkey.source=/root/.ssh/ setting and the corresponding /root/.ssh/ folder.
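A quick way to do that check. The key path comes from the comment above; the remote address is a placeholder, and the actual ssh call is guarded behind APPLY=yes:

```shell
KEY="/root/.ssh/id_root_to_dev_service"   # key used by the plugin (which runs as root)
REMOTE="dev@192.168.1.10"                 # placeholder remote

if [ "${APPLY:-no}" = yes ]; then         # set APPLY=yes to actually run
  # BatchMode fails immediately instead of prompting; exit 0 means auth works
  sudo ssh -i "$KEY" -o BatchMode=yes "$REMOTE" true && echo "ssh from root OK"
fi
```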
Seems there is a bug in the vieux/sshfs volume driver plugin. Tried the steps below as part of troubleshooting:
- Enabled ssh authentication between the two nodes (manager and worker), both running the same docker version.
- Used the same user id for ssh (docker).
- Created a directory (/mnt/data) on the worker node with full permissions
- Created a volume with the sshfs volume driver using the above-mentioned path
$ docker volume ls (manager node)
DRIVER               VOLUME NAME
vieux/sshfs:latest   ssh-vol
$ docker volume ls (worker node)
DRIVER               VOLUME NAME
local                ssh-vol
- When I try to run a container using the created volume (e.g. ssh-vol), it throws an error on the manager node: Error response from daemon: error while mounting volume '/mnt/sda1/var/lib/docker/plugins/0834c105bb86b3d1f26d134cbfed823af1032fe77844a4ba6c8b294dd483a2c8/rootfs': VolumeDriver.Mount: sshfs command execute failed: exit status 1 (read: Connection reset by peer
- When I create the service with 2 replicas, both replicas run fine on the worker node, but 1 fails to start on the manager node, since the manager's availability is "active"
Is there any way to work around or solve this? Please suggest.
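One thing worth checking (an assumption on my part, not a confirmed fix): in swarm mode the plugin has to be installed and enabled on every node that may schedule a task using the volume, and each node needs its own working SSH auth to the target host. Something like the following, run on each node (the docker call is guarded behind APPLY=yes, and the node names are placeholders):

```shell
# Run this on EACH node (manager and worker) to see plugin state:
if [ "${APPLY:-no}" = yes ]; then   # set APPLY=yes to actually run
  docker plugin ls --format '{{.Name}} enabled={{.Enabled}}'
fi

# Reminder of what to verify per node:
count=0
for node in manager worker; do
  echo "verify plugin install + ssh auth on: $node"
  count=$((count+1))
done
```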
I've added single quotes and it worked:
docker volume create -d vieux/sshfs \
-o sshcmd='[email protected]:/pictures' \
-o password='secret' \
pictures
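The quotes matter because the shell would otherwise expand characters like $ or split on spaces before docker ever sees the option value. A quick illustration:

```shell
# Single quotes keep the value literal; without them the shell would try
# to expand "$ecret" as a (probably empty) variable.
PASS='s$ecret word'
echo "$PASS"   # -> s$ecret word
```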
Found this, which could potentially be helpful:
Look at : https://github.com/vieux/docker-volume-sshfs/issues/19#issuecomment-748609520