
vSphere Integrated Containers does not support mounting directories as a data volume

Open shahharsh opened this issue 7 years ago • 14 comments

At least as a start, mounting directories from NFS should be made available.

shahharsh avatar Mar 10 '18 00:03 shahharsh

@matthewavery Would you take a look at this please, thanks!

sgairo avatar Mar 14 '18 14:03 sgairo

@shahharsh There is support for NFS volume stores, which allow concurrent access to volumes; see the volume store docs.

Would you please provide more detailed use cases or CLI/API examples of what you expect to be able to do that is not currently available?

hickeng avatar Mar 16 '18 21:03 hickeng

Okay, I'm trying to move my existing Artifactory Docker service to a VIC-based environment using a VCH. My data is located on NFS storage, and the same data will be accessed by an Nginx container too. How do I move my service's existing data to the NFS datastore? I tried copying the content to the NFS datastore and mounting the copied NFS datastore volume on my VCH Artifactory container, but my service failed to find any data from that NFS store. Instead, it created a new volume at that level.

Second use case: it would also be great if we could mount VCH directories into containers. For example, to run systemd in a container you need /sys/fs/cgroup mounted read-only. I know systemd is not intended for Docker, but I have a service which needs systemd and I have been running it on a regular Docker host; I need the same option for VCH.

shahharsh avatar Mar 19 '18 17:03 shahharsh

@shahharsh

For the migration I'd suggest:

  1. create VCH with NFS volume store specified, e.g. --volume-store=nfs://server/share/point:nfs. This will create a basic directory structure under that NFS share for metadata and for volume data.
  2. create a volume on that volume store, e.g. docker volume create --name artifactory-data --opt VolumeStore=nfs. This will create metadata and data directories for the volume under the share from (1).
  3. copy the artifactory data to the data directory for the artifactory-data volume.
  4. create a container that uses that volume, e.g. docker create -v artifactory-data:/mnt/data artifactory-image
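Put together, the steps above might look like the following sketch. The VCH name, NFS server/share, data paths, and image name are placeholders for this example; in particular, the exact directory layout VIC creates under the share for a volume's data should be verified by inspecting the share before copying:

```shell
# 1. Create the VCH with an NFS volume store (remaining target/compute/
#    network options omitted; binary name varies by platform)
vic-machine-linux create --name my-vch \
    --volume-store 'nfs://server/share/point:nfs' \
    ...

# 2. Create a volume on that volume store (docker client pointed at the VCH)
docker volume create --name artifactory-data --opt VolumeStore=nfs

# 3. Copy the existing data into the volume's data directory on the share.
#    The path below is illustrative; check what VIC actually created.
cp -a /old/artifactory/data/. /mnt/share/point/volumes/artifactory-data/

# 4. Create the container using the volume
docker create -v artifactory-data:/mnt/data artifactory-image
```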

For the second, every containerVM is running its own Linux instance, and the cgroup hierarchy is already mounted inside the containerVMs:

# docker run -it ubuntu
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
22dc81ace0ea: Pull complete
1a8b3c87dba3: Pull complete
91390a1c435a: Pull complete
07844b14977e: Pull complete
b78396653dae: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:52286464db54577a128fa1b1aa3c115bd86721b490ff4cbd0cd14d190b66c570
Status: Downloaded newer image for library/ubuntu:latest
root@ee07251c985b:/#
root@ee07251c985b:/# df
Filesystem     1K-blocks   Used Available Use% Mounted on
devtmpfs          992788      0    992788   0% /dev
tmpfs            1026484      0   1026484   0% /dev/shm
tmpfs            1026484      0   1026484   0% /sys/fs/cgroup
/dev/sda         7743120 114668   7212068   2% /
tmpfs              81920  69948     11972  86% /.tether
root@ee07251c985b:/#

You can run systemd directly, but as you can see it's not entirely happy in this image; I've not dug into why, but at a minimum dbus isn't running. Unfortunately, running this will cause systemd to blow away the network configuration, so you end up with empty /etc/hosts and /etc/resolv.conf; if you try it, I suggest dropping the units that manage that part of things. It may be worth us looking into what's needed to allow /lib/systemd/systemd --user to run, but I have not done so at this time:

root@ee07251c985b:/# /lib/systemd/systemd --system
systemd 229 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN)
Detected virtualization docker.
Detected architecture x86-64.
Set hostname to <ee07251c985b>.
Initializing machine ID from random generator.
kmod-static-nodes.service: Main process exited, code=exited, status=203/EXEC
kmod-static-nodes.service: Unit entered failed state.
kmod-static-nodes.service: Failed with result 'exit-code'.
systemd-journal-flush.service: Main process exited, code=exited, status=1/FAILURE
systemd-journal-flush.service: Unit entered failed state.
systemd-journal-flush.service: Failed with result 'exit-code'.
systemd-update-utmp.service: Main process exited, code=exited, status=1/FAILURE
systemd-update-utmp-runlevel.service: Job systemd-update-utmp-runlevel.service/start failed with result 'dependency'.
systemd-update-utmp.service: Unit entered failed state.
systemd-update-utmp.service: Failed with result 'exit-code'.
getty-static.service: Main process exited, code=exited, status=1/FAILURE
getty-static.service: Unit entered failed state.
getty-static.service: Failed with result 'exit-code'.
Startup finished in 993ms.
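If you do want to experiment with this, one way to drop the network-managing units before launching systemd is to mask them. The unit names below are the usual suspects on a systemd-based image but are assumptions; verify what is present with `systemctl list-unit-files` in your own image:

```shell
# Mask the units that rewrite the network configuration. Masking just
# symlinks the unit files to /dev/null, so it works before systemd is running.
systemctl mask systemd-networkd.service systemd-networkd.socket \
    systemd-resolved.service systemd-hostnamed.service

# Then launch systemd as before
/lib/systemd/systemd --system
```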

hickeng avatar Mar 19 '18 22:03 hickeng

I am trying to run SUSE SLES 12 with init.d available inside a Docker container. The image I have installs systemd packages and mounts /sys/fs/cgroup from the host; on a normal Docker host this works perfectly fine. But I am not sure how that mounting translates to VIC.

shahharsh avatar May 22 '18 19:05 shahharsh

For me, option 1 from https://github.com/vmware/vic/issues/3622#issuecomment-273617952 is really what's missing. Is this on the roadmap at all (or please tell me I'm missing something and it's possible)? I don't want / can't have my data living inside a VIC storage mount; I need to mount an existing share into the container so it is still accessible outside of the VIC environment.

jmccoy555 avatar Oct 23 '18 21:10 jmccoy555

I have a problem running Traefik inside a VCH; I need to get docker.sock mounted into the container :-/

dockervch2 create -p 8080:8080 -p 80:80 -v traefik:/etc/traefik -v /var/run/docker.sock:/var/run/docker.sock --name traefik traefik

I also get the mentioned error. Is there a workaround?

meeximum avatar Mar 20 '19 15:03 meeximum

If Traefik communicates with the Docker daemon as https://medium.com/lucjuggery/about-var-run-docker-sock-3bfd276e12fd points out, direct use of the VCH is not feasible, because the VCH is a Docker daemon listening on TCP, not on a Unix socket. In this case, you can use VIC's DCH to run these kinds of containers. One similar example can be found at https://github.com/arslanabbasi/vic-product/blob/9bf9492814200bf1cbeb1e9a8229d86b665a0aec/tutorials/zalenium/README.md
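For example, with Traefik 1.x you can point the Docker provider at a TCP endpoint instead of mounting the socket. The endpoint address below is a placeholder for your DCH or VCH, and TLS options may be needed depending on how the daemon is configured:

```shell
# No -v /var/run/docker.sock needed; the Docker provider talks TCP instead
# (--docker / --docker.endpoint are Traefik 1.x flags)
docker create -p 8080:8080 -p 80:80 -v traefik:/etc/traefik \
    --name traefik traefik \
    --docker --docker.endpoint=tcp://dch001:2376
```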

wjun avatar Mar 21 '19 06:03 wjun

@wjun Thanks for your hint, but when I follow the (Zalenium) tutorial I'm using the Docker daemon deployed inside a container that runs on a DCH, so the mapped docker.sock is the one from the Photon container, not from the DCH. But Traefik needs access to the Docker socket where the other containers were deployed (the DCH). For example, my DCH (call it DCH001) is running on the appliance (DCH000) and is accessible via tcp://dch001:2376, so I can run docker remotely with -H tcp://dch001:2376. Now I deploy a container with Traefik and, in the Traefik configuration, set the endpoint to tcp://176.16.0.1:2376 (because 176.16.0.1 is the gateway/host), but at startup it logs that there's no route found.

meeximum avatar Mar 26 '19 06:03 meeximum

@meeximum There are a few types of networks in a VCH and its container VMs: https://vmware.github.io/vic-product/assets/files/html/1.5/vic_vsphere_admin/vch_networking.html In your case, if you want to access the VCH's port 2376 from each container VM, you need to create a container network that is layer-3 routable to the VCH's public network. Please note that a DCH is a container VM with a Docker daemon installed; it is created and managed by the VCH VM, so a DCH is different from a VCH.
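A sketch of that setup, with placeholder port-group and network names (see the networking docs linked above for the full `--container-network` syntax and options):

```shell
# When creating the VCH, map a vSphere port group that is L3-routable to the
# VCH public network into the container network namespace, exposed to
# containers under a friendly name ("routable" here)
vic-machine-linux create ... \
    --container-network 'routable-portgroup:routable'

# Containers attached to that network get addresses on the port group and
# can reach the VCH's Docker API port directly
docker network ls
docker create --net routable --name traefik traefik
```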

wjun avatar Mar 27 '19 03:03 wjun

@wjun thx for your help, this works perfect!!!

meeximum avatar Mar 28 '19 08:03 meeximum

@meeximum It looks like this is a good example of how we can migrate the dependency from /var/run/docker.sock to the VCH's Docker TCP port!
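In practice that migration is often just an environment change on the client side. The address below is a placeholder, and a VCH normally listens on 2376 with TLS, so the TLS flags and cert paths depend on how the VCH was deployed:

```shell
# Point the Docker client at the VCH's TCP endpoint instead of the local socket
export DOCKER_HOST=tcp://vch.example.com:2376
docker --tls info
```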

wjun avatar Mar 28 '19 08:03 wjun


Do you happen to have info on how you set up Traefik on VIC? Reading through the above, I am not able to determine how it was set up.

p928s1984 avatar Aug 24 '20 19:08 p928s1984

I'm still learning about containers, but this appears to be the issue I'm hitting, and I'm pulling my hair out over it. Version 1.5.6 VIC/VCH. I have an NFS share on my QNAP device: /Media. I can add /Media to my ESXi host and to my VIC/VCH without issue. When I add it to my VCH, however, it creates /Media/volumes. With the lack of symlink support, how can I share /Media with my containers?

I read that the NFS root could not be shared with VIC, so I moved the data to /Media/Media and made /Media/Media the volume directory at mount time, but the GUI just creates its own /Media/Media/volumes directory no matter what I try. I do not want to move my media into a "volumes" folder just to make the product happy. Why can't I share existing data from a NAS with a container in this fashion?

Zanathoz avatar Mar 29 '21 16:03 Zanathoz