
No way to use a VM's attached VHD

Open gvilarino opened this issue 8 years ago • 2 comments

I use D2_v2 VMs for my swarm. They offer a 100GB VHD, which I consider enough for my use case for the time being. Still, this disk isn't being used at all by Docker; instead it uses the system mount (30GB), which ends up running out of space very quickly.

Expected behavior

Docker to use the full storage extent of the VM I'm paying for (i.e., store images on the attached VHD)

Actual behavior

Docker uses the system mount (30GB) and it runs out of space pretty quickly, making it impossible for me to run services because newer images never get downloaded. Also, even if I buy VMs with larger disks, it makes no difference, since the system mount is always 30GB.

Information

swarm-manager000000:~$ docker-diagnose 
OK hostname=swarm-manager000000 session=1496436851-iVO4EGwLG7PNWob3jI5qOnPF16rW0Les
OK hostname=swarm-manager000001 session=1496436851-iVO4EGwLG7PNWob3jI5qOnPF16rW0Les
OK hostname=swarm-manager000002 session=1496436851-iVO4EGwLG7PNWob3jI5qOnPF16rW0Les
OK hostname=swarm-worker000000 session=1496436851-iVO4EGwLG7PNWob3jI5qOnPF16rW0Les
OK hostname=swarm-worker000001 session=1496436851-iVO4EGwLG7PNWob3jI5qOnPF16rW0Les
OK hostname=swarm-worker000002 session=1496436851-iVO4EGwLG7PNWob3jI5qOnPF16rW0Les
OK hostname=swarm-worker000003 session=1496436851-iVO4EGwLG7PNWob3jI5qOnPF16rW0Les
OK hostname=swarm-worker000004 session=1496436851-iVO4EGwLG7PNWob3jI5qOnPF16rW0Les
Done requesting diagnostics.
Your diagnostics session ID is 1496436851-iVO4EGwLG7PNWob3jI5qOnPF16rW0Les
Please provide this session ID to the maintainer debugging your issue.

This could be solved by making the attached VHD the default storage location for the Docker daemon. Even though images and containers stored there would be lost when the VM is reset, all services and stacks would be rescheduled to other nodes, so it shouldn't have much impact on existing applications, and it would let users leverage their swarms better. It would also be good to actually get the space advertised on the VM size page.
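In the meantime, a possible workaround is to point the daemon's data directory at the attached disk via `/etc/docker/daemon.json`. This is only a sketch: the `data-root` key assumes a reasonably recent daemon (older releases used `graph`), and the `/mnt/docker` path assumes the attached/temporary disk is mounted at `/mnt`, which may differ on these nodes. The daemon has to be stopped before changing this and restarted afterwards, and existing images would need to be re-pulled or moved over.

```json
{
  "data-root": "/mnt/docker"
}
```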

Steps to reproduce the behavior

  1. Create any swarm with D2_v2 size VMs
  2. ssh into any node and pull 30GB+ worth of images
  3. See yourself running out of space even though you're supposed to have 100GB, by doing:
     1. cd /
     2. sudo du -d 1 -h -c
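To confirm which filesystem is actually filling up, `df` is quicker than walking the tree with `du`. The mount points below are typical Azure defaults and may vary per node; `/mnt` in particular is an assumption about where the attached/temporary disk lands:

```shell
# Root filesystem: the ~30GB OS disk, where Docker writes by default
df -h /
# Attached/temporary disk (~100GB on D2_v2); path is an assumption,
# and the command is allowed to fail if it isn't mounted there
df -h /mnt || true
```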

gvilarino avatar Jun 02 '17 21:06 gvilarino

Same issue reported on the Docker forum: https://forums.docker.com/t/docker-for-azure-nodes-out-of-space/34312/2

Does anyone know when the upcoming release that would fix this issue is due? Thanks.

yclliu avatar Jul 14 '17 22:07 yclliu

Will be handled as part of https://github.com/docker/for-azure/issues/29

ddebroy avatar Aug 30 '17 18:08 ddebroy