docker-osx-dev
Sync constantly running out of space
Steps to reproduce:
- Create a machine with the following resources: `docker-machine create --driver virtualbox --virtualbox-disk-size 25000 --virtualbox-cpu-count 2 --virtualbox-memory 2048 testmachine`
- Start the machine
- Create a `docker-compose.yml` (a minimal example is sketched below)
- Run Compose
- Run `docker-osx-dev` on a project with 10+ GB of data to sync
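The issue doesn't include the compose file itself; a minimal, hypothetical `docker-compose.yml` with a host-mounted volume might look like this (service name, image, and paths are placeholders, not taken from the original report):

```yaml
# Hypothetical minimal docker-compose.yml (Compose v1 format, current in early 2016);
# service name, image, and paths are placeholders.
frontend:
  image: nginx:latest
  volumes:
    # Host directory that docker-osx-dev keeps in sync with the boot2docker VM
    - /Users/ain/projects/testmachine/frontend:/usr/share/nginx/html
  ports:
    - "8080:80"
```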
What happens: sync runs for a minute and fails with:
2016-02-10 16:41:51 [INFO] rsync: write failed on "/Users/ain/projects/testmachine/shared/docker/assets/global_images/image/file/582/xofoiapdodpoda132.png": No space left on device (28)
2016-02-10 16:41:51 [INFO] rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1]
2016-02-10 16:41:51 [INFO] rsync: [sender] write error: Broken pipe (32)
2016-02-10 16:41:56 [INFO] Initial sync done
What should happen: sync should complete successfully.
If you run `docker ps -a` and `docker images`, you'll get a list of all the containers and images on your system, all of which are also stored in VirtualBox and take up a lot of room. Between those and the sync data, it's certainly possible you'll be out of space. The only workaround I can think of for now is to either clean up the files or increase the VirtualBox disk size.
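A hedged sketch of both workarounds, using the `testmachine` name from the repro steps (the cleanup commands delete stopped containers and dangling images, and recreating the machine wipes it entirely, so use with care):

```sh
# Free space inside the VM: remove stopped containers and dangling images
docker rm $(docker ps -aq)
docker rmi $(docker images -q -f dangling=true)

# Or recreate the machine with a larger virtual disk (50 GB here, as an example)
docker-machine rm testmachine
docker-machine create --driver virtualbox --virtualbox-disk-size 50000 \
  --virtualbox-cpu-count 2 --virtualbox-memory 2048 testmachine
```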
I think the problem is that the copy function works with tmpfs (because of `tar`?) and therefore throws up when the space there gets consumed.
To back this speculation up, take a look:
tmpfs 1.8G 1.8G 0 100% /
tmpfs 1001.3M 956.0K 1000.3M 0% /dev/shm
/dev/sda1 23.0G 4.4G 17.4G 20% /mnt/sda1
cgroup 1001.3M 0 1001.3M 0% /sys/fs/cgroup
/dev/sda1 23.0G 4.4G 17.4G 20% /mnt/sda1/var/lib/docker/aufs
none 23.0G 4.4G 17.4G 20% /mnt/sda1/var/lib/docker/aufs/mnt/0dde036b38ed818d3e439c21290fe6153d2f0bcba5888f003a721448ca8cfdb8
shm 64.0M 0 64.0M 0% /mnt/sda1/var/lib/docker/containers/56cd6d371e51fe2ead65646df783c80c90e2375d7ecf872a1cbea2301a597335/shm
none 23.0G 4.4G 17.4G 20% /mnt/sda1/var/lib/docker/aufs/mnt/604e0b2425832218c3178441f90d3c926ca722c54e47e5e7f4b1a51a1afdbcc0
shm 64.0M 4.0K 64.0M 0% /mnt/sda1/var/lib/docker/containers/2b4bd577c0c5f53c5e5e7975fc41ef4156eefd2f990e222be590cc493c3fc84a/shm
none 23.0G 4.4G 17.4G 20% /mnt/sda1/var/lib/docker/aufs/mnt/6a89248377d8a1527d9a8410444882b39e2dd881a22602cfeeceb809748f4f3c
shm 64.0M 0 64.0M 0% /mnt/sda1/var/lib/docker/containers/51c4fa4fdc8e637f08387d5f90ed87657cc571f8a552d47e53ad795e7204aa68/shm
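For reference, a listing like the one above can be pulled from the host with `docker-machine` (assuming the machine is the `testmachine` from the repro steps):

```sh
# Show disk usage inside the boot2docker VM; note that / is a RAM-backed tmpfs
docker-machine ssh testmachine df -h
```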
Hm, that could be. Not sure how to change that with `tar` either. Did you not have the same issue with this same folder before `tar` was introduced and we were using `rsync` for the initial sync?
Can't tell. I kept these gigabytes of assets inside another container earlier and didn't have a problem.
We should track down the commit before the tar merge and test against that one to clarify.
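A hedged sketch of how that test could go, assuming a local clone of docker-osx-dev (the grep pattern and commit placeholder are illustrative, not actual hashes from the repo):

```sh
# Look for the merge that introduced the tar-based initial sync
git log --oneline --grep=tar

# Check out the commit just before that merge and re-run the sync from there
git checkout <commit-before-tar-merge>
```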
Reproduced. It's a `tar` problem: I was running without `tar` last week and all was fine; I upgraded on Friday, and today, after rebooting the whole machine, I get this:
2016-02-29 09:45:43 [INFO] Initial sync using tar for /Users/ain/projects/…/frontend
tar: write error: No space left on device
exit status 1
and
$ dockersize
Filesystem Size Used Available Use% Mounted on
tmpfs 1.8G 1.8G 72.0K 100% /
tmpfs 1001.3M 0 1001.3M 0% /dev/shm
/dev/sda1 27.8G 19.8G 6.5G 75% /mnt/sda1
cgroup 1001.3M 0 1001.3M 0% /sys/fs/cgroup
/dev/sda1 27.8G 19.8G 6.5G 75% /mnt/sda1/var/lib/docker/aufs
The thing is, `tar` works against tmpfs, which on my Machine instance is backed by the 2048 MB of memory. That 72K there is all that remained. We can't have `tar` working in memory here.
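For illustration only, here is a rough sketch of what a tar-based initial sync over ssh looks like; this is not docker-osx-dev's actual code, and the paths and machine name are placeholders. The point is that when the destination path lives under the VM's tmpfs root `/` rather than the persistent `/mnt/sda1`, every extracted byte counts against RAM:

```sh
# Illustrative only: stream a tarball from the host and unpack it inside the VM.
SRC=/Users/ain/projects/testmachine/frontend   # placeholder host path
DEST=/Users/ain/projects/testmachine/frontend  # placeholder path inside the VM
tar -C "$SRC" -cf - . \
  | docker-machine ssh testmachine "mkdir -p $DEST && tar -C $DEST -xf -"
```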
I was able to circumvent the problem by applying a better `.dockerignore`.
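The comment doesn't say which entries helped; a hypothetical `.dockerignore` that keeps the bulky asset directory from the earlier log out of the sync might look like this (the paths are examples to adjust for your own project):

```
# Hypothetical .dockerignore entries; adjust to your project
.git
node_modules
docker/assets/global_images
*.log
```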
I was having the same problem. Constantly getting:
2016-03-16 00:46:09 [INFO] Initial sync using tar for /Users/...
tar: write error: No space left on device
exit status 1
while running `docker-osx-dev`, even after recreating the docker machine instance.
I was finally able to finish the initial sync and move on to docker-compose after upsizing the memory available to the Docker machine instance from 3GB to 4GB.
I suspect that, in fact, `tar` works with tmpfs and it is limited by the memory allocated to the docker machine.
Yup, that is really the case here.
I'm getting this same problem, but from what's written above, I'm still not sure how to fix it. How do I "increase the memory available to the docker machine instance"? Or "apply a better `.dockerignore`"?
I was able to find an answer that worked for me:
boot2docker stop
VBoxManage modifyvm boot2docker-vm --memory 3500
boot2docker start
See http://stackoverflow.com/questions/24422123/change-boot2docker-memory-assignment
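For setups created with `docker-machine` rather than the legacy `boot2docker` CLI (as in the repro steps at the top), a hedged equivalent would be the following; `testmachine` is the machine name from this issue and 4096 MB is just an example value:

```sh
docker-machine stop testmachine
# The VirtualBox VM carries the same name as the docker-machine instance
VBoxManage modifyvm testmachine --memory 4096
docker-machine start testmachine
```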