
Pushing a large image to docker registry too fast results in failure

Open calum-github opened this issue 10 years ago • 3 comments

I have a data-only container that I have built. It contains a 4.7 GB disk image file, so my docker image is quite large, with only a couple of layers.

When I push this image to my docker registry machine, the push fails anywhere from 400 to 800 MB of the way in.

99804dc9a240: Pushing [====>                                              ] 393.3 MB/4.717 GB 4m23s
FATA[0166] Failed to upload layer: Put http://10.100.135.101:5000/v1/images/99804dc9a24072337509dd988da16acd48baddeb70b6eac6e85c5321ea31cc59/layer: write tcp 153.107.39.10:80: broken pipe

If I purposely cripple the machine that is doing the pushing and limit its network bandwidth to, say, 2 Mb/sec, I am then able to push the large image with no problem:

99804dc9a240: Pushing [==================================================>] 4.717 GB/4.717 GB
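
For anyone wanting to reproduce the throttling, a rough sketch using Linux tc on the pushing machine (eth0 and the 2mbit rate are placeholders for whatever your host actually uses):

    # cap egress on the pushing host to roughly 2 Mbit/s
    sudo tc qdisc add dev eth0 root tbf rate 2mbit burst 32kbit latency 400ms
    # remove the cap again afterwards
    sudo tc qdisc del dev eth0 root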

I think I am running into the issue with memory leaks in hipache as per https://github.com/hipache/hipache/issues/98

My docker image build machine and my docker registry machine are on the same LAN, connected via gigabit ethernet, and both have fast disks. It therefore seems quite possible that the client machine doing the pushing is simply feeding the docker-registry data faster than it can handle.

Is there any way I can fix this? Would putting some kind of nginx proxy in front of the docker registry help?

calum-github avatar Jan 29 '15 05:01 calum-github

Can you provide your docker-registry logs (of a failed push)?

Thanks

dmp42 avatar Jan 29 '15 18:01 dmp42

Hi, what are you using to proxy the registry? I was having similar problems with large images, and had to enable chunked transfer encoding on the proxy (in my case, nginx).

See: https://github.com/docker/docker-registry/blob/master/ADVANCED.md#nginx
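
For reference, the relevant part of the nginx config ends up looking roughly like this; a minimal sketch along the lines of the linked ADVANCED.md, with placeholder listen/upstream ports rather than a drop-in config:

    server {
        listen 5000;

        # allow arbitrarily large layer uploads and accept chunked request bodies
        client_max_body_size 0;
        chunked_transfer_encoding on;

        location / {
            proxy_pass http://localhost:5001;          # the docker-registry process
            proxy_set_header Host       $http_host;
            proxy_set_header X-Real-IP  $remote_addr;
            proxy_read_timeout 900;
        }
    }

Without those two settings, nginx tends to refuse the multi-gigabyte chunked layer PUTs (HTTP 413/411), which the client reports as a failed push.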

mattgiles avatar Feb 02 '15 04:02 mattgiles

I'm seeing this issue on Arch Linux 2016.01.01, behind a newish Linksys router and a Comcast cable modem, pushing from my local docker instance to an AWS Elastic Container Registry. Essentially, the push consumes every ounce of available upload bandwidth and brings the rest of my network to its knees. I've attempted to limit the bandwidth use via trickle, to no avail. It would be great if there were a command-line switch to control the maximum upstream or downstream bandwidth to use. All this to say that it might be the docker push/pull commands that need the attention.
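
For what it's worth, trickle probably can't help here: it works by preloading a shim library into the process you launch, but the upload itself is done by the Docker daemon (a separate, typically statically linked Go binary), so the shim never sees that traffic. One daemon-side sketch instead, not a bandwidth cap, and only available on newer Docker releases:

    # /etc/docker/daemon.json -- fewer concurrent layer uploads
    # (reduces saturation, but is not a hard bandwidth limit)
    {
        "max-concurrent-uploads": 1
    }

Beyond that, shaping egress on the interface the daemon uses (e.g. with tc, as in the sketch further up the thread) is the closest thing to a per-host cap until docker grows a native switch for it.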

bramswenson avatar Feb 17 '16 12:02 bramswenson