
Add commands to snapshot and save a container

cboettig opened this issue on Oct 06 '14 · 9 comments

e.g.

  • docklet_commit() and docklet_push(), calling docker commit and docker push respectively; or download the image locally with a docklet_save() that would do something like:
docker save -o imagename.tar <container-name/hash>

and then scp it to the local host (a rough sketch of the whole sequence is below).
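A minimal shell sketch of what docklet_save() might run under the hood (container, image, and droplet names here are hypothetical):

# commit the running container to an image
docker commit <container-name/hash> myuser/myimage:snapshot

# write the image, with all its layers, to a tarball on the droplet
docker save -o myimage.tar myuser/myimage:snapshot

# from the local machine, copy the tarball off the droplet
scp root@<droplet-ip>:myimage.tar .

# restore it later with: docker load -i myimage.tar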

cboettig avatar Oct 06 '14 21:10 cboettig

I agree, this should be useful.

I am just wondering why you would ever download the image to a local drive? I believe a strong point of Docker and its ecosystem is the registries and the fact that you never need to work with the (potentially large) image file itself. If you want to "archive the image", just make sure you have a Docker registry under your control somewhere and "push" it there. It's a bit like how we no longer work with tarballs of source code, because we have git and GitHub.

Docker "push" and "pull" are very efficient, as they not not need to move the base-image layers around, once they are present in a given registry. If you use "save" and "download" you always get the whole large image, even after a small change. That overloads the bandwidth and your local drive quickly.

behrica avatar Oct 07 '14 08:10 behrica

Does docker push need some setup so that it knows about your Docker Hub account?

hadley avatar Oct 07 '14 14:10 hadley

@behrica completely agree that docklet_push() should be seen as the default mechanism. (@hadley: docker push will prompt for credentials (user name, password, email) if you haven't run docker login, which saves credentials in ~/.dockercfg.)
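In practice that is just the following (image name hypothetical):

docker login                 # prompts for username, password, and email; caches them in ~/.dockercfg
docker push myuser/myimage   # subsequent pushes reuse the cached credentials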

I think there are two use cases for save, one practical and one more esoteric.

  • practical: cost for a private image. Docker Hub gives you one free private repository, after which you have to pay for more. Of course one can host one's own registry, but running a server also has costs and is not for everyone. Downloading the .tar is indeed far less efficient, but it is a simple way to store private images.
  • esoteric: from a reproducibility standpoint we ideally want access to the bitwise-identical environment in which the code was produced. Future updates to (or loss of) the hosted image may cause us to lose this. It's awesome that docker lets you roll back history, but this is limited to AUFS layers. That's good in the case of sequential commits of manual changes to an image, but not perfect (e.g. rebuilding the image from the Dockerfile results in differences).

cboettig avatar Oct 07 '14 15:10 cboettig

@cboettig is there a non-interactive way to set up your docker login credentials?

hadley avatar Oct 21 '14 15:10 hadley

@hadley hmm... it looks like you could just create a .dockercfg file in the user's home directory. The file is JSON, like so:

{"https://index.docker.io/v1/":{"authkey":"<key>==","email":"[email protected]"}}

If docker finds this file (and the user has read permission on it) then you won't need to authenticate. You probably know that already; I'm not sure how to generate the key without using sudo docker login. It looks like you can also authenticate, create repos, etc. using the API: https://docs.docker.com/reference/api/docker-io_api/#authorize-a-token-for-a-user-repository but those docs left me a bit vague on how authorizing a user token works...
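That said, here is a minimal sketch of doing it non-interactively, assuming the key is just the base64-encoded user:password pair as in the legacy .dockercfg format (credentials and email are placeholders):

# encode the credentials; base64 of "user:password" is the legacy auth value
AUTH=$(printf 'myuser:mypassword' | base64)

# write the config file docker reads when pushing/pulling
cat > ~/.dockercfg <<EOF
{"https://index.docker.io/v1/":{"auth":"$AUTH","email":"myuser@example.com"}}
EOF

chmod 600 ~/.dockercfg   # keep credentials readable only by the user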

cboettig avatar Oct 21 '14 15:10 cboettig

@cboettig we haven't addressed this yet, have we? I don't see any commits mentioning this issue.

sckott avatar Aug 07 '15 20:08 sckott

Right, don't think so.

cboettig avatar Aug 10 '15 17:08 cboettig

@cboettig

and then scp it to the local host.

does that mean your machine?

sckott avatar Aug 10 '15 18:08 sckott

Right

cboettig avatar Aug 10 '15 18:08 cboettig