nanobox
add generic push/pull functionality to upload/download files to/from production
From @tylerflint on December 15, 2016 15:12
There are times when it is necessary to upload a large file to a production component. For instance, when seeding a database with a very large dump, the local tunnel will time out. In that case, uploading the large file to the component, then console'ing in and importing it, would do the trick.
Could potentially look like this:
`nanobox push data.db /path/to/local/file /path/to/remote`
And `pull` would be the same, just in reverse.
This implementation might span multiple components, so we will need to figure out how and where to do this.
Copied from original issue: nanobox-io/nanobox-docker-build#101
`push` could maybe start with the component root path in `/data/var` if no absolute file path is provided. Thus `nanobox push data.storage foo.txt` would automatically store the file as `/data/var/db/unfs/foo.txt` within the `data.storage` component.
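The default-path behavior described above could be sketched as a small resolver: relative remote paths get the component's data root prepended, while absolute paths pass through unchanged. This is only an illustration of the proposed rule; the function name and the `/data/var/db/unfs` root are assumptions, not nanobox internals:

```shell
# Hypothetical sketch: resolve the remote target path for a push.
# Absolute paths are used as-is; relative paths land under the
# component's data root (here assumed to be /data/var/db/unfs).
resolve_remote_path() {
  local component_root="$1" path="$2"
  case "$path" in
    /*) printf '%s\n' "$path" ;;                       # absolute: use as-is
    *)  printf '%s/%s\n' "$component_root" "$path" ;;  # relative: prepend root
  esac
}

resolve_remote_path /data/var/db/unfs foo.txt        # -> /data/var/db/unfs/foo.txt
resolve_remote_path /data/var/db/unfs /tmp/dump.sql  # -> /tmp/dump.sql
```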
It would also be useful if this could be applied to containers in a more general sense. An example for the `mysql` component:

`nanobox pull data.mysql`

This could, for example, automatically create a dump of the `gonano` database, transfer it locally, and import it into the local `gonano` database. A `push` could do the same in reverse order. Anything else would not make much sense for a DB container, and this workflow would be a lot easier than manually performing dumps/imports.
I could see wanting the dump to be stored in (or retrieved from) a file, rather than the local DB component, but that simply requires a filename argument. Without it, the transfer would naturally default to the local component (with a special keyword, `dry-run`, for that one?).
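A minimal sketch of what the `pull` side of such a DB workflow might assemble internally, under the assumption that it boils down to a dump-and-import pipeline. The helper name, the optional filename argument, and the `mysqldump`/`mysql` commands are illustrative assumptions about a possible implementation, not existing nanobox behavior:

```shell
# Hypothetical: compose the transfer step for `nanobox pull data.mysql`.
# With a filename argument, the dump goes to that file; without one, it
# is piped straight into the local database of the same name.
build_pull_pipeline() {
  local db="$1" file="${2:-}"
  if [ -n "$file" ]; then
    # store the dump in a local file
    printf 'mysqldump --single-transaction %s > %s\n' "$db" "$file"
  else
    # default: import the remote dump into the local database
    printf 'mysqldump --single-transaction %s | mysql %s\n' "$db" "$db"
  fi
}

build_pull_pipeline gonano            # -> mysqldump ... | mysql gonano
build_pull_pipeline gonano dump.sql   # -> mysqldump ... > dump.sql
```

The same helper run in the opposite direction (local dump piped into the remote database) would cover the `push` case.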