dsub

Resize the secondary persistent disk of a running google-pipelines-worker VM instance

metanav opened this issue 6 years ago · 1 comment

I submitted a long-running job with a 300 GB disk, and the running VM instance now needs more than 300 GB. There is still time to resize the disk while the job runs. I followed the Google Cloud documentation (https://cloud.google.com/compute/docs/disks/add-persistent-disk) to resize the disk, but I am stuck at the last step, where I have to run "resize2fs /dev/sdb". I started the job with dsub --ssh, so I can SSH into the instance, but I cannot find /dev/sdb there. "df -h" and "lsblk" show the device, yet it is not accessible from the logged-in console. I suspect the SSH session is running inside a Docker container. How can I access the host filesystem?
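For reference, the resize steps from that documentation look roughly like this (just a sketch; it assumes the secondary disk shows up as /dev/sdb and is formatted as ext4, and DISK_NAME, 500GB, and ZONE are placeholders):

```
# Grow the persistent disk itself (run from anywhere with gcloud installed).
gcloud compute disks resize DISK_NAME --size=500GB --zone=ZONE

# On the VM: confirm the larger device size is visible.
sudo lsblk

# Grow the ext4 filesystem to fill the resized disk.
sudo resize2fs /dev/sdb
```

It is that last command I cannot run, because /dev/sdb is not reachable from the shell that dsub --ssh drops me into.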

metanav · Nov 02 '19

Hi @metanav !

I was not able to get this to work either, and I believe it has to do with how the Docker containers are set up on the VM. While certainly handy in some situations, changing the runtime environment of a running task is somewhat at odds with the notion of batch computing; the intent of the SSH feature is really debugging.
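If it helps to confirm that, one quick check from the SSH session (a sketch, assuming a standard Docker runtime on the worker VM) is:

```
# Inside a Docker container, /.dockerenv usually exists and the cgroup of
# PID 1 typically mentions "docker"; neither is true in a shell on the host.
test -f /.dockerenv && echo "looks like a Docker container"
grep -q docker /proc/1/cgroup && echo "PID 1 cgroup mentions docker"
```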

That said, it would be a very nice feature of the Pipelines API if it could auto-detect a disk filling up and resize it. One could create a pipeline with a minimum/maximum disk size, and the infrastructure could auto-extend up to the maximum.
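To sketch the idea (note that --max-disk-size is purely hypothetical; --disk-size is the only related flag dsub has today, and the other values are placeholders), a submission might look like:

```
# Hypothetical invocation: --disk-size (in GB) is a real dsub flag,
# but --max-disk-size does NOT exist; it only illustrates the proposed
# auto-extend-up-to-a-maximum behavior.
dsub \
  --provider google-v2 \
  --project MY-PROJECT \
  --disk-size 300 \
  --max-disk-size 1000 \
  --command 'my_long_running_task.sh'
```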

I would suggest posting a question or request to the forum:

https://groups.google.com/forum/#!forum/gcp-life-sciences-discuss

-Matt

mbookman · Nov 11 '19