
add ENV variable to flag build with CUDA (instead of only "torch.cuda…

Open brandonfranzke opened this issue 7 years ago • 2 comments

I want to run the build scripts from within a Docker container. I use nvidia-docker, but the NVIDIA devices are not available during the build by design. All of the CUDA libraries are available, though.

Add an env flag `BUILD_WITH_CUDA` to force building with CUDA even if `torch.cuda.is_available()` returns `False` (such as during `docker build`).
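A minimal sketch of the proposed check, assuming the build script currently gates CUDA compilation on `torch.cuda.is_available()`. The flag name `BUILD_WITH_CUDA` is the one proposed in this issue; the function below stands in for the repo's actual build logic, and the detection result is passed in as a parameter so the sketch runs without `torch` installed:

```python
import os


def should_build_with_cuda(cuda_detected, env=os.environ):
    """Decide whether to compile the CUDA extensions.

    ``cuda_detected`` stands in for ``torch.cuda.is_available()``.
    ``BUILD_WITH_CUDA`` is the env flag proposed in this issue: when set
    to a truthy value it forces a CUDA build even though no GPU device
    is visible (e.g. inside ``docker build``).
    """
    if env.get("BUILD_WITH_CUDA", "").lower() in ("1", "true", "yes"):
        return True
    return cuda_detected
```

With the flag unset, behavior is unchanged; setting `BUILD_WITH_CUDA=1` overrides a negative detection result.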

brandonfranzke avatar Sep 21 '17 05:09 brandonfranzke

Why build it during `docker build`? Can't you build it when you log into the container?

ruotianluo avatar Sep 21 '17 05:09 ruotianluo

This is part of an automated deployment workflow for an image-segmentation webapp. The workflow builds the image on non-GPU instances, and I do not have shell access to the containers after they are built. Once the build completes, the orchestrator deploys the container to GPU servers.

Here is an issue reported to the nvidia-docker project describing in more detail that the NVIDIA devices and volumes are not available during the build, although all libraries and files are:

https://github.com/NVIDIA/nvidia-docker/issues/225
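The workflow above could be sketched as a Dockerfile fragment. This is an assumption about how the flag would be wired in, not part of the repo; the path and build command are hypothetical placeholders:

```dockerfile
# Hypothetical fragment: GPU devices are absent during `docker build`
# (see the nvidia-docker issue above), so the proposed flag forces the
# CUDA code paths to compile anyway.
ARG BUILD_WITH_CUDA=1
ENV BUILD_WITH_CUDA=${BUILD_WITH_CUDA}
# Placeholder for the repo's actual build step.
RUN cd /opt/pytorch-faster-rcnn/lib && ./make.sh
```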

I can also modify the request to use parsed arguments if that would be better; this env-variable method is quick and dirty.

brandonfranzke avatar Sep 21 '17 06:09 brandonfranzke