
Allow fine control over GPU device placement

Open eywalker opened this issue 7 years ago • 2 comments

Currently, nvidia-docker-compose adds and mounts all NVIDIA volumes and devices to every service found in the docker-compose file. It would make sense to let the user specify which service(s) should receive the NVIDIA volumes and devices. Extend the nvidia-docker-compose interface to take extra arguments that specify target services at launch time.

eywalker avatar Sep 22 '16 18:09 eywalker

As of v0.4.0, you can specify which GPU devices should be included by listing devices explicitly in docker-compose.yml. If you don't specify any devices, all GPU devices are still made available to the container, as before. While I think this is an acceptable default, there is currently no way to assign no GPU device to a service. Usually this is not an issue, but if you want to ensure that a service only uses the CPU (e.g., TensorFlow defaults to using a GPU if one is available), then not exposing any GPU device to that service is the cleanest solution.
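A minimal sketch of per-service device selection, assuming the standard docker-compose `devices` syntax and host device paths like `/dev/nvidia0` (the service name and image are placeholders, not from this thread):

```yaml
version: '2'
services:
  trainer:
    image: tensorflow/tensorflow:latest-gpu
    devices:
      # Expose only the first GPU to this service; other GPUs
      # on the host remain invisible to the container.
      - /dev/nvidia0
```

Device paths vary by host; check `ls /dev/nvidia*` to see what is available.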

eywalker avatar Nov 10 '16 21:11 eywalker

This is now supported via the special config keyword `enable_cuda: true/false`, as discussed in #25.
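A hedged sketch of how this might look in docker-compose.yml, assuming `enable_cuda` is written as a per-service key (service names and images below are illustrative placeholders; see #25 for the authoritative syntax):

```yaml
services:
  gpu-worker:
    image: tensorflow/tensorflow:latest-gpu
    # Opt this service in to NVIDIA volume/device mounting.
    enable_cuda: true
  cpu-only:
    image: tensorflow/tensorflow:latest
    # No GPU devices are mounted, so TensorFlow falls back to CPU.
    enable_cuda: false
```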

eywalker avatar Apr 09 '18 17:04 eywalker