specify --nv as a default for apps / runscript if app requires it to run
Version of Singularity:
2.5.2
Expected behavior
When I define a runscript that calls a CUDA app, executing the container as ./container.simg does not apply the --nv option. I can run the container with singularity run --nv ./container.simg. Similarly, if I define an app that requires a GPU, there is no point in launching the container without --nv. It would be great to have a way to build the container so that --nv is added as a default of that container.
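To make the difference concrete, here is a minimal sketch of the two invocations (the image name container.simg follows the example above):

```sh
# Passing --nv explicitly makes the host GPU driver available inside:
singularity run --nv ./container.simg

# Running the image directly gives no opportunity to pass --nv:
./container.simg
```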
Actual behavior
./container.simg will launch and the app will complain about having no GPUs.
Steps to reproduce behavior
Create a simple container that calls a CUDA app, or even just nvidia-smi, in the runscript or in an apprun.
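For reference, a minimal recipe along these lines might look as follows; the nvidia/cuda base image and its tag are only illustrative, and the build command assumes Singularity 2.x:

```
Bootstrap: docker
From: nvidia/cuda:9.0-base    # illustrative CUDA base image

%runscript
    # Only sees a GPU if the container was started with --nv
    nvidia-smi

%apprun smi
    # Same problem for the app entry point
    nvidia-smi
```

```sh
sudo singularity build container.simg Singularity
./container.simg                      # launches, but nvidia-smi finds no GPU
singularity run --nv container.simg   # works as expected
```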
Hey @mathiaswagner, can you try the development-2x branch and see if it works for you? It has an environment variable called SINGULARITY_NV that you can use in place of the --nv flag, and you can also set a value in the singularity.conf file to turn --nv on by default.
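A hedged sketch of both approaches; the configuration key shown in the comment below is the one used in later Singularity releases, so check the singularity.conf shipped with the development-2x branch for the exact name:

```sh
# Per invocation, via the environment instead of the flag:
export SINGULARITY_NV=1
./container.simg

# Globally, in singularity.conf (key name as in later releases):
#   always use nv = yes
```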
Thanks for pointing me to that. While it's not exactly what I was asking for, I can work with it. It makes things a lot easier for me.
Hi,
I have a similar problem. I also want to run a runscript via ./container.sif, but in my case I want to ship the container as a demo to other people, so it would be useful if the handling were as simple as possible. Is there a way to integrate automatic loading of the CUDA environment into the container?
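One possible workaround while this feature request is open is to ship a tiny wrapper script next to the image, so end users never have to type the flag themselves; run-demo.sh and container.sif are hypothetical names for this sketch:

```sh
#!/bin/sh
# run-demo.sh (hypothetical): distributed alongside container.sif so the
# demo always starts with GPU support enabled.
exec singularity run --nv "$(dirname "$0")/container.sif" "$@"
```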
Hello,
This is a templated response that is being sent out to all open issues. We are working hard on 'rebuilding' the Singularity community, and a major task on the agenda is finding out what issues are still outstanding.
Please consider the following:
- Is this issue a duplicate, or has it been fixed/implemented since being added?
- Is the issue still relevant to the current state of Singularity's functionality?
- Would you like to continue discussing this issue or feature request?
Thanks, Carter
This issue has been automatically marked as stale because it has not had activity in over 60 days. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
@mathiaswagner Are you still getting this problem? Has it been solved already? If yes, what workaround have you followed and applied?
We're looking into the issue carefully and will soon bring it to the community to discuss ways to better solve and address this. Thank you for keeping up your interest in the subject.
Pending issues from the old repo have been copied to the new repo (https://github.com/apptainer/apptainer/issues/1388) and removed from the old, retired repo.