Testing issue: vtk-m (+cuda)
Steps to reproduce the failure(s) or link(s) to test output(s)
Singularity> spack find -dvl vtk-m+cuda
-- linux-ubuntu22.04-x86_64 / [email protected] ------------------------
wmxwujt [email protected]~64bitids+cuda+cuda_native+doubleprecision+examples~fpic~ipo~kokkos~logging+mpi+openmp+rendering~rocm+shared~tbb~testlib build_system=cmake build_type=Release cuda_arch=90 generator=make patches=64177d0
3gq4owh [email protected]~doc+ncurses+ownlibs build_system=generic build_type=Release
bqjvf6e [email protected]~gssapi~ldap~libidn2~librtmp~libssh~libssh2+nghttp2 build_system=autotools libs=shared,static tls=openssl
pz4ulyn [email protected] build_system=autotools
ms4sbsh [email protected]~docs+shared build_system=generic certs=mozilla
5bui7u7 ca-certificates-mozilla@2023-05-30 build_system=generic
fqkbmyz [email protected]+cpanm+opcode+open+shared+threads build_system=generic patches=714e4d1
y2dkgvn [email protected]+cxx~docs+stl build_system=autotools patches=26090f4,b231fcc
mvv5tmf [email protected]~debug~pic+shared build_system=generic
ppgdy35 [email protected] build_system=autotools
vn2i3ot [email protected] build_system=autotools libs=shared,static
xc5bypn [email protected] build_system=autotools
fl4uzzk [email protected] build_system=autotools patches=bbf97f1
xbzeeoo [email protected] build_system=autotools
7a4mbz3 [email protected]~symlinks+termlib abi=none build_system=autotools patches=7a351bc
nsjxqfd [email protected]+compat+new_strategies+opt+pic+shared build_system=autotools
x7rtpp3 [email protected]~allow-unsupported-compilers~dev build_system=generic
6cqtdnc [email protected] build_system=generic
kunl7zm [email protected]~guile build_system=generic
w2qwypv [email protected]~argobots~cuda+fortran~hwloc+hydra+libxml2+pci~rocm+romio~slurm~vci~verbs~wrapperrpath~xpmem build_system=autotools datatype-engine=auto device=ch4 netmod=ofi pmi=pmi
==> 1 installed package
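For reference, a hedged sketch of re-running the stand-alone package test against this install (spack test run / spack test results are Spack's stand-alone test commands; the --alias name and the /wmxwujt hash, taken from the spec listing above, are illustrative):

```shell
#!/bin/sh
# Hypothetical re-run of the failing stand-alone package test.
rerun_vtkm_test() {
    if ! command -v spack >/dev/null 2>&1; then
        echo "spack not on PATH" >&2
        return 1
    fi
    # --alias names the suite so its results can be queried later;
    # -l prints the captured test logs along with the results.
    spack test run --alias vtkm-cuda vtk-m/wmxwujt
    spack test results -l vtkm-cuda
}

rerun_vtkm_test
```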
Error message
==> Error: TestFailure: 1 test failed.
Command exited with status 8:
'/spack/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/cmake-3.27.9-3gq4owhidx3exqfpga3a465lrn25wvbd/bin/ctest' '--verbose'
1 error found in test log:
56
57 The following tests FAILED:
58 1 - SmokeTestInternal (Subprocess aborted)
59 Errors while running CTest
60 Output from these tests are in: /home/users/wspear/.spack/test/azy74y34mqyn6rsjvcyhdjaxwlcsxnvd/vtk-m-2.1.0-wmxwujt/smoke_test_build/Testing/Temporary/LastTest.log
61 Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.
>> 62 FAILED: VtkM::test: Command exited with status 8:
63 '/spack/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/cmake-3.27.9-3gq4owhidx3exqfpga3a465lrn25wvbd/bin/ctest' '--verbose'
64 File "/spack/bin/spack", line 52, in <module>
65 sys.exit(main())
66 File "/spack/lib/spack/spack_installable/main.py", line 42, in main
67 sys.exit(spack.main.main(argv))
68 File "/spack/lib/spack/spack/main.py", line 1068, in main
Information on your system or the test runner
- Spack: 0.22.0.dev0
- Python: 3.10.12
- Platform: linux-ubuntu22.04-zen3
- Concretizer: clingo
Additional information
@kmorel @vicentebolea
The non-cuda build of vtk-m passes its test in this same environment
General information
- [X] I have reported the version of Spack/Python/Platform/Runner
- [X] I have run spack maintainers <name-of-the-package> and @mentioned any maintainers
- [X] I have uploaded any available logs
- [X] I have searched the issues of this repo and believe this is not a duplicate
Is there a platform we can access to replicate this issue?
Currently we're testing in a Singularity image that provides the Spack-installed software. I've pasted the contents of the test log file below. We're running other CUDA-based tests built with this same CUDA version, and they pass without the driver-version error.
I'm looking into replication options. Please let me know if there's anything else diagnostic I can provide in the meantime.
Start testing: May 15 20:54 America
----------------------------------------------------------
1/1 Testing: SmokeTestInternal
1/1 Test: SmokeTestInternal
Command: "/home/users/wspear/.spack/test/azy74y34mqyn6rsjvcyhdjaxwlcsxnvd/vtk-m-2.1.0-wmxwujt/smoke_test_build/smoke_test"
Directory: /home/users/wspear/.spack/test/azy74y34mqyn6rsjvcyhdjaxwlcsxnvd/vtk-m-2.1.0-wmxwujt/smoke_test_build
"SmokeTestInternal" start time: May 15 20:54 America
Output:
----------------------------------------------------------
terminate called after throwing an instance of 'vtkm::cont::cuda::ErrorCuda'
what(): CUDA Error: CUDA driver version is insufficient for CUDA runtime version
Unchecked asynchronous error @ /tmp/root/spack-stage/spack-stage-vtk-m-2.1.0-wmxwujtmduiulmxcalaznuz4cz3pymzv/spack-src/vtkm/cont/cuda/internal/RuntimeDeviceConfigurationCuda.h:40
(Stack trace unavailable)
<end of output>
Test time = 0.19 sec
----------------------------------------------------------
Test Failed.
"SmokeTestInternal" end time: May 15 20:54 America
"SmokeTestInternal" time elapsed: 00:00:00
----------------------------------------------------------
End testing: May 15 20:54 America
If you have access to a system with a Tesla or newer GPU where you can run Singularity, this is the image we used: https://oaciss.nic.uoregon.edu/e4s/images/24.05/e4s-cuda80-x86_64-24.05.sif (running it with the -e and --nv flags should work). It's over 45 GB in size, so I understand if that's not an option for you.
This is the procedure that reproduces the issue we're seeing:
wget https://oaciss.nic.uoregon.edu/e4s/images/24.05/e4s-cuda80-x86_64-24.05.sif
singularity run -e --nv e4s-cuda80-x86_64-24.05.sif
git clone https://github.com/E4S-Project/testsuite.git
cd testsuite/validation_tests/vtk-m-cuda
./run.sh
# To sanity-check that CUDA works in the container you can do
cd ../cuda
./compile.sh
./run.sh
I think the error is that vtk-m builds only for the specified cuda_arch, whereas the other apps may also embed code for older cuda_archs in the same binaries. Can you share the nvidia-smi output of the target system?
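One way to test that hypothesis (a sketch, not part of the original report): the CUDA toolkit's cuobjdump can list the architectures actually embedded in a binary. The helper name is hypothetical, and the smoke_test path in the example comment is taken from the test log above:

```shell
#!/bin/sh
# Hypothetical helper: print the sm_* architectures embedded in a CUDA
# binary, using cuobjdump from the CUDA toolkit if it is available.
list_embedded_archs() {
    if ! command -v cuobjdump >/dev/null 2>&1; then
        echo "cuobjdump not found; is the CUDA toolkit on PATH?" >&2
        return 1
    fi
    # --list-elf enumerates the embedded cubins; each entry names the
    # sm_* architecture it was compiled for.
    cuobjdump --list-elf "$1" | grep -o 'sm_[0-9]*' | sort -u
}

# Example (path from the test log above):
# list_embedded_archs smoke_test_build/smoke_test
```

If this prints only sm_90 for the vtk-m smoke test but a broader set for the apps that pass, that would support the single-cuda_arch explanation.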
Here it is:
Singularity> nvidia-smi
Thu May 23 05:14:14 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.12 Driver Version: 525.85.12 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA H100 PCIe On | 00000000:25:00.0 Off | 0 |
| N/A 45C P0 80W / 310W | 123MiB / 81559MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 5613 G /usr/libexec/Xorg 63MiB |
| 0 N/A N/A 9552 G /usr/bin/gnome-shell 59MiB |
+-----------------------------------------------------------------------------+
The NVIDIA driver seems to be sufficient: https://docs.nvidia.com/deploy/cuda-compatibility/index.html
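That comparison can be scripted with a version sort (a sketch; the 525.60.13 minimum Linux driver for the CUDA 12.0 runtime is taken from NVIDIA's compatibility table and should be double-checked there):

```shell
#!/bin/sh
# Hypothetical helper: succeed if the installed driver version ($1)
# is at least the minimum required version ($2).
driver_at_least() {
    # sort -V orders version strings numerically; the installed driver
    # is sufficient iff the minimum of the pair is the required version.
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# 525.85.12 is the driver reported by nvidia-smi above; 525.60.13 is
# the assumed minimum for the CUDA 12.0 runtime.
if driver_at_least "525.85.12" "525.60.13"; then
    echo "driver is sufficient"
else
    echo "driver is too old"
fi
```

By this check the reported driver should satisfy the CUDA 12.0 runtime, which makes the "driver version is insufficient" message surprising.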
Another possibility is that the process is picking an integrated GPU. Is this a Jetson-type machine?
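A quick way to check which devices the process can see (a hedged sketch; nvidia-smi -L and CUDA_VISIBLE_DEVICES are standard NVIDIA tooling, but the smoke_test invocation in the comment is an assumption):

```shell
#!/bin/sh
# Print the GPUs the driver exposes, or a note if the tool is missing.
gpu_list() {
    if command -v nvidia-smi >/dev/null 2>&1; then
        nvidia-smi -L          # one line per visible GPU
    else
        echo "nvidia-smi not available"
    fi
}

gpu_list

# Pinning the test to device 0 (the H100 reported above) would rule
# out another device being selected:
# CUDA_VISIBLE_DEVICES=0 ./smoke_test
```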