Andrew Morgan
Did you ever solve this?
Forgot to tag the maintainers @bvanessen @scothalverson
Also ran `spack -d install` and got this, which I think is where the problem is:
```
==> [2024-04-22-20:59:43.284050] Collecting libraries for cuda
==> [2024-04-22-20:59:43.284170] Find (not recursive): /lustre/home/br-amorgan/spack/opt/spack/linux-rhel8-zen3/gcc-11.2.0/cuda-11.8.0-a6mrb5lipkvpg2wyynrlc3kobbgcopsl/lib64 ['libcudart.so']
```
And...
Yeah, I'm trying to install NVSHMEM using NVHPC as the compiler. I didn't know there was a difference between standalone CUDA and the CUDA bundled inside NVHPC. Should I set CUDA_HOME to standalone...
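For what it's worth, one way to steer Spack toward a standalone CUDA instead of the copy bundled inside the NVHPC SDK is to register it as an external; the path and version below are placeholders, not taken from this setup:
```
# Sketch, assuming a standalone CUDA at /usr/local/cuda-11.8 (placeholder path):
# record it as a Spack external so the build doesn't fall back to the NVHPC SDK's CUDA.
spack external find --path /usr/local/cuda-11.8 cuda
spack config blame packages            # confirm which cuda external got recorded
spack install nvshmem %nvhpc ^cuda@11.8.0
```
With the external registered, the `^cuda@11.8.0` dependency should concretize to that standalone prefix rather than the CUDA shipped under the NVHPC SDK tree.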
I get the same error with CUDA and CUDA_HOME set to my system version of CUDA.
```
33 /lustre/home/br-amorgan/spack/lib/spack/env/gcc/g++ -O2 -I /lustre/software/x86/tools/nvidia/hpc_sdk/Linux_x86_64/24.1/cuda/include -I ../include -I ../src -I /lustre/software/x86/tools/nvidia/hpc_sdk/Linux_x86_64/24.1/cuda/include -I/lustre/home/br-amor...
```
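In case it helps with debugging, a couple of commands to check which CUDA Spack is actually resolving for the build (the spec here is just a guess at how nvshmem is being requested):
```
# Show the concretized dependency tree for nvshmem and which cuda version is selected
spack spec -I nvshmem %nvhpc
# List install prefixes of the cuda packages/externals Spack knows about
spack find --paths cuda
```
That compile line is pulling headers from the NVHPC SDK's bundled CUDA (`.../hpc_sdk/.../cuda/include`), so it may be worth checking whether that matches the cuda prefix Spack reports.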
Out of curiosity, has your team been able to build with nvhpc, @scothalverson?
I have a standalone nvhpc on the cluster I am working on, so I'll give that a try as well.
Wondering the same thing
@gushengbo did you find any solution?
Please make a new issue if you have a new question