collect2: error: ld returned 1 exit status
When I run your code on Linux with CUDA and GCC, I get the error "/usr/bin/ld: cannot open output file ../bin/ep.B: No such file or directory". I do not know the reason for it, so I need your help to deal with the problem.
Hello.
First of all, thank you for using our NPB version.
Second, please provide us with information about your hardware and software. This information includes the operating system, Linux kernel, CUDA, and GCC versions. We must also know your GPU model (for instance, an NVIDIA GTX 1070 GPU).
Third, you seem to be facing an issue with compiling the NPB. When you compile the EP benchmark (with the command make ep CLASS=B) in the NPB root directory ("CUDA"), does it generate a binary file in the "../bin" directory? Do you have the directory "bin" in the NPB root directory? We must have this directory to compile the benchmarks.
Thank you for your suggestions. I run the code on a server. The operating system is CentOS Linux 7, the CUDA version is release 10.1, V10.1.105, and the GCC version is 7.5.0. I am a user on the server, but I do not have administrator privileges. Does the code need write access to the "bin" directory?
I mentioned the "bin" directory in the NPB suite (so you don't need administrator privileges). For instance, you can follow this set of steps:
- Open the NPB directory with the command cd NPB-GPU
- Open the CUDA directory with the command cd CUDA
- Check whether there is a directory called bin with the command ls
- If there is no directory named bin, create one with the command mkdir bin
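The steps above can be sketched as a short shell sequence. Note that the temporary directory below is only a stand-in for your NPB-GPU/CUDA directory, so the snippet is self-contained; adjust the path for your checkout:

```shell
# Sketch of the steps above: make sure a "bin" directory exists before compiling.
# The mktemp directory is a stand-in for your NPB-GPU/CUDA directory.
cd "$(mktemp -d)"
mkdir -p bin      # creates "bin" only if it is missing; safe to re-run
ls -d bin         # prints "bin", confirming the directory exists
```

Using mkdir -p is equivalent to the check-then-create steps above: it does not fail when the directory already exists.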
Additionally, follow this set of steps and send us the output:
- Open the NPB directory with the command cd NPB-GPU
- Open the CUDA directory with the command cd CUDA
- Compile the EP program with class B by executing the command make ep CLASS=B, and send us the output
- Execute the EP program with class B by running the command bin/ep.B, and send us the output
Please tell us your GPU model if these steps do not work.
Thank you again for your suggestions. I followed them and got a result. The following photo is a screenshot of the result. Is it a correct output?

By the way, could you describe the meaning of the parameters, if it is convenient for you?
First, your output is incorrect. Look at the line with the word "verification"; if the verification is UNSUCCESSFUL, the results are wrong. If the verification is SUCCESSFUL, the results are correct.
Second, it seems like you are using an old version of the repository. The execution should print the CPU and GPU models. Please download the current version of the repository using the command git clone https://github.com/GMAP/NPB-GPU.git
Third, you must compile the source code using the compute capability of your GPU. For this purpose, you must edit the file make.def in the NPB:
- Open the NPB directory with the command cd NPB-GPU
- Open the CUDA directory with the command cd CUDA
- Open the config directory with the command cd config
- Open the make.def file in a text editor of your preference
- At the line COMPUTE_CAPABILITY = -gencode arch=compute_61,code=sm_61, replace 61 with the compute capability of your GPU
- You can quickly check the compute capability of your GPU on Wikipedia
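As an illustration, the replacement can also be done from the command line with sed (assuming GNU sed). Compute capability 7.0, for example, corresponds to a Tesla V100; in your checkout the file would be config/make.def, while the snippet below recreates a stand-in make.def so that it is self-contained:

```shell
# Work in a throwaway directory and recreate the make.def line to be edited
# (in the real repository the file is NPB-GPU/CUDA/config/make.def).
cd "$(mktemp -d)"
printf 'COMPUTE_CAPABILITY = -gencode arch=compute_61,code=sm_61\n' > make.def
# Replace compute capability 61 by 70 (e.g. for a Tesla V100).
sed -i 's/compute_61,code=sm_61/compute_70,code=sm_70/' make.def
cat make.def   # prints: COMPUTE_CAPABILITY = -gencode arch=compute_70,code=sm_70
```

Targeting the full arch=/code= pattern (rather than a bare "61") avoids accidentally changing other numbers in the file.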
Fourth, if the compute capability of your GPU is lower than 6.0, please add this source code to the EP.cpp code (CUDA provides a native atomicAdd for double only from compute capability 6.0 onward):
#if !defined(__CUDA_ARCH__) || __CUDA_ARCH__ >= 600
#else
/* Software fallback for atomicAdd on double, needed on GPUs with
 * compute capability below 6.0. */
static __inline__ __device__ double atomicAdd(double* address, double val) {
    unsigned long long int* address_as_ull = (unsigned long long int*) address;
    unsigned long long int old = *address_as_ull, assumed;
    do {
        assumed = old;
        /* Retry the compare-and-swap until no other thread has changed the value. */
        old = atomicCAS(address_as_ull, assumed,
                __double_as_longlong(val + __longlong_as_double(assumed)));
    } while (assumed != old);
    return __longlong_as_double(old);
}
#endif
By the way, could you describe the meaning of the parameters, if it is convenient for you?
Do you want to know the meaning of which NPB parameters?
The GPU of this server is a Tesla V100. According to the Wikipedia page, I replaced 61 with 70. But I found many errors when running the new program on my server. The screenshot shows the result.

- In the file make.def, at the line where we define NVCC = nvcc, you could modify it by explicitly setting the CUDA include and library paths (using your CUDA path on the server), for instance: NVCC = nvcc -I/usr/local/cuda/include -lm -L/usr/local/cuda/lib64
- If modifying the make.def file does not work, please show us the content of your npbparams.hpp file. This file is generated at compilation time and includes the macros used in the NPB programs. The npbparams.hpp should have content like this:
/* CLASS = B */
/*
* This file is generated automatically by the setparams utility.
* It sets the number of processors and the class_npb of the NPB
* in this directory. Do not modify it by hand.
*/
#define CLASS 'B'
#define M 30
#define CONVERTDOUBLE FALSE
#define COMPILETIME "25 Oct 2022"
#define NPBVERSION "4.1"
#define LIBVERSION "11.6.124"
#define COMPILERVERSION "11.6.124"
#define CPU_MODEL "AMD Ryzen 5 5600X 6-Core Processor"
#define CS1 "${NVCC} ${EXTRA_STUFF}"
#define CS2 "$(CC)"
#define CS3 "-lm "
#define CS4 "-I../common "
#define CS5 "-O3"
#define CS6 "-O3"
#define CS7 "randdp"
#define GPU_DEVICE 0
...
- Do the other NPB programs also present this problem? Can you successfully run other programs using CUDA on your server (like a hello world or anything else)?