cugraph
Algorithm - Closeness Centrality
Implement Closeness Centrality
Hi, I'd like to fix this issue. I'm a student from the University of Bologna, and for my thesis project I want to implement the GPU-accelerated version of this algorithm.
We would welcome an implementation of Closeness Centrality!
Happy to collaborate and help you understand anything in the cugraph framework. You might start by looking at our implementation of Betweenness Centrality, which will probably have some similarities with an implementation of Closeness Centrality.
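For anyone picking this up later, a minimal CPU reference sketch of closeness centrality on an unweighted graph may help clarify the target semantics. This is not the cuGraph implementation, just one BFS per source vertex with the standard (n - 1) / (sum of shortest-path distances) normalization for a connected graph; a GPU version would instead parallelize the BFS frontier expansion, much like the existing Betweenness Centrality code does.

```python
from collections import deque

def closeness_centrality(adj):
    """Closeness centrality for an unweighted, connected graph.

    adj: dict mapping each vertex to an iterable of neighbour vertices.
    Returns a dict: vertex -> (n - 1) / sum of BFS distances to all others.
    """
    n = len(adj)
    result = {}
    for source in adj:
        # Single-source BFS to get hop distances from `source`.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total = sum(dist.values())
        result[source] = (n - 1) / total if total > 0 else 0.0
    return result

# Path graph 0 - 1 - 2: the middle vertex is "closest" to the rest.
path = {0: [1], 1: [0, 2], 2: [1]}
print(closeness_centrality(path))  # vertex 1 scores 1.0, the ends score 2/3
```

Running one BFS per vertex is O(V * (V + E)), which is exactly the cost profile that makes a GPU batch-of-BFS formulation attractive for this algorithm.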
Hi, I'm trying to work with the university cluster, and I'm having problems building the project with the script ./build.sh. This is the error:
Building for the architecture of the GPU in the system... CMake Error at /public.hpc/marco.galeri/miniforge3/envs/cugraph_dev/share/cmake-3.29/Modules/CMakeDetermineCompilerId.cmake:814 (message): Compiling the CUDA compiler identification source file "CMakeCUDACompilerId.cu" failed.
Compiler: /public.hpc/marco.galeri/miniforge3/envs/cugraph_dev/bin/nvcc
Build flags:
Id flags: --keep;--keep-dir;tmp -v
The output was:
1
#$ NVVM_BRANCH=nvvm
#$ SPACE=
#$ CUDART=cudart
#$ HERE=/usr/lib/nvidia-cuda-toolkit/bin
#$ THERE=/usr/lib/nvidia-cuda-toolkit/bin
#$ TARGET_SIZE=
#$ TARGET_DIR=
#$ TARGET_SIZE=64
#$ NVVMIR_LIBRARY_DIR=/usr/lib/nvidia-cuda-toolkit/libdevice
#$ PATH=/usr/lib/nvidia-cuda-toolkit/bin:/public.hpc/marco.galeri/miniforge3/envs/cugraph_dev/bin:/public.hpc/marco.galeri/miniforge3/condabin:/home/students/marco.galeri/.vscode-server/cli/servers/Stable-dc96b837cf6bb4af9cd736aa3af08cf8279f7685/server/bin/remote-cli:/usr/local/bin:/usr/bin:/bin:/usr/games
#$ LIBRARIES= -L/usr/lib/x86_64-linux-gnu/stubs -L/usr/lib/x86_64-linux-gnu
#$ rm tmp/a_dlink.reg.c
#$ "/public.hpc/marco.galeri/miniforge3/envs/cugraph_dev/bin"/x86_64-conda-linux-gnu-c++ -D__CUDA_ARCH__=520 -E -x c++ -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=2 -D__CUDACC_VER_BUILD__=152 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=2 -include "cuda_runtime.h" -m64 "CMakeCUDACompilerId.cu" -o "tmp/CMakeCUDACompilerId.cpp1.ii"
compilation terminated.
--error 0x1 --
Call Stack (most recent call first): /public.hpc/marco.galeri/miniforge3/envs/cugraph_dev/share/cmake-3.29/Modules/CMakeDetermineCompilerId.cmake:8 (CMAKE_DETERMINE_COMPILER_ID_BUILD) /public.hpc/marco.galeri/miniforge3/envs/cugraph_dev/share/cmake-3.29/Modules/CMakeDetermineCompilerId.cmake:53 (__determine_compiler_id_test) /public.hpc/marco.galeri/miniforge3/envs/cugraph_dev/share/cmake-3.29/Modules/CMakeDetermineCUDACompiler.cmake:131 (CMAKE_DETERMINE_COMPILER_ID) CMakeLists.txt:28 (project)
-- Configuring incomplete, errors occurred!
Can you provide output from nvidia-smi and nvcc --version?
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_14_21:12:58_PST_2021
Cuda compilation tools, release 11.2, V11.2.152
Build cuda_11.2.r11.2/compiler.29618528_0
Tue May 21 08:39:13 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.182.03 Driver Version: 470.182.03 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:00:10.0 Off | N/A |
| 86% 81C P0 122W / 250W | 1MiB / 11019MiB | 99% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
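One detail worth checking in output like the above: the driver reports support up to CUDA 11.4, while the reported nvcc is 11.2 and the compiler-id log shows it resolving helper binaries from /usr/lib/nvidia-cuda-toolkit/bin rather than the conda environment. A hedged diagnostic sketch (the paths and the use of CUDA_HOME are assumptions, not something the log confirms) for seeing which toolchain CMake will actually pick up:

```shell
# Which nvcc is first on PATH? CMake's compiler detection uses this one
# unless CMAKE_CUDA_COMPILER is set explicitly.
NVCC_PATH="$(command -v nvcc || echo "not found")"
echo "nvcc on PATH: ${NVCC_PATH}"

# Toolkit version that nvcc reports (may differ from the driver's
# "CUDA Version" line in nvidia-smi, which is only an upper bound).
nvcc --version 2>/dev/null | tail -n 1 || echo "nvcc not runnable here"

# Maximum CUDA version the installed driver supports.
nvidia-smi 2>/dev/null | grep "CUDA Version" || echo "nvidia-smi unavailable"

# A system toolkit in /usr/lib can shadow or be mixed with a conda one;
# CUDA_HOME (if the cluster sets it) hints at which install is intended.
echo "CUDA_HOME=${CUDA_HOME:-unset}"
```

If the nvcc on PATH belongs to a different install than the libraries CMake finds, pointing the build at a single consistent toolkit (or rebuilding the conda environment) is usually the first thing to try.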
I'm not prepared enough to fix this issue yet, so I'm dropping this project.
Thanks for your interest. If you have further interest in the future please let us know.