
CMakeLists refinements for CUBLAS and pthread

Open luav opened this issue 2 years ago • 2 comments

  • CUDA_ARCHITECTURES defaults to "all" ("all" works on various platforms where "native" does not) instead of OFF when a) it is not defined explicitly and b) CUBLAS is used;
  • pthread is linked properly on Linux;
  • the CUDA-based std is enabled when CUBLAS is used;
  • the cmake version in the README is fixed to match CMakeLists.txt (3.17 is required for LLAMA_CUBLAS).
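The list above can be sketched as a CMakeLists.txt fragment. This is illustrative only: the target name `ggllm` is a placeholder, not necessarily the project's actual target, and note that the special value "all" for CMAKE_CUDA_ARCHITECTURES is only understood by newer CMake releases (3.23+ with NVCC).

```cmake
# Sketch only -- option and target names are illustrative.
cmake_minimum_required(VERSION 3.17)  # 3.17+ is needed for LLAMA_CUBLAS

if (LLAMA_CUBLAS AND NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
    # "all" builds for every architecture the toolkit supports; unlike
    # "native", it does not require a GPU visible at configure time.
    # (The "all" keyword itself requires CMake 3.23+ with NVCC.)
    set(CMAKE_CUDA_ARCHITECTURES "all")
endif()

# Link pthread portably on Linux instead of hard-coding -lpthread.
find_package(Threads REQUIRED)
target_link_libraries(ggllm PRIVATE Threads::Threads)
```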

luav avatar Jul 11 '23 12:07 luav

The compilation was validated on Ubuntu 20.04 x64 (GeForce MX150) and Windows 10 x64 (GeForce RTX 3050 Ti) for both CPU and CUDA GPU builds. The following commands were used for the GPU builds:

build$ cmake -DCMAKE_CUDA_ARCHITECTURES="all" -DLLAMA_F16C=0 -DLLAMA_FMA=0 -DLLAMA_AVX=0 -DLLAMA_AVX2=0 -DCMAKE_C_FLAGS="-march=native" -DLLAMA_CUBLAS=1 ..
build$ cmake --build . --config Release -j 4

luav avatar Jul 11 '23 12:07 luav

I'll need to look at that in greater detail. I'm not sure switching CUBLAS auto-off is the right solution: people who want to compile with CUDA would silently get a CPU-only binary, which is probably more confusing than an error saying CUDA was not found. It also means everything gets compiled wrongly, which takes time and has to be wiped once the actual problem is solved.
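The fail-fast behaviour argued for here could be sketched like this (a hypothetical fragment, not the project's actual CMakeLists.txt; `FindCUDAToolkit` is available from CMake 3.17):

```cmake
# Sketch: fail loudly rather than falling back to a CPU-only build.
if (LLAMA_CUBLAS)
    find_package(CUDAToolkit)
    if (CUDAToolkit_FOUND)
        enable_language(CUDA)
        add_compile_definitions(GGML_USE_CUBLAS)  # illustrative define
    else()
        # A hard error is clearer than silently producing a CPU-only binary.
        message(FATAL_ERROR "LLAMA_CUBLAS was requested but the CUDA toolkit was not found")
    endif()
endif()
```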

Regarding the fixes, I recall there were troubles with the architecture changes they made on llama.cpp, but I don't know the actual implications of this change.

I've compiled it fine on Linux, Windows and WSL, with and without CUDA support. I'm not sure which exact scenarios are improved now (and whether this introduces issues we didn't have before).

cmp-nct avatar Jul 12 '23 12:07 cmp-nct