
Compile bug: ggml-cuda requires the language dialect "CUDA17"

Open hh2712 opened this issue 2 weeks ago • 1 comment

Git commit

$ git rev-parse HEAD
855cd0734aca26c86cc23d94aefd34f934464ac9

Operating systems

Linux

GGML backends

CUDA

Problem description & steps to reproduce

I'm trying to build llama.cpp with the CUDA backend, but the CMake configure step fails with an error saying the target requires the language dialect "CUDA17". I have already updated my CUDA toolkit to the latest version, cuda-12.8, btw.

First Bad Commit

No response

Compile command

cmake -B build -DGGML_CUDA=ON
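A quick sanity check before configuring is to confirm which nvcc CMake will actually pick up, since an older toolkit earlier on the PATH can shadow a newer install. This is a sketch; the cuda-12.8 path below is an assumed install location, adjust it to your system:

```shell
# Check which nvcc is first on the PATH and its version.
# CUDA17 support requires a reasonably recent nvcc.
which nvcc
nvcc --version

# If an older toolkit shadows cuda-12.8, point CMake at the
# intended compiler explicitly (path is an example):
cmake -B build -DGGML_CUDA=ON \
      -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12.8/bin/nvcc
```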

Relevant log output

-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- Including CPU backend
-- x86 detected
-- Adding CPU backend variant ggml-cpu: -march=native
-- CUDA Toolkit found
-- Using CUDA architectures: 52;61;70;75
-- CUDA host compiler is GNU 8.4.0

-- Including CUDA backend

-- Configuring done (0.4s)
CMake Error in ggml/src/ggml-cuda/CMakeLists.txt:
  Target "ggml-cuda" requires the language dialect "CUDA17" (with compiler
  extensions).  But the current compiler "NVIDIA" does not support this, or
  CMake does not know the flags to enable it.


-- Generating done (0.7s)
CMake Generate step failed.  Build files cannot be regenerated correctly.
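Note that the log above reports "CUDA host compiler is GNU 8.4.0", which is quite old. If a newer GCC is installed, it can be selected explicitly via CMake's standard `CMAKE_CUDA_HOST_COMPILER` variable; `g++-11` below is an assumed compiler name, substitute whatever newer GCC is available:

```shell
# Sketch: force nvcc to use a newer host compiler
# (g++-11 is a placeholder for any recent GCC on the system).
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_HOST_COMPILER=g++-11
```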

hh2712 avatar Feb 07 '25 06:02 hh2712