Bennison J
> @arthurwolf You can try building using the following, it worked for me.
>
> `CUDACXX=/usr/local/cuda-12/bin/nvcc CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCMAKE_CUDA_ARCHITECTURES=native" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir --force-reinstall --upgrade`

Thanks, it worked.
Is it available on Ollama now?