llama-cpp-python

Python bindings for llama.cpp

Results: 424 llama-cpp-python issues

Just a quick heads-up that llama.cpp will be moved from [ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp) to [ggml-org/llama.cpp](https://github.com/ggml-org/). Source: https://github.com/ggerganov/llama.cpp/discussions/11801

Alright.......so I've used these commands to install CUDA-enabled llama-cpp-python on my Windows 11 machine: ``` set CMAKE_ARGS="-DGGML_CUDA=on" ``` ``` pip install llama-cpp-python --no-cache-dir ``` And well, once I run those...
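
A plausible culprit (purely an assumption, since the error itself is cut off above) is cmd.exe quoting: `set CMAKE_ARGS="-DGGML_CUDA=on"` stores the quotes as part of the value, so CMake may never see the flag. A minimal sketch of the usual pattern, without the quotes:

```
:: cmd.exe keeps everything after "=", quotes included, so leave them off
set CMAKE_ARGS=-DGGML_CUDA=on
pip install llama-cpp-python --no-cache-dir --verbose
```

In PowerShell the equivalent is `$env:CMAKE_ARGS = "-DGGML_CUDA=on"`, where the quotes are shell syntax and are not stored in the value.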

I have installed CUDA version 12.5, which is detected by CMake and shown in the terminal. CUDA_PATH is also set correctly. While trying to install, I see...
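
Since the toolkit is apparently found but the build still fails, a hedged next step is to capture the full CMake error and pin the toolkit location explicitly. `CUDAToolkit_ROOT` is a standard CMake variable; the path below is just `%CUDA_PATH%` as a stand-in:

```
:: --verbose surfaces the exact nvcc/CMake failure; quote the path if it contains spaces
set CMAKE_ARGS=-DGGML_CUDA=on -DCUDAToolkit_ROOT=%CUDA_PATH%
pip install llama-cpp-python --no-cache-dir --verbose
```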

Could you kindly trigger the CUDA wheel builds for v0.3.2 and publish them to the GitHub [Releases](https://github.com/abetlen/llama-cpp-python/releases)? Thank you 🙏!
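
For reference, once CUDA wheels for a release are published they are typically installable from the project's wheel index; the exact index tag below (cu124) is an assumption and should be matched to the locally installed CUDA version:

```
pip install llama-cpp-python==0.3.2 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124
```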

For llama.cpp, distributed inference works great; I am using it right now in my program. If it's not already implemented, it would be cool to add it to llama-cpp-python. Right now,...
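
For context, distributed inference in upstream llama.cpp goes through its RPC backend. A rough sketch of that workflow, with flags recalled from the upstream docs (they may differ by version); wiring the same backend into llama-cpp-python would presumably mean building the wheel with `-DGGML_RPC=on` as well:

```
# on each worker: build llama.cpp with the RPC backend and expose it on the LAN
cmake -B build -DGGML_RPC=ON
cmake --build build --config Release
./build/bin/rpc-server --host 0.0.0.0 --port 50052

# on the client: offload layers to the remote workers (addresses are placeholders)
./build/bin/llama-cli -m model.gguf --rpc 192.168.1.10:50052,192.168.1.11:50052 -ngl 99
```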

No matter what I try, I keep getting this issue on WSL: `pip install --upgrade --no-cache-dir --force-reinstall git+https://github.com/abetlen/llama-cpp-python`, which outputs: Defaulting to user installation because normal site-packages is not writeable Collecting git+https://github.com/abetlen/llama-cpp-python...
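
The actual error is truncated above, so the following only covers the most common cause on a fresh WSL distro: a missing C/C++ toolchain for the source build. This is an assumption, not a diagnosis:

```
# Ubuntu-based WSL: install the build prerequisites, then retry with verbose output
sudo apt update && sudo apt install -y build-essential cmake ninja-build python3-dev
pip install --upgrade --no-cache-dir --force-reinstall --verbose git+https://github.com/abetlen/llama-cpp-python
```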

I'm trying to install llama-cpp-python with Vulkan support. I'm running a Windows 11 machine with an NVIDIA RTX 4090 and installed the latest Vulkan SDK from https://vulkan.lunarg.com/sdk/home, such that it's at C:/VulkanSDK/1.4.328.1...
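
A minimal sketch of a Vulkan build on Windows, assuming the SDK path from the report and that `VULKAN_SDK` is what CMake's `FindVulkan` should pick up (the SDK installer usually sets it already):

```
:: point FindVulkan at the SDK and enable the Vulkan backend
set VULKAN_SDK=C:\VulkanSDK\1.4.328.1
set CMAKE_ARGS=-DGGML_VULKAN=on
pip install llama-cpp-python --no-cache-dir --verbose
```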

This ensures that the correct **requested** cuda-toolkit is installed via mamba during the CUDA build process. Fixes: #2089
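
Illustrative only, since the actual workflow change lives in the PR: pinning the requested toolkit with mamba generally looks like the line below, with the version an assumption rather than what the PR uses:

```
# request a specific toolkit instead of whatever resolves latest
mamba install -y -c nvidia cuda-toolkit=12.4
```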

# Prerequisites Please answer the following questions for yourself before submitting an issue. - [x] I am running the latest code. Development is very rapid so there are no tagged...