llama.cpp

LLM inference in C/C++

Results: 1641 llama.cpp issues, sorted by recently updated

to reproduce:
```
latest=`curl -qsI "https://github.com/ggml-org/llama.cpp/releases/latest"|grep location|cut -d " " -f 2`
latest=`basename $latest|tr -d "\r"`
#latest='b4000'
# /dev/null "https://github.com/ggml-org/llama.cpp/releases/download/$latest/llama-$latest-bin-ubuntu-x64.zip" -O llama.zip
echo Unzipping llama.cpp binaries
unzip &>/dev/null llama.zip
```
...
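The download command in this preview is garbled and truncated; a self-contained sketch of the same flow, assuming `wget` is the intended downloader (suggested by the `-O llama.zip` flag, not confirmed by the preview), might look like this:

```shell
#!/usr/bin/env bash
# Resolve the tag of the latest llama.cpp release from the redirect header.
latest=$(curl -qsI "https://github.com/ggml-org/llama.cpp/releases/latest" | grep location | cut -d " " -f 2)
latest=$(basename "$latest" | tr -d "\r")
#latest='b4000'   # uncomment to pin a specific release tag

# Download the prebuilt Ubuntu x64 binaries for that tag (wget is an assumption here).
wget -q "https://github.com/ggml-org/llama.cpp/releases/download/$latest/llama-$latest-bin-ubuntu-x64.zip" -O llama.zip

echo "Unzipping llama.cpp binaries"
unzip -q llama.zip
```

The release tag (e.g. `b4000`) is read from the `location` header of the `/releases/latest` redirect, so no GitHub API call or token is needed.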

### Name and Version main version ### Operating systems _No response_ ### Which llama.cpp modules do you know to be affected? _No response_ ### Command line ```shell ``` ### Problem...

bug-unconfirmed

Tracker for upstream issue: https://github.com/docker/build-push-action/issues/1309 At the time of writing this issue, this has been failing for the past 3 days: https://github.com/ggml-org/llama.cpp/actions/workflows/docker.yml It was fixed in https://github.com/ggml-org/llama.cpp/pull/11472 but now the...

### Name and Version Version: llama-b4434-bin-ubuntu-x64 ### Operating systems Linux ### Which llama.cpp modules do you know to be affected? llama-cli ### Command line ```shell [erick@Xeon bin]$ ./llama-cli -m ~/Apps/Llama/EVA-LLaMA-3.33-70B-v0.1.Q4_K_M.gguf...

bug-unconfirmed

### Prerequisites - [x] I am running the latest code. Mention the version if possible as well. - [x] I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md). - [x] I searched using keywords...

enhancement

### Prerequisites - [x] I am running the latest code. Mention the version if possible as well. - [x] I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md). - [x] I searched using keywords...

enhancement

### Name and Version llama-cli --version version: 4713 (a4f011e8) built with MSVC 19.42.34436.0 for x64 ### Operating systems Windows ### Which llama.cpp modules do you know to be affected? llama-server...

bug

### Name and Version from master(22 jan) ### Operating systems Linux ### GGML backends CUDA ### Hardware 3060 on main pc +(3060+3060+1660ti/sup+1660ti/sup) on other pc ### Models DeepSeek-R1-Distill-Qwen-32B_q8 ### Problem...

bug-unconfirmed

### Name and Version build/bin/./llama-server --version version: 4384 (14b699ec) built with cc (Debian 14.2.0-11) 14.2.0 for x86_64-linux-gnu ### Operating systems Linux ### Which llama.cpp modules do you know to be...

bug-unconfirmed
stale

### Name and Version ./bin/llama-qwen2vl-cli --version version: 4391 (9ba399df) built with cc (Gentoo Hardened 14.2.1_p20241221 p6) 14.2.1 20241221 for x86_64-pc-linux-gnu ### Operating systems _No response_ ### Which llama.cpp modules...

bug-unconfirmed
stale