llama.cpp
LLM inference in C/C++
Hello, would it be possible to add MSVC build support as well?
Small fix to compile binaries properly on Linux: - defines `CLOCK_MONOTONIC` in `ggml.c` - Closes #54
Thank you for creating such a great inference engine, which achieves a 10x speedup. Please add Unicode support to display other languages properly.
The seed for the website example is included, but using the same parameters doesn't reproduce the example output. Listing which requirements influence reproducibility would help in verifying installs....
I want to integrate this into a slim chat system, so I think it would be nice to be able to have the app output only the text from the...
One can use `./main ... 2>/dev/null` to suppress any diagnostic output. Fixes https://github.com/ggerganov/llama.cpp/issues/5
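The redirection works because diagnostics go to stderr while generated text goes to stdout. A small self-contained sketch (the `emit` function is a hypothetical stand-in for `./main`, which writes to both streams):

```shell
# Stand-in for ./main: text on stdout, diagnostics on stderr.
emit() {
    echo "generated text"
    echo "diagnostic log" >&2
}

# Discard stderr; only the generated text remains.
emit 2>/dev/null
```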
The existing instructions won't work on a default macOS 12.4 setup using the system Python install. This fixes that.