Yes, good idea.

You'll need compiled binaries of [llama.cpp](https://github.com/ggerganov/llama.cpp) to get llava-cli:

```
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
export LLAMA_CUDA=1  # only if building for NVIDIA CUDA
make -j$(nproc)
```

Launch llava-cli...
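As a sketch of that last step, an invocation could look like the following. The model and projector file paths are placeholders for whichever LLaVA GGUF files you've downloaded; `-m`, `--mmproj`, `--image`, and `-p` select the language model, the multimodal projector, the input image, and the prompt:

```shell
# Paths below are assumed examples; point them at your own
# downloaded LLaVA model and mmproj GGUF files.
./llava-cli \
  -m models/llava-v1.5-7b/ggml-model-q4_k.gguf \
  --mmproj models/llava-v1.5-7b/mmproj-model-f16.gguf \
  --image photo.jpg \
  -p "Describe this image in detail."
```

Both the language model and the mmproj file are needed: the projector embeds the image into the model's token space before the prompt is evaluated.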