llama.cpp
add Alpaca support to the Docker scripts
I saw that support for the Alpaca model has been added on the master branch, so I have included the model in the Docker scripts.
You can now try it like this:
docker run -v /llama/models:/models ghcr.io/ggerganov/llama.cpp:full --all-in-one "/models/" alpaca
docker run -it -v /llama/models:/models ghcr.io/ggerganov/llama.cpp:full --run -m /models/alpaca/ggml-alpaca-7b-q4.bin -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
" -ins --top_k 10000 --temp 0.96 --repeat_penalty 1 -t 7
Resolves #326
I think larger Alpaca models will be available soon (or already are). Maybe add support for downloading those as well once they are available; currently it always downloads the 7B Alpaca when requested.
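To sketch what size selection could look like, here is a minimal shell function that maps a requested model size to a weight file name. The 13B file name and the dispatch shape are assumptions for illustration, not the actual contents of the download script:

```shell
# Hypothetical sketch: pick the Alpaca weight file for a requested size.
# Only 7B exists in the script today; the 13B entry is an assumed name.
alpaca_file() {
  case "$1" in
    7B)  echo "ggml-alpaca-7b-q4.bin" ;;
    13B) echo "ggml-alpaca-13b-q4.bin" ;;
    *)   echo "unknown size: $1" >&2; return 1 ;;
  esac
}

# The download step would then fetch "$(alpaca_file "$SIZE")" into /models/alpaca/,
# defaulting to 7B when no size is given.
alpaca_file "${MODEL_SIZE:-7B}"
```

This keeps the current 7B-only behavior as the default while leaving an obvious place to add new sizes.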