
add alpaca support into docker scripts

bernatvadell opened this issue 1 year ago · 2 comments

I saw that support for the Alpaca model was added to the master branch, so I have included the model in the Docker scripts.

Now you can try like this:

docker run -v /llama/models:/models ghcr.io/ggerganov/llama.cpp:full --all-in-one "/models/" alpaca
docker run -it -v /llama/models:/models ghcr.io/ggerganov/llama.cpp:full --run -m /models/alpaca/ggml-alpaca-7b-q4.bin -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
" -ins --top_k 10000 --temp 0.96 --repeat_penalty 1 -t 7

bernatvadell avatar Mar 20 '23 16:03 bernatvadell

resolve #326

bernatvadell avatar Mar 20 '23 17:03 bernatvadell

I think there will soon be (or there already are) bigger Alpaca models. Maybe add support for downloading those as well when they become available. Currently, it looks like it always downloads the 7B Alpaca if requested.
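One possible direction (just a sketch, assuming the download step lives in the full image's helper script, e.g. .devops/tools.sh; the size list, file names, and URL below are placeholders, not real download locations) would be to take the model size as an extra argument instead of hard-coding 7B:

# Hypothetical sketch, not the actual script: choose the Alpaca size
# from an argument, defaulting to 7B.
alpaca_size="${1:-7B}"   # e.g. invoked as: download_alpaca 13B

case "$alpaca_size" in
  7B|13B|30B)
    echo "Downloading Alpaca $alpaca_size..."
    # placeholder URL and file name -- swap in a real mirror once the larger models are hosted
    curl -L -o "/models/alpaca/ggml-alpaca-${alpaca_size}-q4.bin" \
         "https://example.com/ggml-alpaca-${alpaca_size}-q4.bin"
    ;;
  *)
    echo "unknown Alpaca size: $alpaca_size" >&2
    exit 1
    ;;
esac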

ggerganov avatar Mar 21 '23 16:03 ggerganov