
[Question] PandaGPT with llama.cpp

Open ningshanwutuobang opened this issue 1 year ago • 0 comments

I have tried to use llama.cpp for PandaGPT in panda_gpt_llama_cpp. The script gets poor performance. Is there anything wrong with the procedure, or is this just a limitation of the model or of q4_1 precision?

The following are my steps.

  1. Obtain vicuna v0. Use [email protected] to merge llama-13b-hf and vicuna-13b-delta-v0.
  2. Merge the LoRA weights into vicuna v0.
  3. Convert it to ggml format and quantize it to q4_1. The result is ggml-pandagpt-vicuna-merge. (A rough sketch of steps 2 and 3 follows this list.)
  4. The script is located in panda_gpt_llama_cpp.
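
For reference, this is roughly what steps 2 and 3 look like, assuming the LoRA weights are available as a standard peft adapter (the actual PandaGPT checkpoint bundles them differently, so the paths and adapter layout below are placeholders) and using the llama.cpp convert/quantize tools as they existed around July 2023 (script and binary names may differ in other versions):

```python
# Sketch of steps 2-3. Paths and the LoRA adapter location are placeholders.
import subprocess
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE = "./vicuna-13b-v0"             # vicuna v0 produced in step 1
LORA = "./pandagpt-lora"             # PandaGPT LoRA weights (placeholder path)
MERGED = "./pandagpt-vicuna-merged"  # output of step 2

# Step 2: merge the LoRA weights into the vicuna v0 base model.
base = LlamaForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, LORA)
model = model.merge_and_unload()     # fold the LoRA deltas into the base weights
model.save_pretrained(MERGED)
LlamaTokenizer.from_pretrained(BASE).save_pretrained(MERGED)

# Step 3: convert to ggml and quantize to q4_1 with the llama.cpp tools
# (invocations assume the llama.cpp tree as of mid-2023).
subprocess.run(["python", "llama.cpp/convert.py", MERGED,
                "--outtype", "f16",
                "--outfile", "ggml-pandagpt-vicuna-f16.bin"], check=True)
subprocess.run(["llama.cpp/quantize",
                "ggml-pandagpt-vicuna-f16.bin",
                "ggml-pandagpt-vicuna-merge.bin", "q4_1"], check=True)
```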

The model seems to recognize the <Img>...</Img> tags.

ningshanwutuobang · Jul 01 '23 14:07