
Added script to invoke alpaca model

nullhook opened this issue 1 year ago • 1 comment

Resolves https://github.com/ggerganov/llama.cpp/issues/240

WIP

This needs to be able to:

  1. Configure custom model folders.
  2. Adjust settings for running Alpaca model variants, with corresponding changes on the C++ side.
  3. Provide the ability to configure generation parameters.
  4. Possibly hide the extraneous template text it returns.
  5. Run in interactive mode.
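A minimal sketch of how such a launcher script might map these options onto the llama.cpp binary. The flag names (`-m`, `-n`, `--temp`, `-i`) and the model file name are assumptions based on llama.cpp's `main` at the time, not taken from this PR:

```python
"""Hypothetical helper for an alpaca launcher script.

The CLI flag names passed to the C++ binary are assumptions
modeled on llama.cpp's `main` program.
"""

def build_command(model_path, n_predict=128, temp=0.8,
                  interactive=False, binary="./main"):
    # Translate script-level options into (assumed) llama.cpp CLI flags.
    cmd = [binary,
           "-m", model_path,          # item 1: custom model folder/file
           "-n", str(n_predict),      # item 3: generation parameters
           "--temp", str(temp)]
    if interactive:
        cmd.append("-i")              # item 5: interactive mode
    return cmd
```

The returned list could then be passed to `subprocess.run` to invoke the compiled binary.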

nullhook avatar Mar 18 '23 21:03 nullhook

I made some changes on master already to support the interactive mode, where we inject an `### Instruction:` prefix and `### Response:` suffix into the input. I don't know if this is really needed, but this is what the alpaca.cpp guys are using, so I added it.
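The prefix/suffix injection described above can be sketched as a simple template wrapper. The header sentence is an assumption based on the Stanford Alpaca prompt format; the `### Instruction:` / `### Response:` markers are the ones named in this thread:

```python
# Sketch of the Alpaca-style prompt wrapping used in interactive mode.
# The introductory header line is an assumed detail from the Alpaca
# release; only the Instruction/Response markers come from this thread.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def wrap_instruction(instruction: str) -> str:
    """Inject the '### Instruction:' prefix and '### Response:' suffix."""
    return ALPACA_TEMPLATE.format(instruction=instruction.strip())
```

The model then generates its answer immediately after the `### Response:` marker.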

I think our original --interactive mode actually works better, but I don't have much evidence for that.

If you adapt this Python script to the new changes and update the README with usage instructions, we can merge.

ggerganov avatar Mar 19 '23 18:03 ggerganov

I think this got a little outdated. If this is still of interest, please move the script to the examples folder, add brief usage instructions to the README file, and reopen.

ggerganov avatar Apr 13 '23 12:04 ggerganov