llm
Add basic alpaca REPL mode
A working REPL prototype with the alpaca prompt. Not sure if we want the alpaca there :thinking:
Thanks a lot! :smile:
I took the liberty of cleaning things up a bit, since we've been merging some breaking changes.
I think having the alpaca prompt is great, but we should make it configurable, so I changed how the command works a bit:
Instead of ignoring the prompt, we now accept any prompt string via the usual args (-p, -f), and replace the string $PROMPT inside that prompt with whatever is read from the line.
Then, I added an examples/alpaca_prompt.txt with the following:
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
$PROMPT
### Response:
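The substitution step described above can be sketched roughly like this (a hypothetical illustration, not the actual implementation; the template text comes from examples/alpaca_prompt.txt):

```python
# Hypothetical sketch: the template passed via -p/-f has its $PROMPT
# placeholder replaced by whatever the user types at the REPL prompt.
TEMPLATE = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:

$PROMPT

### Response:
"""

def build_prompt(template: str, line: str) -> str:
    # Literal string replacement of the $PROMPT marker with the REPL input.
    return template.replace("$PROMPT", line)

print(build_prompt(TEMPLATE, "Summarize the plot of Hamlet."))
```

This way, any template works: users who don't want the alpaca framing can supply their own prompt containing $PROMPT.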
I also made it so that the original prompt isn't echoed back to you; instead, a little spinner is shown while the prompt is being processed.
Here's a demo

Sorry for the noob question: an empty prompt (hitting return at >>) returns a different casual answer each time, so is there randomness in the process?
Yup - at each step, the LLM produces a list of probabilities for the next word, and it samples from that list, which produces different outputs. If you'd like to fix the results, you can pass a specific --seed, which makes the sampling deterministic for that seed (with the caveat that results are only reproducible on your machine), or --top-k 1, which always picks the most probable word.
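To make the two options concrete, here's a hedged toy sketch of that sampling step (the function and token distribution are made up for illustration and don't reflect the project's actual code):

```python
import random

def sample_token(probs, top_k=None, seed=None):
    """Pick the next token from a list of (token, probability) pairs.

    top_k=1 reduces to greedy decoding (always the most probable token);
    a fixed seed makes the stochastic sampling reproducible.
    """
    if top_k is not None:
        # Keep only the k most probable candidates before sampling.
        probs = sorted(probs, key=lambda p: p[1], reverse=True)[:top_k]
    rng = random.Random(seed)
    tokens, weights = zip(*probs)
    return rng.choices(tokens, weights=weights, k=1)[0]

dist = [("cat", 0.5), ("dog", 0.3), ("fish", 0.2)]
sample_token(dist, top_k=1)   # always "cat" (greedy, no randomness left)
sample_token(dist, seed=42)   # same token every run for the same seed
sample_token(dist)            # varies run to run, like the REPL behavior
```

With --top-k 1 the probability list collapses to a single candidate, so no randomness remains at all; with --seed the randomness is still there but replayable.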