
Add basic alpaca REPL mode

Open darthdeus opened this issue 2 years ago • 1 comments

A working REPL prototype with the alpaca prompt. Not sure if we want the alpaca there :thinking:

darthdeus avatar Mar 17 '23 00:03 darthdeus

Thanks a lot! :smile:

I took the liberty of cleaning things up a bit since we've been merging some breaking changes.

I think having the alpaca prompt is great, but we should make this configurable, so I changed how the command works a bit:

Instead of ignoring the prompt, we take any prompt string using the usual args (`-p`, `-f`) and replace the string `$PROMPT` inside the given prompt with whatever is read from the line.

Then, I added an examples/alpaca_prompt.txt with the following:

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:

$PROMPT

### Response:
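The substitution described above is plain string replacement. A minimal sketch in Rust (the function name and structure here are illustrative, not the actual CLI code):

```rust
// Hypothetical sketch of the REPL's template substitution: the prompt
// template contains the literal marker "$PROMPT", which is replaced
// with whatever the user typed at the REPL line.
fn fill_template(template: &str, line: &str) -> String {
    template.replace("$PROMPT", line)
}

fn main() {
    let template = "### Instruction:\n\n$PROMPT\n\n### Response:\n";
    let filled = fill_template(template, "Write a haiku about llamas.");
    println!("{filled}");
}
```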

I also made it so that the original prompt isn't spit back to you, and instead a little spinner is shown while the prompt is being parsed.

Here's a demo: [screen recording: alpaca_repl_screencap]

setzer22 avatar Mar 18 '23 10:03 setzer22

Sorry for the noob question: an empty prompt (hitting return at `>>`) returns different answers each time, so is there randomness in the process?

RCasatta avatar Mar 22 '23 08:03 RCasatta

Yup - at each step, the LLM produces a list of token probabilities and samples from that list, which produces different outputs. If you'd like to fix the results, you can use a specific `--seed` (sampling then happens deterministically according to that seed, with the caveat that it's only reproducible on your machine) or `--top-k 1` (which always picks the most probable token).
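To illustrate the two options, here is a hedged sketch (not the actual crate's sampler): greedy selection for `--top-k 1`, and a seeded draw where a tiny linear congruential PRNG stands in for the real seeded RNG, so the same seed always yields the same token:

```rust
// --top-k 1: always pick the most probable token (deterministic).
fn argmax(probs: &[f32]) -> usize {
    probs
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap()
}

// Seeded sampling: a toy LCG stands in for the real RNG. The same
// starting seed always produces the same sequence of draws, so
// generation is reproducible (on the same machine, per the caveat above).
fn seeded_sample(probs: &[f32], seed: &mut u64) -> usize {
    *seed = seed
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    let r = (*seed >> 33) as f32 / (1u64 << 31) as f32; // uniform in [0, 1)
    let mut cumulative = 0.0;
    for (i, p) in probs.iter().enumerate() {
        cumulative += p;
        if r < cumulative {
            return i;
        }
    }
    probs.len() - 1
}
```

With no fixed seed, each REPL run starts the RNG differently, which is why an empty prompt gives a different answer each time.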

philpax avatar Mar 22 '23 08:03 philpax