
How should I build a web interface?

ralyodio opened this issue 2 years ago · 3 comments

Is there any way to provide a history of prompts (like ChatGPT-4) and tell llm-cli to return JSON?

ralyodio · May 10 '23 12:05

Your best bet is to build your own server based around llm; llm-cli is basically just a demo application for llm.

You can see how this might be done in the now-closed PR #37.

philpax · May 10 '23 13:05

It looks like that PR was closed and llm-http isn't actually a thing yet. I'm not good with Rust, but I'm good with Node. I can call out to llm-cli, but I was just curious if there's another way.

Do you know how I could track a history of prompts with llm-cli?

ralyodio · May 10 '23 14:05
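For reference, one way to cover both the prompt history and the JSON output from Node is to keep the conversation transcript in your own web server and re-send it as part of the prompt on every call, since llm-cli itself is stateless. Below is a minimal TypeScript sketch of that idea, in the spirit of the "build your own server" suggestion above. The binary name (`llm`), the `infer` subcommand, the `-m`/`-p` flags, the model path, and the assumption that the CLI echoes the prompt before the completion are all guesses about the setup; check `llm --help` for the exact invocation.

```ts
// Sketch: a tiny HTTP wrapper around llm-cli that keeps a conversation
// history in memory and returns JSON. Flags and binary name are assumed.
import { createServer } from "node:http";
import { execFile } from "node:child_process";

const MODEL_PATH = "/path/to/model.bin"; // placeholder, point at your model

type Turn = { role: "user" | "assistant"; text: string };
const history: Turn[] = [];

// llm-cli is stateless, so "history" is just the previous turns
// concatenated back into the prompt we send on every request.
function buildPrompt(userMessage: string): string {
  const transcript = history
    .map((t) => `${t.role === "user" ? "User" : "Assistant"}: ${t.text}`)
    .join("\n");
  return `${transcript}\nUser: ${userMessage}\nAssistant:`;
}

function runLlm(prompt: string): Promise<string> {
  return new Promise((resolve, reject) => {
    // Assumed invocation: `llm infer -m <model> -p <prompt>`.
    execFile(
      "llm",
      ["infer", "-m", MODEL_PATH, "-p", prompt],
      { maxBuffer: 10 * 1024 * 1024 },
      (err, stdout) => (err ? reject(err) : resolve(stdout))
    );
  });
}

createServer((req, res) => {
  if (req.method !== "POST" || req.url !== "/chat") {
    res.writeHead(404).end();
    return;
  }
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", async () => {
    try {
      const { message } = JSON.parse(body);
      const prompt = buildPrompt(message);
      const output = await runLlm(prompt);
      // Some builds echo the prompt back; this naive slice assumes the
      // completion is whatever follows it.
      const reply = output.startsWith(prompt)
        ? output.slice(prompt.length).trim()
        : output.trim();
      history.push({ role: "user", text: message });
      history.push({ role: "assistant", text: reply });
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ reply, history }));
    } catch (err) {
      res.writeHead(500, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ error: String(err) }));
    }
  });
}).listen(3000, () => console.log("listening on :3000"));
```

With something like this running, `curl -X POST localhost:3000/chat -d '{"message":"hello"}'` would return a JSON body with the reply and the accumulated history; a real app would keep history per session rather than in a single process-wide array, and spawning the CLI per request reloads the model every time, which is why embedding the llm crate in a Rust server (or using Node bindings) scales better.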

Have you tried using https://github.com/Atome-FE/llama-node? I think it has bindings to this library:

This is a Node.js library for running inference on LLaMA, RWKV, or LLaMA-derived models. It is built on top of llama-rs, llama.cpp, and rwkv.cpp, and uses napi-rs for message passing between Node.js and the llama thread.

(this repo used to be called llama-rs until recently)

ducaale · May 10 '23 14:05

@ralyodio I'm going to close this Issue, but please feel free to reopen it or open a new one if you have more questions.

danforbes · May 19 '23 14:05