llm web command - launches a web server
A command that launches a web server to allow you to both browse your logs and interact with APIs through your browser.
I'm tempted to add Datasette as a dependency for this.
Maybe this becomes a two-way thing: it could be a Datasette plugin (which adds a datasette llm command and implements the full UI in Datasette), but equally, installing llm and running llm web could launch a Datasette instance with that plugin enabled.
Prototype:
import asyncio
from datasette import hookimpl, Response
import openai
CHAT = """
<!DOCTYPE html>
<html>
<head>
<title>WebSocket Client</title>
</head>
<body>
<h1>WebSocket Client</h1>
<textarea id="message" rows="4" cols="50"></textarea><br>
<button onclick="sendMessage()">Send Message</button>
<div id="log" style="margin-top: 1em; white-space: pre-wrap;"></div>
<script>
const ws = new WebSocket(`ws://${location.host}/ws`);
ws.onmessage = function(event) {
console.log(event);
const log = document.getElementById('log');
log.textContent += event.data;
};
function sendMessage() {
const message = document.getElementById('message').value;
console.log({message, ws});
ws.send(message);
}
</script>
</body>
</html>
""".strip()
async def websocket_application(scope, receive, send):
    from .cli import get_openai_api_key

    openai.api_key = get_openai_api_key()
    if scope["type"] != "websocket":
        # This is a raw ASGI callable, so returning a Response object
        # would do nothing - refuse the connection instead
        await send({"type": "websocket.close"})
        return
    while True:
        event = await receive()
        if event["type"] == "websocket.connect":
            await send({"type": "websocket.accept"})
        elif event["type"] == "websocket.receive":
            message = event["text"]
            async for chunk in await openai.ChatCompletion.acreate(
                model="gpt-3.5-turbo",
                messages=[{
                    "role": "user",
                    "content": message,
                }],
                stream=True,
            ):
                content = chunk["choices"][0].get("delta", {}).get("content")
                if content is not None:
                    await send({"type": "websocket.send", "text": content})
        elif event["type"] == "websocket.disconnect":
            break


def chat():
    return Response.html(CHAT)


@hookimpl
def register_routes():
    return [
        (r"^/ws$", websocket_application),
        (r"^/chat$", chat),
    ]
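The event loop above can be smoke-tested without a browser or an API key by driving it with scripted receive/send callables. This sketch swaps the OpenAI stream for a word-by-word echo (everything here is illustrative, not code from the repo):

```python
import asyncio


# Minimal handler mirroring the event loop in the prototype; a word-by-word
# echo stands in for the hypothetical OpenAI streaming call.
async def websocket_echo(scope, receive, send):
    if scope["type"] != "websocket":
        await send({"type": "websocket.close"})
        return
    while True:
        event = await receive()
        if event["type"] == "websocket.connect":
            await send({"type": "websocket.accept"})
        elif event["type"] == "websocket.receive":
            for word in event["text"].split():
                await send({"type": "websocket.send", "text": word + " "})
        elif event["type"] == "websocket.disconnect":
            break


async def drive():
    # Scripted client: connect, send one message, disconnect
    events = iter([
        {"type": "websocket.connect"},
        {"type": "websocket.receive", "text": "hello streaming world"},
        {"type": "websocket.disconnect"},
    ])
    sent = []

    async def receive():
        return next(events)

    async def send(message):
        sent.append(message)

    await websocket_echo({"type": "websocket"}, receive, send)
    return sent


sent = asyncio.run(drive())
# First frame accepts the connection, then one websocket.send per word
```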
I put that in llm/plugin.py and then put this in setup.py:
entry_points={
    "datasette": ["llm = llm.plugin"],
    "console_scripts": ["llm=llm.cli:cli"],
},
And this in cli.py:
@cli.command()
def web():
    from datasette.app import Datasette
    import uvicorn

    path = get_log_db_path()
    if not os.path.exists(path):
        # Create an empty database file if none exists yet
        sqlite_utils.Database(path).vacuum()
    ds = Datasette(
        [path],
        metadata={
            "databases": {
                "log": {
                    "tables": {
                        "log": {
                            "sort_desc": "rowid",
                        }
                    }
                }
            }
        },
    )
    uvicorn.run(ds.app(), host="0.0.0.0", port=8302)
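As an aside, the vacuum() trick works because VACUUM forces SQLite to write out a valid (if empty) database file. The stdlib equivalent of what sqlite_utils.Database(path).vacuum() is doing:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log.db")
# connect() opens/creates the file; VACUUM forces SQLite to write a valid
# database header, which is all we need for "create this DB if missing"
conn = sqlite3.connect(path)
conn.execute("VACUUM")
conn.close()

with open(path, "rb") as f:
    header = f.read(16)
# Every real SQLite database file starts with the magic string
# b"SQLite format 3\x00"
```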
Mucked around with HTML and CSS a bit and got to this prototype:
<div class="chat-container">
<div class="chat-bubble one">
<div>
<p>Hello, how are you?</p>
</div>
<img class="avatar" src="https://placekitten.com/40/40" alt="Person A Avatar">
</div>
<div class="chat-bubble two">
<div>
<p>I'm good, thanks! And you?</p>
</div>
<img class="avatar" src="https://placekitten.com/40/40" alt="Person B Avatar">
</div>
<div class="chat-bubble one">
<div>
<p>Lorem ipsum dolor sit amet consectetur, adipisicing elit. Distinctio similique quos ratione omnis impedit, est mollitia amet</p><p>aspernatur inventore consectetur, autem dolorum at nemo! Voluptas modi eveniet culpa nobis id?</p>
</div>
<img class="avatar" src="https://placekitten.com/40/40" alt="Person A Avatar">
</div>
<div class="chat-bubble two">
<div id="animatedText">
<p>Lorem ipsum dolor sit amet consectetur, adipisicing elit. Distinctio similique quos ratione omnis impedit, est mollitia amet</p><p>aspernatur inventore consectetur, autem dolorum at nemo! Voluptas modi eveniet culpa nobis id?</p>
</div>
<img class="avatar" src="https://placekitten.com/40/40" alt="Person B Avatar">
</div>
</div>
<style>
.chat-container {
display: flex;
flex-direction: column;
align-items: flex-start;
font-family: Helvetica, sans-serif;
line-height: 1.35;
color: rgba(0, 0, 0, 0.8);
max-width: 600px;
}
.chat-bubble {
border-radius: 10px;
padding: 10px;
margin: 10px;
width: 85%;
border: 1px solid #ccc;
background-color: #e6e5ff;
display: flex;
align-items: start;
}
.chat-bubble.one {
border-color: #b9b7f2;
}
.chat-bubble.two {
/* darker green */
border-color: #98d798;
}
.chat-bubble.one img.avatar {
order: -1;
margin-right: 10px;
}
.chat-bubble.two {
background-color: #ccffcc;
align-self: flex-end;
justify-content: space-between;
}
.chat-bubble.two img.avatar {
order: 1;
margin-left: 10px;
}
.chat-bubble p {
margin-top: 0;
}
.chat-bubble p:last-of-type {
margin-bottom: 0;
}
</style>
<script>
var text = "Lorem ipsum dolor sit amet consectetur, adipisicing elit. Distinctio similique quos ratione omnis impedit, est mollitia amet aspernatur inventore consectetur, autem dolorum at nemo! Voluptas modi eveniet culpa nobis id?";
var words = text.split(" ");
var container = document.getElementById("animatedText");
container.innerHTML = "";
function addWord(index) {
if (index < words.length) {
container.innerHTML += words[index] + " ";
setTimeout(function() {
addWord(index + 1);
}, 50);
}
}
addWord(0);
</script>
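Server-side, the same pacing falls out naturally from an async generator. A small illustrative sketch (not part of the prototype) that yields words the way the addWord() loop consumes them:

```python
import asyncio


async def stream_words(text, delay=0.0):
    # Yield the text one word at a time, mirroring the 50ms addWord()
    # loop in the JavaScript above (delay kept at 0 here for testing)
    for word in text.split(" "):
        await asyncio.sleep(delay)
        yield word + " "


async def collect():
    chunks = []
    async for chunk in stream_words("Lorem ipsum dolor sit amet"):
        chunks.append(chunk)
    return chunks


chunks = asyncio.run(collect())
# Joining the chunks reconstructs the original text
```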
Added a submit form:
<div class="chat-container">
<div class="chat-bubble one">
<div>
<p>Hello, how are you?</p>
</div>
<img class="avatar" src="https://placekitten.com/40/40" alt="Person A Avatar">
</div>
<div class="chat-bubble two">
<div>
<p>I'm good, thanks! And you?</p>
</div>
<img class="avatar" src="https://placekitten.com/40/40" alt="Person B Avatar">
</div>
<div class="chat-bubble one">
<div>
<p>Lorem ipsum dolor sit amet consectetur, adipisicing elit. Distinctio similique quos ratione omnis impedit, est mollitia amet</p><p>aspernatur inventore consectetur, autem dolorum at nemo! Voluptas modi eveniet culpa nobis id?</p>
</div>
<img class="avatar" src="https://placekitten.com/40/40" alt="Person A Avatar">
</div>
<div class="chat-bubble two">
<div id="animatedText">
<p>Lorem ipsum dolor sit amet consectetur, adipisicing elit. Distinctio similique quos ratione omnis impedit, est mollitia amet</p><p>aspernatur inventore consectetur, autem dolorum at nemo! Voluptas modi eveniet culpa nobis id?</p>
</div>
<img class="avatar" src="https://placekitten.com/40/40" alt="Person B Avatar">
</div>
<div class="chat-bubble one">
<div class="contains-textarea">
<form action="">
<textarea placeholder="Type here"></textarea>
<p class="submit"><input type="submit" value="Send"></p>
</form>
</div>
<img class="avatar" src="https://placekitten.com/40/40" alt="Person A Avatar">
</div>
</div>
<style>
.chat-container form {
margin: 0;
}
.contains-textarea {
/* flex box should take all available width */
flex: 1;
}
p.submit {
text-align: right;
padding-top: 5px;
}
p.submit input {
border: 2px solid #7572db;
padding: 3px 10px;
background-color: #b9b7f2;
}
textarea {
width: 100%;
padding: 5px;
min-height: 60px;
}
.chat-container {
display: flex;
flex-direction: column;
align-items: flex-start;
font-family: Helvetica, sans-serif;
line-height: 1.35;
color: rgba(0, 0, 0, 0.8);
max-width: 600px;
}
.chat-bubble {
border-radius: 10px;
padding: 10px;
margin: 10px;
width: 85%;
border: 1px solid #ccc;
background-color: #e6e5ff;
display: flex;
align-items: start;
}
.chat-bubble.one {
border-color: #b9b7f2;
}
.chat-bubble.two {
/* darker green */
border-color: #98d798;
}
.chat-bubble.one img.avatar {
order: -1;
margin-right: 10px;
}
.chat-bubble.two {
background-color: #ccffcc;
align-self: flex-end;
justify-content: space-between;
}
.chat-bubble.two img.avatar {
order: 1;
margin-left: 10px;
}
.chat-bubble p {
margin-top: 0;
}
.chat-bubble p:last-of-type {
margin-bottom: 0;
}
</style>
<script>
var text = "Lorem ipsum dolor sit amet consectetur, adipisicing elit. Distinctio similique quos ratione omnis impedit, est mollitia amet aspernatur inventore consectetur, autem dolorum at nemo! Voluptas modi eveniet culpa nobis id?";
var words = text.split(" ");
var container = document.getElementById("animatedText");
container.innerHTML = "";
function addWord(index) {
if (index < words.length) {
container.innerHTML += words[index] + " ";
setTimeout(function() {
addWord(index + 1);
}, 50);
}
}
addWord(0);
</script>
I built a quick ASGI prototype demonstrating server-sent events here: https://gist.github.com/simonw/d3d4773666b863e628b1a60d5a20294d
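The gist covers the full ASGI wiring; the SSE framing itself is tiny - each message is one or more data: lines followed by a blank line. A minimal framing helper (illustrative, not taken from the gist):

```python
def sse_format(data):
    # Server-sent events frame: each line of the payload becomes its own
    # "data:" line, and a blank line terminates the event
    return "".join(f"data: {line}\n" for line in data.splitlines()) + "\n"


frame = sse_format("hello\nworld")
# frame is "data: hello\ndata: world\n\n"
```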
Pushed a prototype to the web branch.
Currently needs an OPENAI_API_KEY environment variable. I still need to port it to use llm directly: https://llm.datasette.io/en/stable/python-api.html
Hi, first of all, thanks a lot for creating LLM as well as the other CLI tools. It would be so useful to have something like this so we could interact with LLM via HTTP or WebSocket requests. I'm a Node.js developer, and this would let me integrate any of the llm engines (since the latest version) into my apps without having to manage their complex installations. Any ETA for this feature?
Why not just use Gradio? In about 50 lines of code you could have tabs for a chat interface, displaying and interacting with your logs, etc.
I've been using it for a while and it's terrific.