Prompts in Python are not saved in the database
Hello!
So I was using llm to make requests via the Python API, and it seems that prompts sent through the Python API are not saved to the database. I expected them to be logged, because the docs say "llm defaults to logging all prompts and responses to a SQLite database.", and nothing on the Python API page nor on the Logging to SQLite page says otherwise. Is this intended and simply not documented, a bug, or am I missing something?
small reproducible test: test.py:
import llm
model = llm.get_model("gpt-4o-mini")
response = model.prompt("Five surprising names for a pet pelican")
print(response.text())
1. `uv init && uv add llm`
2. `uv run llm logs -n 1` -> outputs your last message
3. `uv run test.py` -> prints the five names
4. `uv run llm logs -n 1` -> should print "Five surprising names for a pet pelican" and the answer, but instead prints the same as step 2.
In any case, thank you very much for your amazing work; it really helps me a lot!
Yeah, I noticed this, too; it seems like the prompts / responses / conversations generated by the Python API are not logged.
I've been trying to figure out if I should write my own logging to SQLite for a Python app powered by LLM, or if there's a way to use the CLI tool's built-in logging.
Trawling around in the source code, though, I found that the Response class returned by, e.g., model.prompt() has a log_to_db() method. You have to locate the database yourself, but it looks like it may be possible to replicate the CLI tool's behaviour; refer to what the CLI does here, for example.
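For anyone who wants to try that, here is a minimal sketch of the idea. It assumes llm's internal helpers logs_db_path (from llm.cli) and migrate (from llm.migrations), plus Response.log_to_db(), work the way the CLI uses them; these are internal details and may change between versions.

```python
import llm
import sqlite_utils
from llm.cli import logs_db_path    # internal helper: path to the CLI's logs.db
from llm.migrations import migrate  # internal helper: creates/updates the logging schema

# Open the same SQLite database the CLI logs to, and make sure its schema exists.
db = sqlite_utils.Database(logs_db_path())
migrate(db)

model = llm.get_model("gpt-4o-mini")
response = model.prompt("Five surprising names for a pet pelican")
print(response.text())

# Mirror what the CLI does after each prompt: write this response to the logs database.
response.log_to_db(db)
```

If that works as expected, `llm logs -n 1` should then show the pelican prompt, since the script writes to the same logs.db the CLI reads from.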
Also got bitten by this one.
This one could be hairy, because a single Python API script can call multiple LLMs, each with different prompt and parameter settings, async usage, and so on, but it would be nice to have (even just as an extra sqlite3/ULID code block?). There are a lot of edge cases, like attachments and fragments, that insert separately. Maybe if you're jumping from the CLI to Python you bring your own observability/eval tool? I just looked at how Phoenix integrated LLMLite. I wonder if anyone has seen something like Opik or W&B Weave?
The existing LLM log is great; I'm just thinking that if it's a lot of new code either way, then maybe an external eval integration adds more to the project?
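To make the "bring your own logging" idea a bit more concrete, here is a minimal sketch of what a separate sqlite3/ULID logging block could look like. The table name, columns, and the use of the third-party python-ulid package are all hypothetical choices for illustration; none of this is provided by llm, and it deliberately ignores the harder cases (conversations, attachments, fragments, async).

```python
import sqlite3
import time

import llm
from ulid import ULID  # hypothetical choice (python-ulid); any sortable unique id would do

# Hypothetical side table for your own app's logging, separate from llm's logs.db.
db = sqlite3.connect("my_app_logs.db")
db.execute(
    """
    CREATE TABLE IF NOT EXISTS responses (
        id TEXT PRIMARY KEY,
        model TEXT,
        prompt TEXT,
        response TEXT,
        created REAL
    )
    """
)

model = llm.get_model("gpt-4o-mini")
prompt = "Five surprising names for a pet pelican"
response = model.prompt(prompt)
text = response.text()  # force the response before logging it

db.execute(
    "INSERT INTO responses VALUES (?, ?, ?, ?, ?)",
    (str(ULID()), model.model_id, prompt, text, time.time()),
)
db.commit()
```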
There's an old issue for this:
- #228
I agree this needs to be more clear in the documentation.