
prompts in python are not saved in the database

Open simonmaeldev opened this issue 10 months ago • 4 comments

Hello!

So I was using llm to make requests via the Python API, and it seems that prompts made through the Python API are not saved to the database. I expected them to be logged, because the docs say "llm defaults to logging all prompts and responses to a SQLite database.", and neither the Python API page nor the Logging to SQLite page says anything about this. Is this intended and just undocumented, a bug, or am I missing something?

Small reproducible test (test.py):

import llm

model = llm.get_model("gpt-4o-mini")
response = model.prompt("Five surprising names for a pet pelican")
print(response.text())
  1. uv init && uv add llm
  2. uv run llm logs -n 1 -> output your last message
  3. uv run test.py -> print 5 names
  4. uv run llm logs -n 1 -> should print "Five surprising names for a pet pelican" and the answer, print the same as 2.


Otherwise, thank you very much for your amazing work, it really helps me a lot!

simonmaeldev avatar Feb 04 '25 12:02 simonmaeldev

Yeah, I noticed this, too; it seems like the prompts / responses / conversations generated by the Python API are not logged.

I've been trying to figure out if I should write my own logging to SQLite for a Python app powered by LLM, or if there's a way to use the CLI tool's built-in logging.


Trawling around in the source code, though, I found that the Response class returned by, e.g., model.prompt() has a log_to_db() method. You have to locate the database yourself, but it looks like it may be possible to replicate the CLI tool's behaviour; refer to what the CLI does here, for example.
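
For what it's worth, here's a rough sketch of what I mean. It's untested and leans on internal helpers (llm.cli.logs_db_path, llm.migrations.migrate, Response.log_to_db) that aren't part of the documented Python API, so treat it as an assumption about the current internals rather than a supported recipe:

import llm
import sqlite_utils

# Internal helpers, not documented API -- these could change between versions
from llm.cli import logs_db_path    # path to the CLI's logs.db
from llm.migrations import migrate  # ensures the logging schema exists

model = llm.get_model("gpt-4o-mini")
response = model.prompt("Five surprising names for a pet pelican")
print(response.text())  # forces the response to complete before logging

# Open the same SQLite database the CLI uses and apply any pending migrations
db = sqlite_utils.Database(logs_db_path())
migrate(db)

# Write this prompt/response pair into the logs, mirroring what the CLI does
response.log_to_db(db)

If that works as I think it does, a subsequent llm logs -n 1 should show the pelican prompt.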

danj-ca avatar Feb 15 '25 18:02 danj-ca

Also got bitten by this one.

koaning avatar Mar 11 '25 20:03 koaning

This one could be hairy, because a single Python API script can call multiple LLMs, each with different prompt + parameter settings, async, etc., but it would be nice to have this (even as an extra sqlite3/ULID code block?). There are a lot of edge cases, like attachments and fragments, that insert separately. Maybe if you're jumping from the CLI to the Python API you should bring your own observability/eval tool? I just looked at how Phoenix integrated LLMLite. I wonder if anyone has seen something like Opik or W&B Weave?

The existing LLM log is great; I'm just thinking that if it's a lot of new code either way, then maybe an external eval integration adds more to the project?

jimmy6DOF avatar Apr 10 '25 10:04 jimmy6DOF

There's an old issue for this:

  • #228

I agree this needs to be more clear in the documentation.

simonw avatar May 24 '25 18:05 simonw