
`llm --continue` does not use the last logged conversation id

serpro69 opened this issue 6 months ago · 6 comments

➜ llm logs list | grep conversation
# 2025-05-30T11:22:47    conversation: foo
# 2025-05-30T11:31:28    conversation: foo
# 2025-05-31T12:57:12    conversation: bar

Here, the first two (or rather the 3rd- and 2nd-to-last) conversations were run with `llm --continue --conversation foo`. For the last prompt, however, I only ran `llm --continue`, and it seems to have used a conversation id (and model) from a different prompt (or at least logged the wrong conversation id and model).

Is this intended behavior and I'm misunderstanding how --continue should be used?

serpro69 avatar May 31 '25 13:05 serpro69

You're not supposed to use both -c/--continue and --cid/--conversation at the same time - the -c option is meant to be a shortcut for --cid last-conversation-id.

It may be a bug that the tool doesn't return an error if you attempt to use both at once!
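
If I did decide to treat it as an error, a minimal guard would be enough. Something like this sketch (hypothetical, not what the code does today; the parameter names just mirror the CLI flags):

```python
import click


def check_continue_options(_continue: bool, conversation_id: str | None) -> None:
    # Hypothetical guard, not current llm behaviour: refuse the ambiguous
    # combination instead of silently preferring --conversation.
    if _continue and conversation_id is not None:
        raise click.ClickException(
            "Use either -c/--continue or --cid/--conversation, not both"
        )
```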

simonw avatar May 31 '25 15:05 simonw

Here's the relevant code: https://github.com/simonw/llm/blob/330a5686833b9084c8f3d2de18701cbc4328491b/llm/cli.py#L724-L732

Which calls this:

https://github.com/simonw/llm/blob/330a5686833b9084c8f3d2de18701cbc4328491b/llm/cli.py#L1224-L1238
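
Roughly speaking, the resolution goes: if you passed an explicit ID it loads that conversation, otherwise -c falls back to the most recently created conversation in the logs database. A simplified sketch of that lookup (not the exact code at those lines, and the table/column details are from memory):

```python
import sqlite_utils


def load_conversation_sketch(conversation_id, db_path):
    # Simplified sketch of the lookup - assumes a "conversations" table whose
    # ULID primary keys sort by creation time.
    db = sqlite_utils.Database(db_path)
    if conversation_id is None:
        # -c / --continue with no explicit ID: fall back to the most
        # recently created conversation, if any.
        rows = list(db["conversations"].rows_where(order_by="id desc", limit=1))
        if not rows:
            return None
        conversation_id = rows[0]["id"]
    # Explicit --cid / --conversation: load that conversation directly.
    return db["conversations"].get(conversation_id)
```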

simonw avatar May 31 '25 15:05 simonw

But maybe I'm misunderstanding what you reported?

> Here, the first two (or rather the 3rd- and 2nd-to-last) conversations were run with `llm --continue --conversation foo`. For the last prompt, however, I only ran `llm --continue`, and it seems to have used a conversation id (and model) from a different prompt (or at least logged the wrong conversation id and model).

I'm trying to figure out the right steps to reproduce. I just tried this:

llm "hi"
llm logs -s # to get the conversation ID
llm --conversation 01jwkfb2hf55k977zdxf7nqhan --continue "say it twice"
llm --conversation 01jwkfb2hf55k977zdxf7nqhan --continue "three more times"
llm -c 'Now say goodbye'

Here's the llm logs -c log:

2025-05-31T15:28:34 conversation: 01jwkfb2hf55k977zdxf7nqhan id: 01jwkfb1edxaqsj738zn926fqk

Model: gpt-4.1-mini

Prompt

hi

Response

Hello! How can I assist you today?

2025-05-31T15:29:28

Prompt

say it twice

Response

Hello! How can I assist you today?
Hello! How can I assist you today?

2025-05-31T15:29:38

Prompt

three more times

Response

Hello! How can I assist you today?
Hello! How can I assist you today?
Hello! How can I assist you today?
Hello! How can I assist you today?
Hello! How can I assist you today?

2025-05-31T15:29:49

Prompt

Now say goodbye

Response

Goodbye! If you need anything else, feel free to ask.

That looks correct to me - both the replies where I used --conversation and the last one where I just used -c were recorded in the same thread.

simonw avatar May 31 '25 15:05 simonw

I'm not entirely sure I should add an error for the -c + --conversation combination. Right now, if you pass both, the -c flag is silently ignored in favor of the conversation ID you passed, which I don't think is confusing - it feels intuitive that the explicit ID would "win" in that case.
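
In other words, the effective precedence is roughly this (a sketch, not the literal code):

```python
def effective_conversation_id(_continue, conversation_id, last_conversation_id):
    # Sketch of the precedence described above: an explicit --cid wins,
    # -c falls back to the last conversation, otherwise start a new one.
    if conversation_id:
        return conversation_id
    if _continue:
        return last_conversation_id
    return None
```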

simonw avatar May 31 '25 15:05 simonw

Right, so I was using it kind of wrong (well, kind of...). But I still thought that --continue on its own should continue the last conversation, which it doesn't seem to have done when I looked at the logs. That's the original issue I wanted to report: I ran llm --conversation <id> twice (with the --continue option, but as you said that shouldn't matter since it's just ignored), and then ran llm --continue, but in the logs that last prompt was recorded under a different conversation id.

serpro69 avatar Jun 01 '25 12:06 serpro69

So I just tried to reproduce it, and here's the result I'm getting.

These are the commands I ran with an empty log db:

llm models default gemini-2.5-flash-preview-05-20
llm "Hello. Just say hi back"
llm logs -s
llm --conversation 01jwnq123gn0485nmzyvaxk75x "Greetings. Just greet back"
llm --conversation 01jwnq123gn0485nmzyvaxk75x "Greetings. Just greet back again"
llm logs -s
llm "Hi, how are you" # starting a new coversation
llm logs -s
llm --conversation 01jwnq123gn0485nmzyvaxk75x "How are you?" # continue the older one with explicit id
llm logs -s
llm --continue "How are you doing?" # was hoping this would continue 01jwnq123gn0485nmzyvaxk75x, but it didn't?
llm logs -s

This is the output of llm logs -n 0

# 2025-06-01T12:21:23    conversation: 01jwnq123gn0485nmzyvaxk75x id: 01jwnq12420tgcbdx4p3jwmrbz

Model: **gemini-2.5-flash-preview-05-20**

## Prompt

Hello. Just say hi back

## Response

Hi back!

# 2025-06-01T12:21:55    conversation: 01jwnq123gn0485nmzyvaxk75x id: 01jwnq209g74ecj71cejhn9z32

Model: **gemini-2.5-flash-preview-05-20**

## Prompt

Greetings. Just greet back

## Response

Greetings!

# 2025-06-01T12:22:03    conversation: 01jwnq123gn0485nmzyvaxk75x id: 01jwnq28bmyyrna250ftk62vqj

Model: **gemini-2.5-flash-preview-05-20**

## Prompt

Greetings. Just greet back again

## Response

Greetings!

# 2025-06-01T12:22:15    conversation: 01jwnq2ntvdhgtwmjpfzs2cvkf id: 01jwnq2nvddmf9dk81tv5174n8

Model: **gemini-2.5-flash-preview-05-20**

## Prompt

Hi, how are you

## Response

Hi! I'm doing well, thank you for asking.

How are you today?

# 2025-06-01T12:22:39    conversation: 01jwnq123gn0485nmzyvaxk75x id: 01jwnq3bm7kym46mfczfh9psax

Model: **gemini-2.5-flash-preview-05-20**

## Prompt

How are you?

## Response

As an AI, I don't have feelings or a physical state, so I don't "feel" in the way humans do. However, I am functioning perfectly and ready to assist you!

How are you today?

# 2025-06-01T12:22:57    conversation: 01jwnq2ntvdhgtwmjpfzs2cvkf id: 01jwnq3xghf0a2kqwvb9ks00y3

Model: **gemini-2.5-flash-preview-05-20**

## Prompt

How are you doing?

## Response

I'm still doing very well, thank you! Always ready to help.

How about you? How are things going on your end?

As you can see, the last command I ran was llm --continue. The entry logged just before that prompt had conversation: 01jwnq123gn0485nmzyvaxk75x, but --continue didn't use that conversation and instead continued the other one.
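
For reference, here's a rough way to compare "most recently created conversation" against "conversation of the most recent response" directly in the logs database (assuming the default logs.db has conversations and responses tables - I haven't verified the exact column names):

```python
import sqlite_utils

# Rough inspection sketch - the table and column names below are assumptions
# about the default logs.db layout, not verified against the schema.
db = sqlite_utils.Database("logs.db")  # path from `llm logs path`

# Conversation with the highest (most recently created) id:
last_created = list(db["conversations"].rows_where(order_by="id desc", limit=1))

# Conversation that the most recent response belongs to:
last_response = list(db["responses"].rows_where(order_by="datetime_utc desc", limit=1))

print("last created conversation:", last_created[0]["id"] if last_created else None)
print("conversation of last response:", last_response[0]["conversation_id"] if last_response else None)
```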

serpro69 avatar Jun 01 '25 12:06 serpro69