llm
[Q] What happens when `--continue` and `--model` are used together?
What happens when `--continue` and `--model` are used together? Will the new model specified by `--model` be used, even if the previous conversation had a different model?
This is unclear in the docs:

> This will re-send the prompts and responses for the previous conversation as part of the call to the language model. Note that this can add up quickly in terms of tokens, especially if you are using expensive models. `--continue` will automatically use the same model as the conversation that you are continuing, even if you omit the `-m/--model` option.
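For concreteness, this is the ambiguous sequence of commands (the model IDs are only examples; the open question is which model the second call actually uses):

```shell
# Start a conversation with one model
llm -m gpt-4 'Tell me a joke'

# Continue the conversation, but name a different model.
# Unclear from the docs: does -m override the conversation's
# original model here, or does --continue keep using gpt-4?
llm --continue -m gpt-3.5-turbo 'Explain that joke'
```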
I'm not sure myself - I'll figure this out and document it.