allow temperature to be set on the command line (w/out using a modelfile)
would be super helpful to set temperature for models via the command line, rather than having to create a separate model file for every model-and-temperature combination.
Trying to get a handle on the use case here. Are you looking to do something like:
$ ollama run --temperature 0.7 gemma2
>>>
instead of:
$ ollama run gemma2
>>> /set parameter temperature 0.7
>>>
yes exactly.
the second example is useful for one-off tests, but since /set parameter temperature can't be scripted, it rules out every situation where the prompt is generated outside of ollama (e.g. when the prompt isn't created interactively).
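For example, when the prompt is piped in, there is no REPL in which to issue /set at all:
$ echo "who was president of US in 2023" | ollama run gemma2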
Understood. We currently use expect for this sort of scripting, and command line args would be generally useful. For example:
#!/bin/bash
# Wrapper that sets temperature/num_ctx via expect, since the
# interactive /set commands can't be scripted directly.
temperature=0
num_ctx=2048

# Parse -t/--temperature and -n/--num_ctx; everything after -- goes to ollama.
eval set -- "$(getopt --options=t:n: --longoptions=temperature:,num_ctx: --name "$0" -- "$@")"
while : ; do
    case "$1" in
        -t|--temperature) temperature=$2 ; shift 2 ;;
        -n|--num_ctx)     num_ctx=$2     ; shift 2 ;;
        --) shift ; break ;;
        *)  exit 1 ;;
    esac
done

# First two remaining args are the ollama subcommand and model, e.g. "run gemma2".
args=("${@:1:2}")
# With no further args, stay interactive; otherwise send the rest as a one-shot prompt.
if [ "$#" -eq 2 ]; then
    command=interact
else
    command='send "'"${*:3}"'\r" ; expect ">>>" close'
fi

expect -f <(cat <<EOF
spawn ollama ${args[*]}
expect ">>>"
send "/set parameter temperature $temperature\r" ; expect ">>>"
send "/set parameter num_ctx $num_ctx\r" ; expect ">>>"
$command
EOF
)
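Saved as, say, ollama-run.sh (the name is just for illustration), usage then looks like:
$ ./ollama-run.sh -t 0.7 run gemma2
for an interactive session, or
$ ./ollama-run.sh -t 0.7 run gemma2 "who was president of US in 2023"
for a scripted one-shot prompt.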
yes, it can also be done with curl via the REST API:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "who was president of US in 2023",
  "options": {
    "temperature": 0
  }
}'
but since temperature and the prompt sort of "go together" in terms of how much they can impact the response, it would be faster to be able to specify both (vs. just the prompt, as is possible now) on the built-in command line, in one place.
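For what it's worth, the same request can also be made from Go with ollama's api client package; a minimal sketch, assuming ClientFromEnvironment and Generate as currently published (model and prompt mirror the curl example above):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ollama/ollama/api"
)

func main() {
	// Honors OLLAMA_HOST, defaulting to http://localhost:11434.
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}

	req := &api.GenerateRequest{
		Model:  "llama3",
		Prompt: "who was president of US in 2023",
		// Options takes the same keys as /set parameter, temperature included.
		Options: map[string]interface{}{"temperature": 0.0},
	}

	// The callback runs once per streamed chunk of the response.
	err = client.Generate(context.Background(), req, func(resp api.GenerateResponse) error {
		fmt.Print(resp.Response)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println()
}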
Looking at this briefly: based on the above two workarounds, this feature conceptually seems like it should be easy to add. Perhaps people who know the source better can comment further.
It appears that the runOptions struct corresponds to the options listed in 'ollama run --help' in the CLI. So ideally the goal would be to parse a new --temperature option into runOptions, e.g.:
type runOptions struct {
    Model       string
    ParentModel string
    Prompt      string
    Messages    []api.Message
    WordWrap    bool
    Format      string
    System      string
    Template    string
    Images      []api.ImageData
    Options     map[string]interface{}
    MultiModal  bool
    KeepAlive   *api.Duration
    Temperature float64 // a plain float64 field like this is valid Go; the value could also live in the existing Options map
}
then the questions would be:
- where does parsing of options for the CLI run command occur?
- what happens when '/set parameter temperature' is called during interactive chat?
(a sketch of how the flag might be wired in follows below)
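ollama's CLI is built on cobra (the run command and its handler live in cmd/cmd.go), and in interactive mode /set parameter writes into the options map sent with each request. So one plausible wiring, sketched under the assumption that the run handler has access to a runOptions value like the one above, is to register a flag and copy it into that same Options map:

// in the function that builds the run command:
runCmd.Flags().Float64("temperature", 0, "sampling temperature for this session")

// in the run handler, before the request is sent:
if cmd.Flags().Changed("temperature") {
	t, err := cmd.Flags().GetFloat64("temperature")
	if err != nil {
		return err
	}
	if opts.Options == nil {
		opts.Options = map[string]interface{}{}
	}
	// lands in the same place as "/set parameter temperature",
	// i.e. the options map sent with each generate/chat request
	opts.Options["temperature"] = t
}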
My question about this on StackExchange has over 10k views now, so it seems that many people would appreciate this feature!
Patch allowing that: https://github.com/ollama/ollama/pull/8340