Add support for all Ollama /api/generate parameters
Extended Parameter Support for Ollama API in Hollama Interface
The Hollama interface currently supports a limited set of parameters when making requests to the Ollama generate
API. The current completion request payload looks like this:
let payload: OllamaCompletionRequest = {
model: session.model,
context: session.context,
prompt: session.messages[session.messages.length - 1].content
};
Request
We would like to extend the Hollama interface to support more of the parameters available in the Ollama generate
API. This would provide users with greater control over their interactions with the models.
Proposed Parameters to Add
Based on the Ollama API documentation, I suggest adding support for the following parameters:
- `images`: For multimodal models
- `format`: To specify the response format (e.g., JSON)
- `options`: For additional model parameters (e.g., temperature)
- `system`: To override the system message defined in the Modelfile
- `template`: To override the prompt template defined in the Modelfile
- `stream`: To control whether the response is streamed or returned as a single object
- `raw`: To allow specifying a full templated prompt without formatting
- `keep_alive`: To control how long the model stays loaded in memory
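As a rough sketch of what the extended request could look like, here is the current payload plus the optional fields above. The parameter names follow the Ollama `/api/generate` documentation; the `Session` shape and the `buildPayload` helper are assumptions for illustration, not Hollama's actual code.

```typescript
// Assumed session shape; Hollama's actual Session type may differ.
interface Message {
	role: string;
	content: string;
}

interface Session {
	model: string;
	context?: number[];
	messages: Message[];
}

// Current fields plus the optional parameters from the Ollama /api/generate docs.
interface OllamaCompletionRequest {
	model: string;
	prompt: string;
	context?: number[];
	images?: string[]; // base64-encoded images, for multimodal models
	format?: 'json'; // request a JSON-formatted response
	options?: Record<string, unknown>; // e.g. { temperature: 0.8 }
	system?: string; // overrides the Modelfile system message
	template?: string; // overrides the Modelfile prompt template
	stream?: boolean; // stream tokens vs. return a single object
	raw?: boolean; // send the prompt without template formatting
	keep_alive?: string; // e.g. '5m'; how long the model stays loaded
}

// Hypothetical builder: keeps today's payload as the base and layers
// any user-supplied parameters on top.
function buildPayload(
	session: Session,
	params: Partial<OllamaCompletionRequest> = {}
): OllamaCompletionRequest {
	return {
		model: session.model,
		context: session.context,
		prompt: session.messages[session.messages.length - 1].content,
		...params
	};
}

const session: Session = {
	model: 'llama3',
	context: [1, 2, 3],
	messages: [{ role: 'user', content: 'Hello' }]
};

const payload = buildPayload(session, {
	stream: false,
	options: { temperature: 0.2 },
	keep_alive: '5m'
});
```

Keeping the new parameters in a single optional bag like this would let the settings and session pages each contribute their own subset without changing the request path.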
Implementation Considerations
- The interface would need new input fields/controls for these parameters. Some could live on the settings page, but others may belong on the session page so they can be adjusted on a per-session basis
- Default values should be considered for optional parameters
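On defaults, one option is to send only the parameters the user has actually set, so that anything left blank falls back to Ollama's own defaults (including values from the Modelfile). A minimal sketch, assuming a plain-object payload and a hypothetical `pruneUndefined` helper:

```typescript
// Hypothetical helper: drop undefined fields before serializing, so unset
// parameters fall back to Ollama's / the Modelfile's defaults.
function pruneUndefined<T extends Record<string, unknown>>(obj: T): Partial<T> {
	return Object.fromEntries(
		Object.entries(obj).filter(([, value]) => value !== undefined)
	) as Partial<T>;
}

const body = pruneUndefined({
	model: 'llama3',
	prompt: 'Hello',
	system: undefined, // user left this blank: keep the Modelfile default
	stream: false
});
```

This avoids hard-coding default values in Hollama that could drift out of sync with Ollama's.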
Questions
- Are there any specific parameters that should be prioritized?
- Are there any concerns about exposing these parameters to users through the Hollama interface?