Agent/conversation/request level temperature
Currently the temperature is hard-coded to 1 (full creative mode).
This might be fine for testing and for some agents and prompts, but not for every request. It should be configurable from a variety of sources:
- each Agent should have a default temperature
- each conversation should be able to override the Agent's default
- each function that calls `create_gpt_chat_completion()` should be able to override the agent/conversation setting - some models may need a maximum temperature (see also #120)
- user should be able to define max/min/default temperature globally (see the sketch below)
This is also important for local models, e.g. Llama is generally run at 0.7-0.8; higher values get weird with code.
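
A minimal sketch of one possible resolution order, assuming hypothetical names (`TemperatureConfig`, `resolve_temperature` are not the actual gpt-pilot API): the most specific setting wins (request > conversation > agent default), and the result is always clamped to the user's global min/max so a model's maximum temperature is never exceeded:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TemperatureConfig:
    """Hypothetical user-level global settings."""
    global_min: float = 0.0
    global_max: float = 1.0
    global_default: float = 0.7


def resolve_temperature(
    config: TemperatureConfig,
    agent_default: Optional[float] = None,
    conversation_override: Optional[float] = None,
    request_override: Optional[float] = None,
) -> float:
    """Pick the most specific temperature, then clamp to the global range."""
    # Most specific wins: request > conversation > agent > global default.
    for candidate in (request_override, conversation_override, agent_default):
        if candidate is not None:
            temperature = candidate
            break
    else:
        temperature = config.global_default
    # Clamp so a per-model maximum (see #120) is never exceeded.
    return min(max(temperature, config.global_min), config.global_max)


# Example: a local Llama setup capped at 0.8 clamps an agent's default of 1.
config = TemperatureConfig(global_max=0.8)
assert resolve_temperature(config, agent_default=1.0) == 0.8
```

The callers of `create_gpt_chat_completion()` would then pass the resolved value through, rather than the current hard-coded 1.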