
Agent/conversation/request level temperature

Open nalbion opened this issue 2 years ago • 2 comments

Currently the temperature is hard-coded to 1 (full creative mode).

This might suit testing and some agents and prompts, but not every request. The temperature should be configurable from a variety of sources:

  • each Agent should have a default temperature
  • each conversation should be able to override the Agent's default
  • each function that calls create_gpt_chat_completion() should be able to override the agent/conversation setting
  • some models may need a maximum temperature (see also #120 )
  • user should be able to define max/min/default temperature globally
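The precedence described above could be sketched roughly as follows. This is a hypothetical helper, not actual gpt-pilot code; all names (`resolve_temperature`, the `GLOBAL_*` constants) are illustrative assumptions:

```python
# Hypothetical sketch only: these names do not exist in gpt-pilot; they
# just illustrate the precedence proposed in this issue.
GLOBAL_MIN, GLOBAL_MAX, GLOBAL_DEFAULT = 0.0, 1.0, 0.7  # user-configurable

def resolve_temperature(agent_default=None, conversation=None,
                        request=None, model_max=None):
    """Pick the most specific setting, then clamp to the allowed range."""
    # Most specific wins: request > conversation > agent > global default.
    temp = next((t for t in (request, conversation, agent_default)
                 if t is not None), GLOBAL_DEFAULT)
    # Clamp to the user-defined global range.
    temp = max(GLOBAL_MIN, min(temp, GLOBAL_MAX))
    # Some models need a lower ceiling (see #120).
    if model_max is not None:
        temp = min(temp, model_max)
    return temp
```

For example, a request-level value of 0.9 would win over an agent default of 0.2, but still be clamped down to a model maximum of 0.8.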

nalbion avatar Oct 21 '23 05:10 nalbion

Also important for local models; e.g. LLaMA is generally run at 0.7–0.8, and higher values get weird with code.

CRD716 avatar Oct 22 '23 07:10 CRD716