
Change the default `max_tokens` configuration value

Open jfmainville opened this issue 1 year ago • 2 comments

Currently, the max_tokens value is set to 300 in the default configuration file (config.lua), which creates a high risk of answers being cut off when interacting with a ChatGPT model. In that regard, I was wondering if we could increase the max_tokens value to 4096 to reduce this risk?

Also, since the default model is currently gpt-3.5-turbo, which supports up to 4096 tokens by default (reference), this change would make the experience more convenient for new users. The same change could also be applied to the other available actions, such as code_readability_analysis and code_completion. We could standardize the definition of the max_tokens attribute across all available actions and models.
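For anyone hitting this before a default change lands, the value can already be overridden from the plugin setup. A minimal sketch (assuming the `openai_params` table exposed in config.lua; the exact key layout may differ between plugin versions):

```lua
-- Hypothetical user config: raise max_tokens so completions
-- are not truncated at the 300-token default.
require("chatgpt").setup({
  openai_params = {
    model = "gpt-3.5-turbo",
    max_tokens = 4096, -- proposed default; model's documented limit
  },
})
```

The same override could in principle be repeated per action (e.g. for code_readability_analysis) until the defaults are standardized.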

jfmainville avatar Mar 29 '24 18:03 jfmainville

I agree. I was frequently very annoyed to see my chat completions abruptly stop until I figured out that I just needed to increase max_tokens.

thiswillbeyourgithub avatar Apr 07 '24 23:04 thiswillbeyourgithub

My very first interaction was cut off, and it took me a while to understand why.

ser avatar Apr 16 '24 11:04 ser