
Functional Question: Is there anyone who uses this great plugin with the "Curie" model?

Open · PorcoRosso85 opened this issue 2 years ago · 1 comment

Hi, I'm thinking about this topic. If we want to use this plugin with the Curie model, should we set the params like this?

{
  welcome_message = WELCOME_MESSAGE, -- set to "" if you don't like the fancy godot robot
  loading_text = "loading",
  question_sign = "", -- you can use emoji if you want e.g. 🙂
  answer_sign = "ﮧ", -- 🤖
  max_line_length = 120,
  yank_register = "+",
  chat_layout = {
    relative = "editor",
    position = "50%",
    size = {
      height = "80%",
      width = "80%",
    },
  },
  settings_window = {
    border = {
      style = "rounded",
      text = {
        top = " Settings ",
      },
    },
  },
  chat_window = {
    filetype = "chatgpt",
    border = {
      highlight = "FloatBorder",
      style = "rounded",
      text = {
        top = " ChatGPT ",
      },
    },
  },
  chat_input = {
    prompt = "  ",
    border = {
      highlight = "FloatBorder",
      style = "rounded",
      text = {
        top_align = "center",
        top = " Prompt ",
      },
    },
  },
  openai_params = {
    model = "curie",
    frequency_penalty = 0,
    presence_penalty = 0,
    max_tokens = 300,
    temperature = 0,
    top_p = 1,
    n = 1,
  },
  openai_edit_params = {
    model = "curie",
    temperature = 0,
    top_p = 1,
    n = 1,
  },
  keymaps = {
    close = { "<C-c>", "<Esc>" },
    yank_last = "<C-y>",
    scroll_up = "<C-u>",
    scroll_down = "<C-d>",
    toggle_settings = "<C-o>",
    new_session = "<C-n>",
    cycle_windows = "<Tab>",
  },
}
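
For reference, here is a minimal sketch of how I imagine passing just those params to the plugin's setup() call, leaving everything else at its default. Whether the API accepts the bare alias "curie" or wants the full "text-curie-001" name is an open question on the OpenAI side, not something the plugin decides, so treat the model string as an untested assumption.

require("chatgpt").setup({
  -- only the completion params are overridden here; all other options keep their defaults
  openai_params = {
    model = "curie",           -- possibly "text-curie-001"; untested assumption
    frequency_penalty = 0,
    presence_penalty = 0,
    max_tokens = 300,
    temperature = 0,
    top_p = 1,
    n = 1,
  },
  openai_edit_params = {
    model = "curie",
    temperature = 0,
    top_p = 1,
    n = 1,
  },
})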

If anyone has already tried this, please let me know.

PorcoRosso85 · Jan 21 '23 11:01

Curie's params:

prompt: The text prompt to generate a completion for.
model: The name of the model to use, in this case "curie".
max_tokens: The maximum number of tokens to generate in the response (tokens are sub-word units, not whole words).
stop: A string, or a list of up to four strings, at which the API stops generating further tokens.
temperature: Controls the randomness or "creativity" of the output. Lower values produce more conservative, deterministic text; higher values (up to 2) produce more varied text.
top_p: Nucleus sampling; only sample from the smallest set of most likely tokens whose cumulative probability reaches this value.
n: The number of completions to generate for each prompt.
frequency_penalty: Penalizes tokens in proportion to how often they have already appeared in the generated text, reducing verbatim repetition.
presence_penalty: Penalizes tokens that have appeared at all so far, encouraging the model to move on to new topics.
stream: A boolean that controls whether the API returns the full response at once or streams it back token by token.
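
To make the mapping concrete, here is a rough sketch of the raw /v1/completions request those parameters end up in, runnable from inside Neovim. It assumes curl is on the PATH and OPENAI_API_KEY is set in the environment; the prompt string is just a placeholder, and this is only an approximation of what the plugin builds internally, not its actual code.

-- build the JSON request body from the same params as above
local body = vim.fn.json_encode({
  model = "curie",
  prompt = "Write a short haiku about Neovim.",  -- placeholder prompt
  max_tokens = 300,
  temperature = 0,
  top_p = 1,
  n = 1,
  frequency_penalty = 0,
  presence_penalty = 0,
})

-- send the request with curl and print the JSON response
local response = vim.fn.system({
  "curl", "-s", "https://api.openai.com/v1/completions",
  "-H", "Content-Type: application/json",
  "-H", "Authorization: Bearer " .. (vim.env.OPENAI_API_KEY or ""),
  "-d", body,
})
print(response)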

PorcoRosso85 · Jan 21 '23 11:01