
Copilot Extensions specific configuration support

biosugar0 opened this issue 1 year ago · 6 comments

Thank you for adding such a convenient feature!
https://github.com/CopilotC-Nvim/CopilotChat.nvim/pull/490

It would be even more useful if we could configure settings for each agent individually. For example, Perplexity AI allows you to use models like those described here:
https://docs.perplexity.ai/guides/model-cards

It might be helpful to have a configuration like the example below:

local opts = {
  debug = false,
  model = 'claude-3.5-sonnet', -- default model
  agents = { -- agent-specific configurations
    perplexityai = {
      model = 'llama-3.1-sonar-huge-128k-online', -- agent-specific model
    },
  },
  prompts = prompts,
}
local chat = require('CopilotChat')
chat.setup(opts)

biosugar0 avatar Nov 19 '24 01:11 biosugar0

I haven’t conducted a detailed investigation yet, but since these Agents can be custom-built, it might be better to allow flexibility in their settings, depending on the case. For example, while the "model" parameter mentioned is relatively general, there could potentially be parameters specific to each Agent.

biosugar0 avatar Nov 19 '24 01:11 biosugar0

Hmm, so if I understand this correctly, this is sent to the /completions endpoint, right? So I can just accept anything in the value for the agent config, merge it with the request we are building, and use that.

deathbeam avatar Nov 19 '24 13:11 deathbeam
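The merge approach described above can be sketched in Lua. This is an illustration only, not CopilotChat's actual implementation; `build_request` and the field names are hypothetical, and the merge uses Neovim's `vim.tbl_deep_extend`:

```lua
-- Sketch only: merge per-agent config into the request body.
-- `build_request` is a hypothetical helper, not CopilotChat's API.
local function build_request(base_body, agent_config)
  -- Per-agent values (e.g. `model`) override the defaults; keys the
  -- plugin doesn't know about are passed through to /completions as-is.
  return vim.tbl_deep_extend('force', base_body, agent_config or {})
end

local body = build_request(
  { model = 'claude-3.5-sonnet', messages = {}, stream = true },
  { model = 'llama-3.1-sonar-huge-128k-online' }
)
-- body.model is now the agent-specific 'llama-3.1-sonar-huge-128k-online'
```

Accepting arbitrary keys (rather than an allow-list) matches biosugar0's point that custom agents may define parameters beyond `model`.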

So I implemented it, but I don't know if there's any way to verify it works (as the response does not contain the model).


deathbeam avatar Nov 19 '24 13:11 deathbeam

It’s difficult to verify the behavior when the model isn’t included in the response. I originally noticed this issue because the websites PerplexityAI referenced had changed.

When I used GPT-4o to make a search request in Japanese through PerplexityAI, it referenced Chinese websites and returned the answer in Japanese. However, when I specified llama-3.1-sonar-huge-128k-online, it correctly referenced Japanese websites and provided an appropriate response.

Currently, GPT-4o also seems to provide accurate responses, so there’s no longer a straightforward way to confirm the behavior.

biosugar0 avatar Nov 19 '24 23:11 biosugar0

Hmm yeah, I don't really want to add it if we don't know it works. Maybe it can be verified with some other extension? If anyone knows how to verify, please do tell.

deathbeam avatar Nov 20 '24 07:11 deathbeam

@deathbeam Looking at the code in the preview SDK, it seems that parameters can be configured for each agent in the CopilotRequestPayload.

https://github.com/copilot-extensions/preview-sdk.js/blob/f6756190b2ec70c6aea4eaa0d3caafd1d3f06ba5/index.d.ts#L93-L104

When I tried running and debugging the example at
https://github.com/copilot-extensions/blackbeard-extension locally, the "model" parameter could not be configured. The behavior did change in the past, but at least for now, modifying "model" appears to have no effect.

biosugar0 avatar Nov 22 '24 06:11 biosugar0
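For context, the body reaching an extension is an OpenAI-style chat completions payload. A rough Lua sketch of its shape (field names assumed from the public chat completions API, not copied from the preview SDK's `CopilotRequestPayload`; see the linked index.d.ts for the exact definition):

```lua
-- Assumed OpenAI-style request body; field names are illustrative.
local payload = {
  model = 'llama-3.1-sonar-huge-128k-online', -- per-agent override; per the
                                              -- observation above, this may
                                              -- be ignored server-side
  messages = {
    { role = 'user', content = 'Search for recent Neovim releases' },
  },
  stream = true,
}
```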