
fix(github-copilot): auto-route GPT-5+ models to Responses API

christso opened this pull request 3 weeks ago · 7 comments

Automatically routes supported GitHub Copilot models via the Responses API, enabling Responses-only controls (e.g. reasoningEffort/reasoningSummary) for GPT-5+.

  • GPT-5+ models (excluding gpt-5-mini) automatically use Responses API
  • No configuration required
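As a rough sketch of that rule (illustrative names only, not the actual opencode internals), the routing boils down to a single predicate:

```ts
type ApiRoute = "responses" | "chat"

// Route GPT-5 and newer models to the Responses API, keeping gpt-5-mini
// (and all pre-GPT-5 models) on Chat Completions.
function routeForCopilotModel(modelId: string): ApiRoute {
  const id = modelId.toLowerCase()
  if (id === "gpt-5-mini") return "chat"         // explicit exclusion noted above
  if (id.startsWith("gpt-5")) return "responses" // gpt-5, gpt-5.1*, gpt-5.2, codex variants
  return "chat"                                  // gpt-4.1, gpt-4o, everything else
}
```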

Closes #5866

christso · Dec 21 '25

And you said reasoning doesn't work through the Completions API? Is that a Copilot thing?

rekram1-node · Dec 21 '25

> And you said reasoning doesn't work through the Completions API? Is that a Copilot thing?

The reasoning fields are OpenAI-specific. OpenCode currently uses the Responses API only for the Codex models on GitHub Copilot, and I'm not sure why we did that. Probably because some user complained about GPT-5-mini not working, so we (erroneously) moved all GPT models back to the Chat Completions API except Codex. I propose we enable it for all GitHub Copilot OpenAI models that support it.

I also ran some evals on GitHub Copilot (see my gist), and it appears the Chat Completions API does perform some hidden reasoning (somewhere between low and medium). But the point of my PR is to make it transparent and configurable.
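For context on why these controls are Responses-only: in OpenAI's public Responses API they are passed as a `reasoning` object on the request body. A minimal sketch, using the public OpenAI endpoint and an illustrative model id; the Copilot endpoint and authentication differ:

```ts
// reasoningEffort / reasoningSummary map onto the `reasoning` object below.
const res = await fetch("https://api.openai.com/v1/responses", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-5.1",
    input: "Summarize this diff.",
    reasoning: {
      effort: "medium", // reasoningEffort
      summary: "auto",  // reasoningSummary
    },
  }),
})
const data = await res.json()
```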

christso · Dec 21 '25

> OpenCode currently uses the Responses API only for the Codex models on GitHub Copilot, and I'm not sure why we did that.

Well, GitHub Copilot ONLY allows access to the Codex models through its Responses API; everything else goes through the Chat Completions API (with the exception that some models are available on both).

Copilot also returns reasoning in a custom format. To parse that properly we'd need to switch to a custom Chat Completions client, just like we do for Responses (because their Responses format is somewhat custom too).
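Roughly, the work such a custom client would do is pull a non-standard reasoning field out of each streamed delta alongside the normal content. The field name below is purely hypothetical: the thread only says the format is custom, not what it looks like.

```ts
// Hypothetical delta shape; Copilot's actual custom reasoning field is undocumented here.
interface CopilotChatDelta {
  content?: string
  reasoning_text?: string // hypothetical non-standard field carrying reasoning tokens
}

// Accumulate reasoning separately from the visible assistant text on each chunk.
function applyDelta(delta: CopilotChatDelta, acc: { text: string; reasoning: string }) {
  if (delta.reasoning_text) acc.reasoning += delta.reasoning_text
  if (delta.content) acc.text += delta.content
}
```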

rekram1-node · Dec 21 '25

> > OpenCode currently uses the Responses API only for the Codex models on GitHub Copilot, and I'm not sure why we did that.

> Well, GitHub Copilot ONLY allows access to the Codex models through its Responses API; everything else goes through the Chat Completions API (with the exception that some models are available on both).

> Copilot also returns reasoning in a custom format. To parse that properly we'd need to switch to a custom Chat Completions client, just like we do for Responses (because their Responses format is somewhat custom too).

Exactly - that's the rationale here. The Responses API already has the parser built, so we get reasoning support without adding another custom client.

In my issue https://github.com/sst/opencode/issues/5866 I included the list of models and the API each one supports; my PR is based on that. I've been using gpt-5.2 extensively with medium to high thinking and it's a close substitute for Opus 4.5 (only slower), so it's beneficial to let plugins set the reasoning effort.

| Model | API Route |
| --- | --- |
| gpt-4.1 | chat |
| gpt-4o | chat |
| gpt-5 | responses |
| gpt-5-codex | responses |
| gpt-5-mini | chat |
| gpt-5.1 | responses |
| gpt-5.1-codex | responses |
| gpt-5.1-codex-max | responses |
| gpt-5.1-codex-mini | responses |
| gpt-5.2 | responses |
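The same data as a lookup, purely as a convenience sketch (rows copied verbatim from the table; the fallback for unlisted models is an assumption):

```ts
const COPILOT_API_ROUTE: Record<string, "chat" | "responses"> = {
  "gpt-4.1": "chat",
  "gpt-4o": "chat",
  "gpt-5": "responses",
  "gpt-5-codex": "responses",
  "gpt-5-mini": "chat",
  "gpt-5.1": "responses",
  "gpt-5.1-codex": "responses",
  "gpt-5.1-codex-max": "responses",
  "gpt-5.1-codex-mini": "responses",
  "gpt-5.2": "responses",
}

// Unlisted models would presumably fall back to Chat Completions.
const route = (model: string) => COPILOT_API_ROUTE[model] ?? "chat"
```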

christso · Dec 21 '25