fix: use max_completion_tokens for Azure reasoning models
Fixes Azure reasoning models (GPT-5, o1) by using max_completion_tokens instead of max_tokens.
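For context, the payload difference is small: reasoning models reject `max_tokens` and expect `max_completion_tokens` instead. A minimal sketch of the mapping this PR performs (the `isReasoningModel` helper and its model-prefix list are illustrative, not opencode's actual code):

```ts
type ChatBody = { model: string; [key: string]: unknown }

// Assumption: a prefix check is enough to spot reasoning models.
const isReasoningModel = (model: string) => /^(o1|o3|gpt-5)/.test(model)

// Put the token limit under whichever field the model accepts.
function withTokenLimit(body: ChatBody, limit: number): ChatBody {
  return isReasoningModel(body.model)
    ? { ...body, max_completion_tokens: limit } // reasoning models
    : { ...body, max_tokens: limit } // everything else
}

console.log(withTokenLimit({ model: "o1" }, 1024))
// -> { model: "o1", max_completion_tokens: 1024 }
```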
@rekram1-node Could you help review this PR for https://github.com/sst/opencode/issues/5421?
/review
@rekram1-node is there anything wrong with the new patch? https://github.com/sst/opencode/commit/7b5a00e3094a99ba5935295b3c80e299f38849fa
@junmediatek feel free to resolve any bot comments that you addressed or want to ignore
Also, I'm not entirely sure this is the perfect fix, and I need to test something with it. So in the meantime (without requiring code changes from us), you should be able to add this:
```ts
import { Plugin } from "@opencode-ai/plugin"

export const AzurePatch: Plugin = async (ctx) => {
  return {
    auth: {
      provider: "gaisf-azure",
      loader: async (getAuth, provider) => {
        return {
          // Wrap fetch so every request body is rewritten before it
          // reaches the Azure endpoint.
          async fetch(input, init) {
            const opts = init ?? {}
            if (opts.body && typeof opts.body === "string") {
              try {
                const body = JSON.parse(opts.body)
                // Reasoning models reject max_tokens, so rename it.
                if (body.max_tokens !== undefined) {
                  body.max_completion_tokens = body.max_tokens
                  delete body.max_tokens
                  opts.body = JSON.stringify(body)
                }
              } catch (e) {
                // Body wasn't JSON; send it through unchanged.
              }
            }
            return fetch(input, {
              ...opts,
              timeout: false, // disable the client-side request timeout
            })
          },
        }
      },
    },
  }
}
```
to ~/.config/opencode/plugin/max_completion_tokens.ts
And assuming your Azure proxy provider is still called gaisf-azure, as it was in the config you showed me, this should function as expected.
You can completely override the fetch implementation for any provider via plugins, and I just wanted to demonstrate that for you.
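As a minimal illustration of that, here's a stripped-down skeleton of the same hook that only injects a header; the provider id "my-provider" and the header are placeholders, not anything from your config:

```ts
import { Plugin } from "@opencode-ai/plugin"

export const HeaderPatch: Plugin = async () => {
  return {
    auth: {
      provider: "my-provider", // placeholder provider id
      loader: async () => ({
        // Every request this provider makes goes through here.
        async fetch(input, init) {
          const headers = new Headers(init?.headers)
          headers.set("x-debug", "1") // tag outgoing requests
          return fetch(input, { ...init, headers })
        },
      }),
    },
  }
}
```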
@rekram1-node
That is a good idea; I will try it. However, the standard solution is to modify the code, so I still need your help with further testing. If you encounter any other issues, feel free to contact me at any time.
Hi @rekram1-node,
The plugin can work around this issue, but the standard solution is still to modify the code, since the OpenAI API reference documents max_completion_tokens as the supported parameter. Also, how is testing progressing on the fix for this issue?
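For reference, a direct call with that parameter looks like this (a sketch only; the resource, deployment, API version, and key are placeholders):

```ts
// Sketch: <resource>, <deployment>, and <api-version> are placeholders.
const res = await fetch(
  "https://<resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=<api-version>",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": process.env.AZURE_OPENAI_API_KEY ?? "",
    },
    body: JSON.stringify({
      messages: [{ role: "user", content: "ping" }],
      max_completion_tokens: 256, // reasoning models reject max_tokens here
    }),
  },
)
console.log(await res.json())
```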
Hi @rekram1-node,
Can this patch be merged?