opencode
[FEATURE]: Support endpoint `/v1/responses` for xAI Providers
Feature hasn't been suggested before.
- [x] I have verified this feature I'm about to request hasn't been suggested before.
Describe the enhancement you want to request
I'd like to request the addition of a new API endpoint /v1/responses specifically tailored for xAI providers (e.g., integrating with Grok or other xAI models). This endpoint would allow developers to generate and retrieve responses from xAI models in a standardized way, similar to existing endpoints like /v1/chat/completions but optimized for xAI's unique capabilities.
What do I want to change or add?
Currently, xAI integrations might rely on custom wrappers or indirect calls to existing OpenAI-compatible endpoints, which can lead to inconsistencies or limitations when leveraging xAI-specific features. Adding /v1/responses would provide a dedicated, provider-specific path that supports:
- Request Format: JSON payload similar to chat completions, e.g.:

```json
{
  "input": [
    {
      "role": "system",
      "content": "You are a helpful assistant that can answer questions and help with tasks."
    },
    {
      "role": "user",
      "content": "What is 101*3?"
    }
  ],
  "model": "grok-4-0709"
}
```
- Response Format: Standardized JSON with fields like `id`, `object` ("response"), `created`, `model`, `choices` (an array with a `message` containing `role` and `content`), and `usage` (prompt/completion tokens); a sketch follows this list. For streaming, it would use Server-Sent Events (SSE).
- Authentication: API key-based, with rate limiting aligned to xAI tiers (e.g., 10k RPM for free tier).
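To make that shape concrete, here is a hypothetical (non-normative) response body following the fields listed above. The specific field names, IDs, and values are assumptions modeled on OpenAI-style responses, not a confirmed xAI schema:

```json
{
  "id": "resp_abc123",
  "object": "response",
  "created": 1720000000,
  "model": "grok-4-0709",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "101 * 3 = 303."
      }
    }
  ],
  "usage": {
    "prompt_tokens": 31,
    "completion_tokens": 9,
    "total_tokens": 40
  }
}
```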
Benefits of implementing this:
- Easier Integration: Developers building multi-provider apps (e.g., using LangChain or Vercel AI SDK) could seamlessly switch to xAI without refactoring code, promoting xAI adoption.
That’s a great suggestion!
Adding /v1/responses specifically for xAI models would make integrations cleaner and more consistent across providers.
It could also simplify migration from OpenAI-compatible APIs — I’d love to see token usage tracking included in the response format.
I think this is already possible; you just need to specify the settings in your opencode.json (like baseURL / apiKey) and hook it up to the OpenAI Vercel AI SDK provider
@rekram1-node how do I configure this in the opencode.json file?
@junmediatek this goes over it: https://opencode.ai/docs/providers/#custom-provider
And you'd need to use an AI SDK package that supports the responses API; right now I think that is only @ai-sdk/openai
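For reference, a minimal sketch of what such a custom-provider config could look like, assuming an OpenAI-compatible xAI endpoint. The provider key, display name, and model entry here are placeholders for illustration, not verified values (https://api.x.ai/v1 is xAI's documented API base URL):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "xai": {
      "npm": "@ai-sdk/openai",
      "name": "xAI",
      "options": {
        "baseURL": "https://api.x.ai/v1"
      },
      "models": {
        "grok-4-0709": { "name": "Grok 4" }
      }
    }
  }
}
```

Whether @ai-sdk/openai actually routes requests through /v1/responses rather than /v1/chat/completions may depend on the SDK version and settings, so treat this as a starting point rather than a confirmed recipe.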
@rekram1-node I have configured opencode.json with "npm": "@ai-sdk/openai"; however, it does not work. How do I configure it for the /v1/responses flow in opencode?
@junmediatek you're gonna have to show me your config for me to help more, I think
{ "$schema": "https://opencode.ai/config.json", "provider": { "gaisf": { "npm": "@ai-sdk/openai-compatible", "name": "gaisf openai", "options": { "baseURL": "https://xxx.xxxx.inc/v1" }, "models": { "azure/gpt-5": { "name": "gpt-5" }, "azure/aide-gpt-4.1": { "name": "gpt-4.1" } } } } }
@rekram1-node could you help me?
@junmediatek what error / issue are you seeing? What is this provider? I've never seen it
@rekram1-node I have found the root cause: the model name was wrong. After changing the model to gpt-5, it works fine
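For anyone hitting the same issue, the fix presumably amounts to using a model ID the gateway actually accepts. A sketch of the corrected config, assuming plain "gpt-5" is the valid ID (that the "azure/" prefix was the wrong part is an inference from the comment above, not confirmed):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "gaisf": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "gaisf openai",
      "options": {
        "baseURL": "https://xxx.xxxx.inc/v1"
      },
      "models": {
        "gpt-5": { "name": "gpt-5" }
      }
    }
  }
}
```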