
[FEATURE]: Support endpoint `/v1/responses` for xAI Providers

Open H0llyW00dzZ opened this issue 3 weeks ago • 2 comments

Feature hasn't been suggested before.

  • [x] I have verified this feature I'm about to request hasn't been suggested before.

Describe the enhancement you want to request

I'd like to request the addition of a new API endpoint /v1/responses specifically tailored for xAI providers (e.g., integrating with Grok or other xAI models). This endpoint would allow developers to generate and retrieve responses from xAI models in a standardized way, similar to existing endpoints like /v1/chat/completions but optimized for xAI's unique capabilities.

What do I want to change or add?
Currently, xAI integrations might rely on custom wrappers or indirect calls to existing OpenAI-compatible endpoints, which can lead to inconsistencies or limitations when leveraging xAI-specific features. Adding /v1/responses would provide a dedicated, provider-specific path that supports:

  • Request Format: JSON payload similar to chat completions, e.g.:
{
  "input": [
    {
      "role": "system",
      "content": "You are a helpful assistant that can answer questions and help with tasks."
    },
    {
      "role": "user",
      "content": "What is 101*3?"
    }
  ],
  "model": "grok-4-0709"
}
  • Response Format: Standardized JSON with fields like id, object ("response"), created, model, choices (array with message containing role and content), and usage (prompt/completion tokens). For streaming, it would use Server-Sent Events (SSE).
  • Authentication: API key-based, with rate limiting aligned to xAI tiers (e.g., 10k RPM for free tier).
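To make the proposal concrete, a hypothetical non-streaming response for the request above might look like the following. The field names follow the structure described in the Response Format bullet; the specific values (id, timestamps, token counts, answer text) are purely illustrative:

{
  "id": "resp_abc123",
  "object": "response",
  "created": 1730721600,
  "model": "grok-4-0709",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "101 * 3 = 303."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 31,
    "completion_tokens": 10,
    "total_tokens": 41
  }
}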

Benefits of implementing this:

  1. Easier Integration: Developers building multi-provider apps (e.g., using LangChain or Vercel AI SDK) could seamlessly switch to xAI without refactoring code, promoting xAI adoption.

H0llyW00dzZ avatar Nov 04 '25 11:11 H0llyW00dzZ

That’s a great suggestion!
Adding /v1/responses specifically for xAI models would make integrations cleaner and consistent across providers.
It could also simplify migration from OpenAI-compatible APIs — I’d love to see token usage tracking included in the response format.

mirbasit01 avatar Nov 04 '25 12:11 mirbasit01

I think this is already possible — you just need to specify the settings in your opencode.json (like baseURL / API key) and hook it up to the OpenAI Vercel AI SDK provider

rekram1-node avatar Nov 04 '25 15:11 rekram1-node

I think this is already possible — you just need to specify the settings in your opencode.json (like baseURL / API key) and hook it up to the OpenAI Vercel AI SDK provider

@rekram1-node how do I configure this in the opencode.json file?

junmediatek avatar Nov 10 '25 05:11 junmediatek

@junmediatek this goes over it: https://opencode.ai/docs/providers/#custom-provider

And you'd need to use an AI SDK package that supports the Responses API; right now I think that is only @ai-sdk/openai
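For reference, a minimal opencode.json along those lines might look like the sketch below. This is an illustrative assumption, not a verified config — the provider key, baseURL, and model id are placeholders you would replace with your own values:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "xai": {
      "npm": "@ai-sdk/openai",
      "name": "xAI",
      "options": {
        "baseURL": "https://api.x.ai/v1"
      },
      "models": {
        "grok-4-0709": {
          "name": "grok-4-0709"
        }
      }
    }
  }
}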

rekram1-node avatar Nov 10 '25 05:11 rekram1-node

https://opencode.ai/docs/providers/#custom-provider

@rekram1-node I have configured opencode.json with "npm": "@ai-sdk/openai"; however, it does not work. How do I configure it for the /v1/responses flow in opencode?

junmediatek avatar Nov 10 '25 05:11 junmediatek

@junmediatek you're gonna have to show me your config for me to help more I think

rekram1-node avatar Nov 10 '25 05:11 rekram1-node

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "gaisf": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "gaisf openai",
      "options": {
        "baseURL": "https://xxx.xxxx.inc/v1"
      },
      "models": {
        "azure/gpt-5": {
          "name": "gpt-5"
        },
        "azure/aide-gpt-4.1": {
          "name": "gpt-4.1"
        }
      }
    }
  }
}

@rekram1-node could you help me?

junmediatek avatar Nov 10 '25 05:11 junmediatek

@junmediatek what error / issue are you seeing? What is this provider? I've never seen it

rekram1-node avatar Nov 10 '25 05:11 rekram1-node

2025-11-10T060322.log

@rekram1-node error log

junmediatek avatar Nov 10 '25 06:11 junmediatek

@rekram1-node I have found the root cause: the model name was wrong. After changing the model to gpt-5, it works fine
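For anyone hitting the same issue: the key in the models block has to match the model id the backend actually expects. A sketch of what the corrected models block from the earlier config might look like, assuming the server expects the plain id gpt-5 rather than the azure/-prefixed key:

"models": {
  "gpt-5": {
    "name": "gpt-5"
  }
}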

junmediatek avatar Nov 10 '25 06:11 junmediatek