[Feature] Please add support for OpenAI pro models
Problem Description
The app doesn't support OpenAI's pro-tier reasoning models like o1-pro and o3-pro. Attempting to use them returns this error: "This is not a chat model and thus not supported in the v1/chat/completions endpoint."
These are OpenAI's most advanced models: they perform deep reasoning, can think for minutes before responding, and excel at complex problems. But right now there's no way to access them through this interface, which is frustrating when you need that level of reasoning power.
Proposed Solution
Add support for OpenAI's pro reasoning models:
- o1-pro - the full reasoning model
- o3-pro - when it drops
- Whatever other pro models they release
The app should automatically detect these models and route them to the correct endpoint. It also needs to handle the different response format, since these models perform internal reasoning before producing a final answer.
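The detection/routing could be as simple as a prefix check. A minimal sketch (the model-name prefixes and the helper name are assumptions for illustration; they are not Chatbox code):

```python
# Hypothetical routing helper: models served only by the Responses API
# must not be sent to /v1/chat/completions, or the error above occurs.
RESPONSES_ONLY_PREFIXES = ("o1-pro", "o3-pro", "o3-deep-research", "codex-mini")

def endpoint_for_model(model: str) -> str:
    """Return the API path a request for `model` should be routed to."""
    if model.startswith(RESPONSES_ONLY_PREFIXES):
        return "/v1/responses"
    return "/v1/chat/completions"
```

New pro models could then be supported by extending the prefix tuple rather than touching the routing logic.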
Additional Context
Pro models are game-changers for:
- Complex math and logic problems
- Multi-step reasoning tasks
- Code debugging and architecture decisions
- Research and analysis that needs deep thinking
These models think before they speak: they can spend 30+ seconds reasoning internally before returning an answer. That's why they need different API handling.
Right now, if you want to use o1-pro, you have to switch to the OpenAI playground or write your own API calls, which is a hassle.
The pro models also cost more per token, but they're worth it for hard problems. It would be great to have them available here, with proper reasoning-time indicators and so on.
Also needed for o3-deep-research:
https://platform.openai.com/docs/models/o3-deep-research
This requires the Responses API instead of the Chat Completions API.
Need Responses API support.
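For context, the two APIs take differently shaped request bodies, which is why Chat Completions support alone isn't enough. A rough sketch of the difference (field names per OpenAI's public docs; the helper functions are hypothetical):

```python
def chat_completions_body(model: str, text: str) -> dict:
    # Chat Completions: the prompt goes in a `messages` array.
    return {"model": model, "messages": [{"role": "user", "content": text}]}

def responses_body(model: str, text: str) -> dict:
    # Responses API: the prompt goes in `input`, and the response can
    # contain reasoning items before the final answer item.
    return {"model": model, "input": text}
```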
Yeah, still not fixed.
< SOLUTION >
TL;DR:
- Add OpenRouter as a Model Provider in Chatbox
- OpenAI key must be set up in OpenRouter for o3-pro to work
- o3-pro works, but lacks 'reasoning effort' parameter control
- Now all OpenAI Response API models are selectable
Verbose instructions:
Workaround to get o3-pro (o1-pro, codex-mini-latest, etc.) working in Chatbox:
Note: this workaround does not expose the 'Reasoning Effort' parameter controls. Everything else works 100% and even bills directly from your OpenAI account. Be sure to sign up for OpenRouter, get your API key, AND add/enable your OpenAI API key under OpenRouter > Settings > Integrations (BYOK) as it is required for o3-pro (and likely all 'Response API only' OpenAI models).
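For reference, this is the parameter the workaround loses. In a direct Responses API call, reasoning effort is set via a `reasoning` object (field names per OpenAI's docs; the prompt text is illustrative):

```python
# Request body for a direct Responses API call with effort control,
# the knob that isn't exposed through the OpenRouter workaround.
body = {
    "model": "o3-pro",
    "input": "Walk through this proof step by step.",
    "reasoning": {"effort": "high"},  # accepted values: "low", "medium", "high"
}
```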
In Chatbox > Settings > Model Provider > add OpenRouter with your OpenRouter API key:
- API Key: <OPENROUTER_API_KEY>
- API Host: openrouter.ai
- API Path:
Click 'Fetch' and select from the hundreds of LLMs accessible through OpenRouter. o3-pro is on the list.
Click the green + to add o3-pro. (Or select all of them as I did.)
Done!