Local model base_url not working

Open JimmyBenHur opened this issue 1 month ago • 6 comments

What version of Codex is running?

codex-cli 0.63.0

What subscription do you have?

ChatGPT Plus

Which model were you using?

qwen3-coder-30b

What platform is your computer?

Microsoft Windows NT 10.0.26100.0 x64

What issue are you seeing?

When using a local model served by LM Studio on a different PC in the same network, the base_url, which contains the IP address of the other computer, is ignored. This is my config.toml:

model_provider = "lmstudio"
model = "qwen3-coder-30b"

[model_providers.lmstudio]
name = "lmstudio"
base_url = "http://xxx.xxx.xx.xx:1234/v1"

[profiles.qwen3-coder-30b]
model_provider = "lmstudio"
model = "qwen/qwen3-coder-30b"

After failing to connect, the CLI shows this error: Connection failed: error sending request for url (http://localhost:1234/v1/responses). This means it did not use the IP from base_url but fell back to the localhost URL.

What steps can reproduce the bug?

Host a local model on a different machine in your local network and configure Codex as shown above.

What is the expected behavior?

The CLI should use the configured base_url; otherwise it cannot connect, because the default localhost URL is wrong.

Additional information

No response

JimmyBenHur avatar Nov 22 '25 18:11 JimmyBenHur

I'm also experiencing this issue. The last version that didn't have it is 0.58.0.

jeffliu-LL avatar Nov 26 '25 15:11 jeffliu-LL

I'm also having this issue. The base_url in the model provider config is ignored.

neurostream avatar Nov 29 '25 03:11 neurostream

I can confirm this is happening on macOS after updating to codex-cli v0.64.0.

I'm running qwen/qwen3-code-30b on LM Studio (localhost:1234) on macOS. My config deliberately omits wire_api (which should default to "chat"), but the CLI is attempting to hit /v1/responses, which LM Studio doesn't support.
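
For reference, a minimal sketch of the provider entry with the wire format pinned explicitly (the provider name, host, and port are just my local setup; adjust as needed):

[model_providers.lmstudio]
name = "lmstudio"
base_url = "http://localhost:1234/v1"
# Pin the wire format instead of relying on the default;
# use "responses" only if your LM Studio version supports it.
wire_api = "chat"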

Downgrading to v0.63.0 fixed the connection immediately, so this seems to be a regression in how v0.64.0 handles the default API wire format for local providers.

greg-keene avatar Dec 03 '25 01:12 greg-keene

I see that there is a pull request which hasn't been merged; in fact, it was closed. Any updates on this? I'm really waiting for this fix.

JimmyBenHur avatar Dec 04 '25 21:12 JimmyBenHur

Yeah, we're stuck back on 0.58.0 until this is fixed.

kentyman23 avatar Dec 04 '25 21:12 kentyman23

Also experiencing this issue on codex >= 0.59.0 on macOS and Ubuntu 24.04. Downgrading to 0.58.0 fixed it.

jbkroner avatar Dec 04 '25 21:12 jbkroner

I tried this on codex 0.66.0 and am still encountering it. The inference engine runs on macOS; codex runs on an Ubuntu 24.04 server.

neurostream avatar Dec 10 '25 05:12 neurostream

Which wire_api are you using? We recommend using wire_api = "responses". The latest versions of LM Studio support the responses API.
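
For illustration, a sketch of the provider entry from this issue with that setting added (the base_url is the masked placeholder from the original report, not verified here):

[model_providers.lmstudio]
name = "lmstudio"
base_url = "http://xxx.xxx.xx.xx:1234/v1"
# Use the Responses API wire format (supported by recent LM Studio versions).
wire_api = "responses"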

etraut-openai avatar Dec 10 '25 05:12 etraut-openai

We've announced that we're deprecating support in codex for the older "chat/completions" wire_api. Refer to this discussion thread for details.

etraut-openai avatar Dec 10 '25 05:12 etraut-openai

I tried your wire_api suggestion, but nothing changes. The problem is the local-network (IP) base_url: the localhost URL is always used instead.
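
For completeness, a sketch of the provider section I'm testing now (IP masked, wire_api added as suggested), which still results in requests going to localhost:

[model_providers.lmstudio]
name = "lmstudio"
base_url = "http://xxx.xxx.xx.xx:1234/v1"
# Added per the suggestion above; the CLI still calls http://localhost:1234/v1/responses.
wire_api = "responses"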

JimmyBenHur avatar Dec 10 '25 17:12 JimmyBenHur