
Stream disconnected before completion (invalid type: null, expected struct Error)

Open voytas75 opened this issue 1 month ago • 32 comments

What version of Codex is running?

codex-cli 0.58.0 and 0.59.0-alpha.5

What subscription do you have?

Azure OpenAI

Which model were you using?

gpt-5-codex and gpt-5.1-codex

What platform is your computer?

Microsoft Windows NT 10.0.26200.0 x64 (codex in WSL2)

What issue are you seeing?

I am on WSL2 (Ubuntu, latest) working on a Python project. I prompted Codex to analyze the project and suggest features, which worked fine. When I chose one of its suggestions, it started planning, exploring, and updating files, then unexpectedly threw the error below: ■ stream disconnected before completion: failed to parse ErrorResponse: invalid type: null, expected struct Error

Prompting it to continue works, but after a few steps the same error appears. It can still finish implementing a feature, but the error shows up several times during a session, regardless of the kind of work I do with Codex.

What steps can reproduce the bug?

Unknown; it seems random. Possibly related to project size (anything larger than a small project).

What is the expected behavior?

No response

Additional information

No response

voytas75 avatar Nov 15 '25 13:11 voytas75

Potential duplicates detected. Please review them and close your issue if it is a duplicate.

  • #6659
  • #5130
  • #6397
  • #4983

Powered by Codex Action

github-actions[bot] avatar Nov 15 '25 13:11 github-actions[bot]

I reviewed those issues; none of them report this error: stream disconnected before completion: failed to parse ErrorResponse: invalid type: null, expected struct Error

voytas75 avatar Nov 15 '25 13:11 voytas75

NOTE: after moving to 0.59.0-alpha.5 I no longer see the error, only occasional reconnections.

EDIT: after about an hour of working fine, just as I wrote that it was fine I received ■ stream disconnected before completion: failed to parse ErrorResponse: invalid type: null, expected struct Error again. :/

voytas75 avatar Nov 15 '25 16:11 voytas75

Jumped to gpt-5.1-codex: same situation. :(

voytas75 avatar Nov 15 '25 17:11 voytas75

Same

antonsoo avatar Nov 15 '25 18:11 antonsoo

Same here — it seems to occur more frequently when using the *-codex models.

thingersoft avatar Nov 16 '25 19:11 thingersoft

Same issue on GPT-5. Rolling back to o3 fixes it. codex-cli 0.58.0

Uploaded thread: 019a8beb-ca38-7e33-bf88-bf3879ae83f5

Have closed my duplicate post.

datacatalyst-io avatar Nov 17 '25 08:11 datacatalyst-io

Same here, even on 0.59.0-alpha.5

christianvossCaeli avatar Nov 17 '25 12:11 christianvossCaeli

Same issue; it started happening after the 5.1 upgrade (I'm on Azure as well).

vikduf avatar Nov 17 '25 14:11 vikduf

seeing the same behavior for azure on 0.59.0-alpha.6

gadogado avatar Nov 18 '25 00:11 gadogado

same issue here, on azure as well, started happening after 5.1

Zanzavar avatar Nov 18 '25 08:11 Zanzavar

same issue here, on azure as well

ksavvopoulos avatar Nov 18 '25 15:11 ksavvopoulos

same, 5.1-codex and 5-codex, both 0.58.0

andrefilipe90 avatar Nov 18 '25 18:11 andrefilipe90

adding myself to the complainants (although I'm not on windows)

suspiciousfellow avatar Nov 18 '25 18:11 suspiciousfellow

Setting stream_max_retries up to 100 appears to be working as a rather crude workaround, FWIW.

suspiciousfellow avatar Nov 18 '25 18:11 suspiciousfellow
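
A minimal sketch of the workaround above, assuming `stream_max_retries` is a top-level key in the Codex CLI configuration (typically `~/.codex/config.toml`; check the docs for your version):

```toml
# ~/.codex/config.toml
# Retry disconnected streams many times instead of surfacing the parse error.
stream_max_retries = 100
```

This only papers over the disconnects; the underlying error response still fails to parse on each attempt.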

Same here. Tried downgrading the CLI version as well; it didn't help.

KristoferKinberg avatar Nov 18 '25 20:11 KristoferKinberg

Same problem. I'm on Azure gpt-5.1, on a Mac with Codex 0.58.0 installed via Homebrew. I have to keep telling it to continue until it gets its work done.

digitalnelson avatar Nov 19 '25 11:11 digitalnelson

Hey guys, I've increased the TPM (which also raises the RPM you can do) on my Azure deployment, and so far I haven't hit the issue again. Not sure if that's the fix or if I've just been lucky so far.

vikduf avatar Nov 19 '25 11:11 vikduf

> Hey guys, I've increased the TPM (which also raises the RPM you can do) on my Azure deployment, and so far I haven't hit the issue again. Not sure if that's the fix or if I've just been lucky so far.

Seems to have improved it for me. Also on Azure with gpt-5.1-codex (high).

pblan avatar Nov 19 '25 13:11 pblan

Higher TPM is a workaround, not a fix. The original error is due to rate limiting, but it should be handled gracefully; the null that is returned is what triggers the breaking parse error.

datacatalyst-io avatar Nov 19 '25 16:11 datacatalyst-io
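
The failure mode described above can be sketched as follows. This is a hedged illustration only: the actual Codex CLI is written in Rust and uses serde, and the function and field names here are hypothetical. The point is that when the backend presumably sends `{"error": null}` during a rate-limited disconnect, a parser that insists on a structured error object fails hard (mirroring serde's `invalid type: null, expected struct Error`), while a tolerant parser would treat the null as a retryable disconnect:

```python
import json

def parse_error_response(body: str) -> dict:
    """Parse an error body from the streaming API (illustrative only).

    A strict deserializer would raise when "error" is null; here we
    instead treat a null error as a transient disconnect that the
    stream layer should retry rather than surface as a hard failure.
    """
    payload = json.loads(body)
    err = payload.get("error")
    if err is None:
        # Tolerant path: no structured error object was sent.
        # Treat the disconnect as transient and let the stream retry.
        return {"retryable": True,
                "message": "stream disconnected (no error detail)"}
    # Normal path: a structured error object is present.
    return {"retryable": err.get("code") == "rate_limit_exceeded",
            "message": err.get("message", "unknown error")}

print(parse_error_response('{"error": null}'))
print(parse_error_response(
    '{"error": {"code": "rate_limit_exceeded", "message": "Too many tokens"}}'))
```

Under this reading, the fix belongs in the client's deserialization (accept a nullable error field), not in the user's quota settings.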

I am getting this issue on my Mac. Upgraded to 0.60.1, same problem. It fails with gpt-5-codex and gpt-5.1-codex. It is utterly unusable at this point.

ChrisEdwards avatar Nov 20 '25 03:11 ChrisEdwards

Same here as well. Tried the latest codex-cli 0.61.0 (Windows) and am still getting "stream disconnected before completion: failed to parse ErrorResponse: invalid type: null, expected struct Error".

snehginb avatar Nov 21 '25 00:11 snehginb

Same issue here after upgrading to 0.61.0 (Mac) using Azure. Codex with an OpenAI ChatGPT subscription works fine.

leonarddai-solutino avatar Nov 21 '25 07:11 leonarddai-solutino

If anyone has Azure support, it's probably worth reporting it there. This isn't a bug in either Azure or Codex per se; it's an integration problem between them. Given that one likely response is to shift from the Azure API to the direct OpenAI API (which is what we have done), I think Microsoft will be more motivated to fix this than OpenAI...

datacatalyst-io avatar Nov 21 '25 15:11 datacatalyst-io

I reported this to Azure support; their suggestion was to raise the TPM on the deployed model, which has mostly resolved my issues. I'm going to share this thread with my CSM and attach it to the support ticket.

jfreemanwbd avatar Nov 21 '25 16:11 jfreemanwbd

> I reported this to Azure support; their suggestion was to raise the TPM on the deployed model, which has mostly resolved my issues. I'm going to share this thread with my CSM and attach it to the support ticket.

@jfreemanwbd Is there any recommendation for a decent TPM? We use it only for a PoC, and the default 100K was good enough until this stream error popped up.

snehginb avatar Nov 21 '25 16:11 snehginb

Out of frustration I just doubled ours to 300K yesterday morning, and it's been mostly good. I've not had the error today with some light use, but it did still pop up occasionally yesterday afternoon.

I guess what I'm really wondering is how we could have been hitting 150K TPM with just myself and one other user on the deployment. Support was unable to point to anywhere an error was surfaced indicating this was actually the problem. I'm also going to post this thread in the Microsoft Foundry Discord.

jfreemanwbd avatar Nov 21 '25 16:11 jfreemanwbd

It is so annoying; it’s starting to occur way too frequently. :(

Setup: Version v0.63.0, Model gpt-5.1-codex (medium), macOS, VS Code

Error: ■ stream disconnected before completion: failed to parse ErrorResponse: invalid type: null, expected struct Error

› continue

• Explored
  └ Search progressive in api
    Read composer.py
    Search PayloadGenerator in src
    Search build_adapter_registry
    Read init.py

■ stream disconnected before completion: failed to parse ErrorResponse: invalid type: null, expected struct Error

› continue

• Explored
  └ List ls
    Read architecture.md
    Search progressive in architecture.md
    Search adapter in architecture.md

■ stream disconnected before completion: failed to parse ErrorResponse: invalid type: null, expected struct Error

azhar-alhasan avatar Nov 25 '25 07:11 azhar-alhasan

> Higher TPM is a workaround, not a fix. The original error is due to rate limiting, but it should be handled gracefully; the null that is returned is what triggers the breaking parse error.

This workaround resolved the issue for me. I set the TPM as high as I could and the errors stopped.

JariHuomo avatar Nov 25 '25 07:11 JariHuomo

Also on Azure. Getting this on gpt-5-codex with 200K TPM and gpt-5.1-codex with 100K TPM. I don't see any option to increase it, but I can fill out a form to request a quota increase.

On macOS, so issue seems mislabeled.

Also, it does not feel TPM-related; it feels more like time of day.

E.g. I had been using Codex heavily for a few hours, with it taking many iterations and working for long stretches, with no issue. Then the error started popping up. I went looking for a solution, came here, read the thread, and posted a message. 15 minutes later, it errored out with this error almost immediately.

adamal avatar Nov 26 '25 20:11 adamal