Stream disconnected before completion (invalid type: null, expected struct Error)
What version of Codex is running?
codex-cli 0.58.0 and 0.59.0-alpha.5
What subscription do you have?
Azure OpenAI
Which model were you using?
gpt-5-codex and gpt-5.1-codex
What platform is your computer?
Microsoft Windows NT 10.0.26200.0 x64 (codex in WSL2)
What issue are you seeing?
I am on WSL2 (latest Ubuntu), working on a Python project. I prompted Codex to analyze the project and suggest features, which worked fine, and it made suggestions. When I chose one of the ideas, it started planning, exploring, and updating files, then unexpectedly threw the error below:
■ stream disconnected before completion: failed to parse ErrorResponse: invalid type: null, expected struct Error
If I prompt it to continue, the same error appears again after a few steps. It can eventually finish implementing a feature, but the error shows up several times during the session, regardless of the type of work I do with Codex.
What steps can reproduce the bug?
Unknown; it seems random. It may correlate with project size (anything larger than a small project).
What is the expected behavior?
No response
Additional information
No response
Potential duplicates detected. Please review them and close your issue if it is a duplicate.
- #6659
- #5130
- #6397
- #4983
Powered by Codex Action
Issues were reviewed and none were found with this error: stream disconnected before completion: failed to parse ErrorResponse: invalid type: null, expected struct Error
NOTE: after moving to 0.59.0-alpha.5 I no longer saw the error, only occasional reconnections.
EDIT: it ran fine for an hour :) Right after I wrote that it was fine, I received ■ stream disconnected before completion: failed to parse ErrorResponse: invalid type: null, expected struct Error, hahaha. :/
Switched to gpt-5.1-codex = same situation :(
Same
Same here — it seems to occur more frequently when using the *-codex models.
Same issue on GPT-5. Rolling back to o3 fixes it. codex-cli 0.58.0
Uploaded thread: 019a8beb-ca38-7e33-bf88-bf3879ae83f5
Have closed my duplicate post.
Same here, even on 0.59.0-alpha.5
Same issue, started happening after the 5.1 upgrade (I'm on Azure as well)
seeing the same behavior for azure on 0.59.0-alpha.6
same issue here, on azure as well, started happening after 5.1
same issue here, on azure as well
same, 5.1-codex and 5-codex, both 0.58.0
adding myself to the complainants (although I'm not on windows)
Putting `stream_max_retries` up to 100 appears to be working as a rather rubbish workaround, fwiw.
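For anyone wanting to try the same workaround: below is a sketch of where that knob could live, assuming a Codex CLI that reads `~/.codex/config.toml` and that the Azure endpoint is configured as a custom model provider. The provider name, endpoint URL, and API version here are illustrative placeholders, not authoritative values; check the config docs for your Codex version.

```toml
# ~/.codex/config.toml (sketch; adjust names, URLs, and api-version to your deployment)

model = "gpt-5.1-codex"
model_provider = "azure"

[model_providers.azure]
name = "Azure OpenAI"
base_url = "https://YOUR-RESOURCE.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY"
wire_api = "responses"
query_params = { api-version = "2025-04-01-preview" }
# Retry the stream many more times before surfacing the error:
stream_max_retries = 100
```

This only papers over the problem: each retry still burns time, and the underlying rate-limit response is still being mis-parsed.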
Same here. Tried downgrading the CLI version as well; didn't help.
Same problem. I'm on Azure gpt-5.1, on a Mac with Codex 0.58.0 installed via Homebrew. I have to keep telling it to continue until it gets its work done.
Hey guys, I've increased the TPM (which also increases the RPM you get) on my Azure deployment, and so far I haven't hit the issue again. Not sure if that's the fix or I've just been lucky so far.
Seems to have improved it for me. Also on Azure with gpt-5.1-codex (high).
Higher TPM is a workaround, not a fix. The original error is due to rate limiting but should be handled gracefully; the `null` that comes back is what triggers the breaking error.
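The failure mode described above can be illustrated with a minimal sketch (Python here for brevity; the actual Codex client is Rust/serde). A parser that requires the `error` field to be a struct chokes on `null`, while one that treats it as optional can degrade gracefully and let the caller retry. The `{"error": null}` payload shape is an assumption inferred from the error message, not a confirmed Azure response body.

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApiError:
    message: str
    code: Optional[str] = None

def parse_strict(raw: str) -> ApiError:
    # Mirrors "expected struct Error": the error field MUST be an object.
    data = json.loads(raw)
    err = data["error"]
    if not isinstance(err, dict):
        raise TypeError(f"invalid type: {type(err).__name__}, expected struct Error")
    return ApiError(message=err.get("message", ""), code=err.get("code"))

def parse_tolerant(raw: str) -> Optional[ApiError]:
    # Treats a null/missing error body as "no structured error available",
    # letting the caller fall back to retrying the stream instead of aborting.
    data = json.loads(raw)
    err = data.get("error")
    if not isinstance(err, dict):
        return None
    return ApiError(message=err.get("message", ""), code=err.get("code"))

payload = '{"error": null}'  # assumed shape of the rate-limit response

try:
    parse_strict(payload)
except TypeError as e:
    print("strict parser:", e)   # strict parser: invalid type: NoneType, expected struct Error

print("tolerant parser:", parse_tolerant(payload))  # tolerant parser: None
```

In Rust/serde terms, the equivalent fix would be declaring the field as `Option<Error>` (or otherwise tolerating `null`) instead of a required `Error` struct.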
I am getting this issue on my Mac. Upgraded to 0.60.1, same problem. It fails with gpt-5-codex and gpt-5.1-codex. It is utterly unusable at this point.
Same here as well. Tried the latest codex-cli 0.61.0 (Windows) and I'm still getting "stream disconnected before completion: failed to parse ErrorResponse: invalid type: null, expected struct Error"
same issue here after upgrading to 0.61.0 (Mac) using Azure; Codex with an OpenAI ChatGPT subscription works fine
If anyone has Azure support, it's probably worth reporting it there. This isn't a bug in either Azure or Codex per se; it's an integration problem between them. Given that one likely response is to shift from the Azure API to the direct OpenAI API (which is what we have done), I think Microsoft will be more motivated to fix this than OpenAI...
I reported this to Azure support; their suggestion was to raise the TPM on the deployed model, which has mostly resolved my issues. I'm going to share this thread with my CSM and the support ticket.
@jfreemanwbd Is there any recommendation for a decent TPM? We only use it for some PoC work, and the default 100K was good enough until this stream error popped up.
Out of frustration I just doubled ours to 300k yesterday AM and it's been mostly good, I've not had the error today with some light use, but it did still pop up occasionally yesterday afternoon.
I guess what I'm really wondering is how we could have been hitting 150k TPM with just myself and one other user on the deployment. Support was unable to point to any place where an error was surfaced indicating this was actually the problem. I'm also going to post this thread in the Microsoft Foundry Discord.
It is so annoying; it’s starting to occur way too frequently. :(
Setup: Version v0.63.0, Model: gpt-5.1-codex (medium), macOS, VS Code

Error:

```
■ stream disconnected before completion: failed to parse ErrorResponse: invalid type: null, expected struct Error
› continue
• Explored
  └ Search progressive in api
    Read composer.py
    Search PayloadGenerator in src
    Search build_adapter_registry
    Read __init__.py
■ stream disconnected before completion: failed to parse ErrorResponse: invalid type: null, expected struct Error
› continue
• Explored
  └ List ls
    Read architecture.md
    Search progressive in architecture.md
    Search adapter in architecture.md
■ stream disconnected before completion: failed to parse ErrorResponse: invalid type: null, expected struct Error
```
> Higher TPM is a workaround, not a fix. The original error is due to rate-limiting but should be handled gracefully. That null is returned throws the breaking error.
This workaround resolved the issue for me. I set the TPM as high as I could and the errors stopped.
Also on Azure. Getting this on gpt-5-codex with 200K TPM and gpt-5.1-codex with 100K TPM. I don't see any option to increase it, but I can fill out a form to request a quota increase.
On macOS, so issue seems mislabeled.
Also, it does not feel like it is TPM-related. Feels more like time of day.
E.g. I have been using codex heavily for a few hours, with it taking many iterations and working for a long time. No issue. Then it started popping up. Went looking for a solution. Came here, read the thread. Posted a message. 15 min later, it errors out with this error almost immediately.