Lots of "request is too large..." errors
Describe the bug
I get lots of "Oops, your request is too large..." errors. But this also happens with just a simple question and one file added. A simple "continue" helps most of the time, until the next error message. I'm currently using GPT-5, but this also happens with Claude 4.
Versions
- Copilot for Xcode: [e.g. 0.43.0]
- Xcode: [e.g. 2.0]
- macOS: [e.g. 15.6.1]
Steps to reproduce
Just add some messages to Copilot.
Screenshots
And one more, comes up very often.
Yes. Same problem.
I want to love GitHub Copilot for Xcode, but between the "Request too large" and "an error occurred while generating a response" errors, it's tough.
I find myself resorting to the opencode CLI more and more. Hopefully GitHub Copilot for Xcode can become as stable and offer more models (xAI models, for example), like opencode does.
Would you mind sharing the logs at /Users/<name>/Library/Logs/GitHubCopilot/github-copilot-for-xcode.log?
"Your request is too large..." is rate limit related issue, if you consume lots of tokens in a short time, you may encounter this error. the suggestion is to retry later.
Not sure if the log gets overwritten each session or is cumulative. Here's the one I have. If it's not useful because the log gets overwritten I can try using it for a while and copy the log again after I experience the issues noted. For now I've more or less abandoned it for opencode where there's also access to Grok Code Fast 1 (which truly is fast).
"Your request is too large..." is rate limit related issue, if you consume lots of tokens in a short time, you may encounter this error. the suggestion is to retry later.
I'm not using it more or less than before. And why do I have 300 premium requests included if I'm hitting "another rate limit"?
I find myself in the same situation; I keep getting "Oops, the token limit exceeded. Try to shorten your prompt or start a new conversation."
The problem is that it keeps consuming premium requests and doing nothing for them except throwing that error.
This happens on both standard and premium models.
Here's the log:
I WANT MUCH MORE
Holy shit
Same here. Unfortunately this makes it unusable most of the time, so I resort to VS Code instead. But please fix this, as it's a much nicer way of working with Xcode projects!
Any news on this?
> Any news on this?

Waiting for the server team to give more explanation on this error.
> I find myself in the same situation; I keep getting "Oops, the token limit exceeded. Try to shorten your prompt or start a new conversation."
> The problem is that it keeps consuming premium requests and doing nothing for them except throwing that error.
> This happens on both standard and premium models.
> Here's the log:
@paulwiderman This isn’t a bug but expected behavior. It typically means your current chat history has grown too long and exceeded the token limit. Opening a new conversation should fix the problem.
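To make the "history grew too long" point concrete, here's a rough sketch of why every extra turn pushes you toward the limit until a new conversation resets it. The 4-chars-per-token heuristic and the budget number are assumptions for illustration, not the client's real logic:

```swift
import Foundation

struct ChatTurn {
    let role: String   // "user" or "assistant"
    let text: String
}

// Very rough heuristic: ~4 characters per token. The real tokenizer differs;
// this only illustrates how the history total creeps upward.
func estimatedTokens(_ text: String) -> Int {
    max(1, text.count / 4)
}

// Assumed context budget; the actual per-model limit isn't documented here.
let contextBudget = 8_000

// Every request resends the whole history plus the new prompt, so the total
// only grows until the conversation is restarted.
func requestFits(history: [ChatTurn], nextPrompt: String) -> Bool {
    let historyTokens = history.reduce(0) { $0 + estimatedTokens($1.text) }
    return historyTokens + estimatedTokens(nextPrompt) <= contextBudget
}
```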
@testforstephen I don't know if it's a bug, but it's certainly unexpected and undesired behavior. It also often occurs with quite new conversations. OTOH, sometimes conversations go on for quite a while before it occurs. It doesn't occur with other tools like opencode or Copilot in Visual Studio Code. The situation is bad enough that I actually use Visual Studio Code instead of Xcode for some Swift code development just for the better AI Agent experience.
@vernonstinebaker What I'm talking about is "Oops, the token limit exceeded. Try to shorten your prompt or start a new conversation." That is a different issue from "request is too large".
@testforstephen If it's expected behaviour, then why is it only happening in the Xcode implementation of Copilot? opencode and VS Code both work perfectly with chats that span weeks of project directives.
The problem is that it consumes premium requests when the output is an error. It shouldn't do that; instead it should say that a new chat is required to continue the work, or something of the like.
> The problem is that it consumes premium requests when the output is an error. It shouldn't do that; instead it should say that a new chat is required to continue the work, or something of the like.

That's not even necessary; just write "continue" until Copilot is finished...
> @testforstephen If it's expected behaviour, then why is it only happening in the Xcode implementation of Copilot? opencode and VS Code both work perfectly with chats that span weeks of project directives.
>
> The problem is that it consumes premium requests when the output is an error. It shouldn't do that; instead it should say that a new chat is required to continue the work, or something of the like.
@paulwiderman VS Code Copilot detects when the chat history's tokens reach a budget, summarizes the history, and then uses that summary as context. This token-usage optimization is still a work in progress in Xcode and not available yet.
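For anyone curious what that looks like in practice, here's a hedged sketch of the summarize-then-keep-recent-turns pattern described above. `summarize` is just a placeholder for a model call, and the budget and turn counts are made-up numbers, not what VS Code actually uses:

```swift
import Foundation

struct Turn {
    let role: String
    let text: String
}

// Rough ~4 chars/token estimate, for illustration only.
func estimatedTokens(_ text: String) -> Int { max(1, text.count / 4) }

// Placeholder for asking the model to compress older turns into a short summary.
func summarize(_ turns: [Turn]) -> Turn {
    let joined = turns.map { "\($0.role): \($0.text)" }.joined(separator: "\n")
    return Turn(role: "system", text: "Summary of earlier conversation: \(joined.prefix(500))")
}

// When the history exceeds the budget, replace everything except the most
// recent turns with a single summary turn, so later requests stay under the limit.
func compactIfNeeded(_ history: [Turn], budget: Int = 6_000, keepRecent: Int = 4) -> [Turn] {
    let total = history.reduce(0) { $0 + estimatedTokens($1.text) }
    guard total > budget, history.count > keepRecent else { return history }
    let older = Array(history.dropLast(keepRecent))
    let recent = Array(history.suffix(keepRecent))
    return [summarize(older)] + recent
}
```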