Execution failed: terminated multiple times
Describe the bug
Execution failed and terminated multiple times in a session.
Affected version
0.0.351
Steps to reproduce the behavior
Make prompts
Expected behavior
Agent should continue iterating on task
Additional context
macOS, M4 Pro
Can you share any more details about what happened leading up to this? Any screenshots of the CLI leading up to the error would help. We might need session debug logs to figure out what happened here. There is a flag you can use to set the log level to debug; the logs can then be found in ~/copilot/logs.
I am experiencing the same issue as well. Here is my log output:
2025-10-26T03:56:35.855Z [END-GROUP]
2025-10-26T03:56:35.855Z [DEBUG] Tool calls count: 2
2025-10-26T03:56:35.855Z [DEBUG] Running tool calls in parallel
2025-10-26T03:56:36.238Z [DEBUG] InitiatorHeaderProcessor: Setting X-Initiator to 'agent'
2025-10-26T03:56:36.238Z [START-GROUP] Sending request to the AI model
2025-10-26T03:56:39.383Z [INFO] [log_34296b, x-request-id: "00000-0cf4a990-6d6d-4211-92b1-b5b9104d0757"] post https://api.individual.githubcopilot.com/chat/completions succeeded with status 200 in 3145ms
2025-10-26T03:57:15.071Z [ERROR] error
2025-10-26T03:57:15.071Z [ERROR] { "cause": {} }
2025-10-26T03:57:15.071Z [END-GROUP]
2025-10-26T03:57:15.072Z [ERROR] Command threw error: terminated Error: terminated
    at t.FTt (file:///Users/xxxx/.nvm/versions/node/v22.14.0/lib/node_modules/@github/copilot/index.js:1888:4512)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
Additionally, I have occasionally seen similar interruptions when using the Copilot extensions in VS Code and IntelliJ IDEA (also with Claude as the model). By contrast, when I run the same tasks in Cursor or the Claude Code CLI, I haven't encountered this issue so far.
Initially I thought this might simply be due to some limitations of the IDE integrations compared with Cursor or the CLI. However, since I also noticed the same behavior in Copilot CLI, it makes me wonder if it could perhaps be related to how the context is managed, though I’m not certain.
Can you share any more details about what happened leading up to this? Any screenshots of the CLI leading up to the error would help. We might need session debug logs to figure out what happened here. There is a flag you can use to set the log level to debug; the logs can then be found in ~/copilot/logs.
I'll make sure to grab it next time
2025-11-02T08:01:34.869Z [DEBUG] InitiatorHeaderProcessor: Setting X-Initiator to 'user'
2025-11-02T08:01:34.869Z [START-GROUP] Sending request to the AI model
2025-11-02T08:01:40.451Z [INFO] [log_85cd17, x-request-id: "00000-050364fb-270f-487a-ba69-2ed0ecec9ba6"] post https://api.individual.githubcopilot.com/chat/completions succeeded with status 200 in 5580ms
2025-11-02T08:02:35.370Z [ERROR] error
2025-11-02T08:02:35.370Z [ERROR] { "cause": {} }
2025-11-02T08:02:35.370Z [END-GROUP]
2025-11-02T08:02:35.370Z [ERROR] Command threw error: terminated Error: terminated
    at t.CTt (file:///Users/xxx/.nvm/versions/node/v22.14.0/lib/node_modules/@github/copilot/index.js:1888:4512)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
Any updates on this issue? I’m encountering it frequently and it’s affecting my workflow. Is there an ETA or a recommended workaround?
I was trying out the Copilot CLI for the first time and quickly ran into the same thing. I'm using gpt-5-mini as my model; my logs are essentially identical to the ones above, just with my own paths.
It happens when the context window fills up and I get a "Truncated" notice.
I think there are two things that could be addressed here for better reliability. One would be some way of determining up front whether the current task is going to overflow the available context window. The "dumb" way of doing it would be to count the characters of each file the agent intends to edit, estimate the total size, and then, if it won't fit, either fail up front with a clear reason, or better, handle each file one at a time and purge that file from the context window when it's done.
I can't be certain this is actually the cause, but from what I've observed it looks that way to me.
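To make the idea concrete, here is a minimal sketch of that pre-flight check. Everything in it is hypothetical: the 4-characters-per-token ratio, the `planContext` function, and the batching strategy are illustrative assumptions, not how Copilot CLI actually works.

```typescript
// Hypothetical pre-flight context check: given the character counts of the
// files a task intends to edit, decide whether everything fits in one
// request, or split the files into batches that are each handled (and then
// purged from the context window) one at a time.

interface PlanResult {
  fits: boolean;                // whole task fits in a single request
  perFileBatches: string[][];   // otherwise: file groups, one batch at a time
}

// Rough heuristic (an assumption): ~4 characters per token for English/code.
function estimateTokens(chars: number): number {
  return Math.ceil(chars / 4);
}

function planContext(files: Map<string, number>, contextLimit: number): PlanResult {
  const total = [...files.values()]
    .reduce((sum, chars) => sum + estimateTokens(chars), 0);
  if (total <= contextLimit) {
    return { fits: true, perFileBatches: [[...files.keys()]] };
  }
  // Fallback: greedily group files so each batch fits on its own.
  // (A file larger than the limit still gets its own batch; a real
  // implementation would have to chunk it further.)
  const batches: string[][] = [];
  let batch: string[] = [];
  let used = 0;
  for (const [name, chars] of files) {
    const tokens = estimateTokens(chars);
    if (used + tokens > contextLimit && batch.length > 0) {
      batches.push(batch); // flush current batch, purge its context
      batch = [];
      used = 0;
    }
    batch.push(name);
    used += tokens;
  }
  if (batch.length > 0) batches.push(batch);
  return { fits: false, perFileBatches: batches };
}
```

With a 2500-token budget and two files of 4000 and 8000 characters (~1000 and ~2000 tokens), the total overflows and the planner falls back to two single-file batches instead of failing mid-session.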