Bug: 400 Bad Request after disapproving an action
Running: kubectl port-forward -n rift-jonatanzafar59 pods/kafka-ui-7d555c68bf-b8r9x 8080:8080
Do you want to proceed ?
1) Yes
2) Yes, and don't ask me again
3) No
Enter your choice (number): 3
Operation was skipped.
E0506 03:28:50.103113 61569 openai.go:249] OpenAI ChatCompletion API error: POST "https://api.openai.com/v1/chat/completions": 400 Bad Request {
"message": "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_a7VHTIPB6SdY4RFug8Gut0aa",
"type": "invalid_request_error",
"param": "messages.[15].role",
"code": null
}
Error: simulated streaming failed during non-streaming call: OpenAI chat completion failed: POST "https://api.openai.com/v1/chat/completions": 400 Bad Request {
"message": "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_a7VHTIPB6SdY4RFug8Gut0aa",
"type": "invalid_request_error",
"param": "messages.[15].role",
"code": null
}
>>> which local port kafka-ui should have? do you know?
E0506 03:29:11.939845 61569 openai.go:249] OpenAI ChatCompletion API error: POST "https://api.openai.com/v1/chat/completions": 400 Bad Request {
"message": "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_a7VHTIPB6SdY4RFug8Gut0aa",
"type": "invalid_request_error",
"param": "messages.[15].role",
"code": null
}
Error: simulated streaming failed during non-streaming call: OpenAI chat completion failed: POST "https://api.openai.com/v1/chat/completions": 400 Bad Request {
"message": "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_a7VHTIPB6SdY4RFug8Gut0aa",
"type": "invalid_request_error",
"param": "messages.[15].role",
"code": null
}
Model: 4o-mini (OpenAI). I need to restart the chat to get out of the error loop.
@jonatanzafar59 thanks for reporting this. Need your help in reproducing this.
What is the scenario you were trying?
Looping in @tuannvm @hakman, who helped with adding OpenAI support.
My guess is that when the user denies permission to run a tool, we are not adding a tool_call response corresponding to that denial, and the resulting mismatch in the chat history makes subsequent requests fail.
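To make that concrete, here is a minimal sketch (illustrative only, not kubectl-ai code) of the tool message the history would need so the assistant's tool_call_id is still answered after a denial:

# Illustrative only; the tool_call_id is copied from the log above.
denial_message = {
    "role": "tool",
    "tool_call_id": "call_a7VHTIPB6SdY4RFug8Gut0aa",
    "content": "User denied permission; the command was not executed.",
}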
@jonatanzafar59 if you had gone ahead with option 1), kubectl-ai would have rejected the port-forward request and asked the LLM to try another way. Today we reject kubectl edit and kubectl port-forward requests because they either require interaction or they block (kubectl logs -f is another candidate not yet added to the list).
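As a rough illustration only (hypothetical names, not the actual kubectl-ai code), that rejection amounts to a small filter sitting in front of the approval prompt:

# Hypothetical sketch of rejecting commands that block or require interaction.
BLOCKING_OR_INTERACTIVE = ("kubectl edit", "kubectl port-forward")  # "kubectl logs -f" is a likely future addition

def should_reject(command: str) -> bool:
    # Matching commands are sent back to the LLM with a request to try another way.
    return any(command.startswith(prefix) for prefix in BLOCKING_OR_INTERACTIVE)

print(should_reject("kubectl port-forward -n rift-jonatanzafar59 pods/kafka-ui-7d555c68bf-b8r9x 8080:8080"))  # True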
TL;DR: go ahead with the "Yes" option for now if you want to give it another try.
We will fix the main issue soon, and yes, I am still interested in the scenario you tried that led the LLM to suggest this command.
Thank you.
After digging a bit into your code, I think the long context I gave it about which tools it is allowed to run may have confused it a bit? Not sure. I added a lot of private context, but here is an example:
- Never run tools/scripts other than:
  • Allowed commands
    * ls (for directory listings)
    * grep (for searching file contents)
    * cat (to dump a file)
    * kubectl (for inspecting pods, logs, etc.)
    * ps, lsof (process/port checks)
    * echo, cd (simple shell utilities)
    * Any invocation of the port-forward script at: bash tools/someproj/port-forward-display.sh
  • What to avoid
    * No find, no arbitrary scripts, no write or delete operations. Nothing!!! other than the allowed commands.
Nothing special otherwise; I had just been chatting with it for a few messages before.
FINAL SUMMARY:
Bug Fix: 400 Bad Request After Disapproving an Action
Based on the reported issue #144, I can provide a solution for the 400 Bad Request error that occurs when a user disapproves a tool action (selects "No").
Root Cause
The error occurs because when a user denies permission to run a tool (like kubectl port-forward), the system doesn't properly add a tool_call response for the denial. The OpenAI API expects responses to all tool_call_ids, resulting in the error:
"An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'."
Proposed Fix
- Add explicit tool call response handling:
def handle_tool_call_denial(tool_call_id, reason="User denied permission"):
    return {
        "role": "tool",
        "tool_call_id": tool_call_id,
        "content": f"Permission denied: {reason}",
    }
- Update the conversation state management:
def process_chat_completion(messages, tool_calls):
    # If tool calls exist
    if tool_calls:
        for tool_call in tool_calls:
            # Check if permission was granted
            if not user_granted_permission(tool_call):
                # Add an explicit denial response
                messages.append(handle_tool_call_denial(tool_call.id))
    # Continue with the chat completion
    return send_to_openai_api(messages)
- Improve error handling:
def handle_openai_tool_call_error(error, messages):
    # Specific handling for a tool call response mismatch
    if "tool_calls" in error.message:
        # Extract the missing tool_call_ids from the error and respond to each of them
        missing_tool_call_ids = extract_missing_tool_call_ids(error.message)
        for tool_call_id in missing_tool_call_ids:
            messages.append(
                handle_tool_call_denial(tool_call_id, "Automatic denial to prevent API error")
            )
        # Retry the chat completion with the repaired history
        return retry_chat_completion(messages)
These changes will ensure that when a user denies a tool action, proper responses are still sent to the OpenAI API, preventing the 400 Bad Request error.
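A rough usage sketch of how the helpers above might fit together (previous_response and messages stand for the prior OpenAI Python client response and the running chat history; everything here is illustrative):

# Illustrative only: helper names as defined above.
assistant_msg = previous_response.choices[0].message
if assistant_msg.tool_calls:
    # Any call the user rejected gets an explicit denial appended to the history,
    # so no tool_call_id is left unanswered when the next request is sent.
    next_response = process_chat_completion(messages, assistant_msg.tool_calls)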
I had the same problem, but in my case, the app was running several commands to check the status of an application, and then it gave an error.
E0521 20:58:42.969919 51203 openai.go:456] Error in OpenAI streaming: POST "https://api.openai.com/v1/chat/completions": 400 Bad Request {
"message": "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_",
"type": "invalid_request_error",
"param": "messages.[3].role",
"code": null
}
Thanks @dorancemc. Anything that can help reproduce this would be great, for example the steps you ran. If it is okay, it would also be great if you could share the trace and log files from /tmp/kubectl*
/cc @tuannvm @zvdy
Whoops, did not think about this. I had a few nil checks in the impl, but I need to also change it here:
https://github.com/GoogleCloudPlatform/kubectl-ai/blob/a5ea5cbb413e5d3dfaf1c80ac68ebe1fdcfd3261/pkg/agent/conversation.go#L261-L265
will upload the PR today
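For context, a rough Python sketch of the kind of guard being described (the real fix is in the Go file linked above; names here are hypothetical):

def ensure_last_tool_calls_answered(messages):
    # Hypothetical sketch. Assumes the unanswered tool calls sit on the most
    # recent assistant message, which is the state right after a user denial.
    answered = {m["tool_call_id"] for m in messages if m.get("role") == "tool"}
    last_assistant = next((m for m in reversed(messages) if m.get("role") == "assistant"), None)
    for call in ((last_assistant or {}).get("tool_calls") or []):  # None-safe, like the nil checks mentioned above
        if call["id"] not in answered:
            messages.append({
                "role": "tool",
                "tool_call_id": call["id"],
                "content": "Tool call was skipped (the user denied permission).",
            })
    return messages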
@jonatanzafar59 @dorancemc @Shubh10am
This is fixed, feel free to re-open if needed! I made sure to test quite a few scenarios:
- selecting 3 and then re-selecting 3,
- 3 then 1,
- etc.
I mainly tested with OpenAI-based models, but from my experience Qwen and the rest of the OpenAI-compatible LLMs work, and Gemini was tested too!
Have a look at the PR if curious. For the time being, make sure to build from source if you were installing from the latest release.