Gemini models in agent mode giving error 400 invalid argument
Before submitting your bug report
- [x] I believe this is a bug. I'll try to join the Continue Discord for questions
- [x] I'm not able to find an open issue that reports the same bug
- [x] I've seen the troubleshooting guide on the Continue Docs
Relevant environment info
- OS: Windows 11
- Continue version: 1.1.45 - 1.1.47 (.vsix from GitHub main)
- IDE version: 1.100.2
- Model: Gemini 2.0 Flash, Gemini 2.5 Flash 05-20, Gemini 2.5 Pro Preview; the issue persists across the aistudio/vertex provider and the openrouter provider
- config:
```yaml
# A name and version for your configuration
name: shamanic-config
version: 0.0.1
schema: v1

openrouter_defaults: &openrouter_defaults
  provider: openrouter
  apiKey: ${{ secrets.OPENROUTER_API_KEY }}

rules:
  - You are an expert software developer. You give helpful and concise responses.

models:
  - name: OpenRouter LLaMA 70 8B
    provider: openrouter
    model: meta-llama/llama-3-70b-instruct
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply] # 'models' from JSON get the chat/edit/apply roles by default
  - name: Claude 3.5 Sonnet
    provider: openrouter
    model: anthropic/claude-3.5-sonnet-latest
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: Anthropic Claude 3.5 Sonnet Beta
    provider: openrouter
    model: anthropic/claude-3.5-sonnet:beta
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: DeepSeek Chat
    provider: openrouter
    model: deepseek/deepseek-chat
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: DeepSeek R1 0528
    provider: openrouter
    model: deepseek/deepseek-r1-0528
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: Qwen 2.5 Coder 32B Instruct
    provider: openrouter
    model: qwen/qwen-2.5-coder-32b-instruct
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: MistralAI Codestral 2501
    provider: openrouter
    model: mistralai/codestral-2501
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: Gemini 2.5 Pro Preview 05-06
    provider: openrouter
    model: google/gemini-2.5-pro-preview-05-06
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: Gemini 2.5 Pro Preview
    provider: openrouter
    model: google/gemini-2.5-pro-preview
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: Gemini 2.0 Flash
    provider: gemini
    model: gemini-2.0-flash
    apiKey: ${{ secrets.GEMINI_API_KEY }}
    contextLength: 1000000
    roles: [chat, edit, apply]
  - name: Gemini 2.5 Flash 05-20
    provider: gemini
    model: gemini-2.5-flash-05-20
    apiKey: ${{ secrets.GEMINI_API_KEY }}
    contextLength: 1000000
    roles: [chat, edit, apply]
  # Your 'tabAutocompleteModel' is now here with the 'autocomplete' role
  - name: Groq — Qwen-qwq-32b (Autocomplete-Testing)
    apiBase: https://api.groq.com/openai/v1/
    apiVersion: "1"
    provider: groq
    model: qwen/qwen3-32b
    apiKey: ${{ secrets.GROQ_API_KEY }}
  # Your 'embeddingsProvider' is now here with the 'embed' role
  - name: Ollama Embeddings
    provider: ollama
    model: mxbai-embed-large:latest
    apiBase: http://localhost:11434/v1
    roles: [embed]

prompts:
  - name: test
    description: Write unit tests for highlighted code
    prompt: |
      {{{ input }}}
      Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.

context:
  - provider: code
  - provider: docs
  - provider: diff
  - provider: terminal
  - provider: problems
  - provider: folder
  - provider: codebase

mcpServers:
  - name: context7
    command: context7-mcp
    args: []
  - name: n8n-test
    command: mcp-remote
    args:
      - http://localhost:5678/mcp-test/97624e46-aeee-4886-8682-209667591bc2/sse
  - name: MCP_DOCKER
    command: docker
    args:
      - run
      - -l
      - mcp.client=continue
      - --rm
      - -i
      - alpine/socat
      - STDIO
      - TCP:host.docker.internal:8811
```
Description
The models work fine in chat mode, but calling a Gemini model from agent mode gives this error via the aistudio/vertex provider:
"[{\n \"error\": {\n \"code\": 400,\n \"message\": \"* GenerateContentRequest.tools[0].function_declarations[10].parameters.required[2]: property is not defined\\n\",\n \"status\": \"INVALID_ARGUMENT\"\n }\n}\n]"
and this error when called via OpenRouter:
400 Provider returned error
The issue persists across the latest build and previously working builds. The error first appeared after the global outage on 12th June, so I believe it may be related to a change on Google's end.
I have not had the chance to test this on another system, so there is a chance the issue is local to my machine.
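For context, the Vertex message reads like a JSON Schema validation failure: the Gemini API rejects a function declaration whose `parameters.required` array names a property that is not defined under `parameters.properties`. A minimal sketch of the shape it complains about, using hypothetical names rather than Continue's actual tool definitions:

```typescript
// Hypothetical function declaration of the shape the Gemini API rejects.
// "required" lists three entries, but the third ("path") is never declared
// under "properties", which matches "required[2]: property is not defined".
const badFunctionDeclaration = {
  name: "read_file", // hypothetical tool name
  description: "Read a file from the workspace",
  parameters: {
    type: "object",
    properties: {
      uri: { type: "string" },
      encoding: { type: "string" },
      // note: no "path" property is declared here
    },
    required: ["uri", "encoding", "path"], // required[2] = "path" is undefined
  },
};
```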
To reproduce
https://github.com/user-attachments/assets/7a2ebdea-5d38-4d60-8935-8e5181bc9210
- Select a Google Gemini model from one of the listed providers
- Select agent mode and send a message
- Observe the error message
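If it helps triage, the same validation failure should be reproducible outside Continue by posting a request with such a tool schema directly to the Gemini REST API. A minimal sketch, assuming the public v1beta generateContent endpoint and a hypothetical tool declaration (this is not Continue's actual request):

```typescript
// Sketch: POST a generateContent request whose tool schema has a "required"
// entry with no matching property; expect HTTP 400 INVALID_ARGUMENT as above.
async function main() {
  const apiKey = process.env.GEMINI_API_KEY; // assumes a valid AI Studio key
  const url = `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${apiKey}`;

  const body = {
    contents: [{ role: "user", parts: [{ text: "hello" }] }],
    tools: [
      {
        functionDeclarations: [
          {
            name: "read_file", // hypothetical tool, not Continue's
            description: "Read a file from the workspace",
            parameters: {
              type: "object",
              properties: { uri: { type: "string" } },
              required: ["uri", "path"], // "path" is never defined above
            },
          },
        ],
      },
    ],
  };

  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  console.log(res.status, await res.text()); // expect 400 INVALID_ARGUMENT
}

main();
```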
Log output
```
[Extension Host] Error: 400 Provider returned error
    at Function.generate (c:\Users\Lucas\.vscode\extensions\continue.continue-1.1.47\out\extension.js:109455:18)
    at OpenAI.makeStatusError (c:\Users\Lucas\.vscode\extensions\continue.continue-1.1.47\out\extension.js:110366:25)
    at OpenAI.makeRequest (c:\Users\Lucas\.vscode\extensions\continue.continue-1.1.47\out\extension.js:110410:29)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at OpenAIApi.chatCompletionStream (c:\Users\Lucas\.vscode\extensions\continue.continue-1.1.47\out\extension.js:114472:26)
    at OpenRouter.streamChat (c:\Users\Lucas\.vscode\extensions\continue.continue-1.1.47\out\extension.js:140909:34)
    at llmStreamChat (c:\Users\Lucas\.vscode\extensions\continue.continue-1.1.47\out\extension.js:650110:17)
    at ed.handleMessage [as value] (c:\Users\Lucas\.vscode\extensions\continue.continue-1.1.47\out\extension.js:668163:29)
log.ts:460 ERR [Extension Host] Error handling webview message: {
  "msg": {
    "messageId": "8d07eb5c-929f-436d-881f-655e853b0faa",
    "messageType": "llm/streamChat",
    "data": {
      "completionOptions": {
        "tools": [
```