Google Gemini Embeddings error (rate limit?)
Bug Description
Inserting a large number of chunks (250+) into a Vector Store (Qdrant in my case) using Google Gemini Embeddings models/text-embedding-004 results in a 400 error at some point.
When it fails, the embeddings model has returned a vector of size 0 instead of 768.
My hunch is that this is due to hitting the Gemini API rate limit of 1500 RPM, but I can't confirm this, as the n8n error message doesn't mention it.
Running the same node with an OpenAI or Ollama embedding model works fine.
n8n | 2024-11-23T07:26:23.206Z | error | 400 Bad Request: Wrong input: Vector dimension error: expected dim: 768, got 0
n8n | Error: 400 Bad Request: Wrong input: Vector dimension error: expected dim: 768, got 0
n8n | at QdrantVectorStore.addVectors (/usr/local/lib/node_modules/n8n/node_modules/@langchain/qdrant/dist/vectorstores.cjs:119:27)
n8n | at processTicksAndRejections (node:internal/process/task_queues:95:5)
n8n | at QdrantVectorStore.addDocuments (/usr/local/lib/node_modules/n8n/node_modules/@langchain/qdrant/dist/vectorstores.cjs:86:9)
n8n | at Function.fromDocuments (/usr/local/lib/node_modules/n8n/node_modules/@langchain/qdrant/dist/vectorstores.cjs:248:13)
n8n | at Object.populateVectorStore (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/vector_store/VectorStoreQdrant/VectorStoreQdrant.node.js:103:9)
n8n | at Object.execute (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/vector_store/shared/createVectorStoreNode.js:206:21)
n8n | at Workflow.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:722:19)
n8n | at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:704:51
n8n | at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:1134:20
n8n | {"file":"LoggerProxy.js","function":"exports.error"}
n8n | 2024-11-23T07:26:23.207Z | debug | Running node "Insert Docs" finished with error {"node":"Insert Docs","workflowId":"GKV924P66vvI83cJ","file":"LoggerProxy.js","function":"exports.debug"}
n8n | 2024-11-23T07:26:23.209Z | debug | Executing hook on node "Insert Docs" (hookFunctionsPush) {"executionId":"2241","pushRef":"wygobx1dkj","workflowId":"GKV924P66vvI83cJ","file":"workflow-execute-additional-data.js","function":"nodeExecuteAfter"}
n8n | 2024-11-23T07:26:23.209Z | debug | Send data of type "nodeExecuteAfter" to editor-UI {"dataType":"nodeExecuteAfter","pushRefs":"wygobx1dkj","file":"abstract.push.js","function":"sendTo"}
n8n | 2024-11-23T07:26:23.210Z | debug | Workflow execution finished with error {"error":{"message":"400 Bad Request: Wrong input: Vector dimension error: expected dim: 768, got 0","stack":"Error: 400 Bad Request: Wrong input: Vector dimension error: expected dim: 768, got 0\n at QdrantVectorStore.addVectors (/usr/local/lib/node_modules/n8n/node_modules/@langchain/qdrant/dist/vectorstores.cjs:119:27)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at QdrantVectorStore.addDocuments (/usr/local/lib/node_modules/n8n/node_modules/@langchain/qdrant/dist/vectorstores.cjs:86:9)\n at Function.fromDocuments (/usr/local/lib/node_modules/n8n/node_modules/@langchain/qdrant/dist/vectorstores.cjs:248:13)\n at Object.populateVectorStore (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/vector_store/VectorStoreQdrant/VectorStoreQdrant.node.js:103:9)\n at Object.execute (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/vector_store/shared/createVectorStoreNode.js:206:21)\n at Workflow.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:722:19)\n at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:704:51\n at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:1134:20"},"workflowId":"GKV924P66vvI83cJ","file":"LoggerProxy.js","function":"exports.debug"}
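The dimension mismatch only surfaces once Qdrant rejects the upsert. A defensive check on the embedding output would fail earlier and with a clearer message. A minimal sketch (a hypothetical helper, not actual n8n code):

```typescript
// Hypothetical guard: fail fast if any embedding came back empty, instead of
// letting a 0-length vector reach the vector store.
const EXPECTED_DIM = 768; // models/text-embedding-004 returns 768-dim vectors

function validateEmbeddings(vectors: number[][], expectedDim: number): void {
  vectors.forEach((vec, i) => {
    if (vec.length !== expectedDim) {
      throw new Error(
        `Embedding ${i} has dimension ${vec.length}, expected ${expectedDim}; ` +
          "the embeddings API likely returned an empty result (rate limit?)",
      );
    }
  });
}
```

Calling `validateEmbeddings(vectors, EXPECTED_DIM)` right after the embeddings call would point the error at the model rather than at Qdrant.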
To Reproduce
- Have a file with many (small) chunks
- Define a vector store node
- Set the embeddings model to Google Gemini Embeddings models/text-embedding-004
- Run the workflow
Expected behavior
- Embedding many chunks using Google shouldn't result in an error.
- Alternatively, there should be a clear error message that the rate limit was hit (or whatever reason caused the model to return a vector of size 0).
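Until the node surfaces the underlying failure, retrying the embeddings call with exponential backoff is one way to ride out a transient rate limit. A sketch under the assumption that the failure is retryable; the helper and the error check are hypothetical, not n8n's implementation:

```typescript
// Hypothetical retry wrapper: re-run a rate-limited call with exponential
// backoff instead of passing an empty result downstream.
async function withRetry<T>(
  fn: () => Promise<T>,
  isRetryable: (e: unknown) => boolean,
  maxAttempts = 5,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (e) {
      if (attempt >= maxAttempts || !isRetryable(e)) throw e;
      // Delay doubles each attempt: 1s, 2s, 4s, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

Wrapping the embeddings call, e.g. `withRetry(() => embed(batch), (e) => String(e).includes("429"))`, would retry only on rate-limit-shaped errors.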
Operating System
Ubuntu Linux 24.04
n8n Version
1.68.0
Node.js Version
20.18.0
Database
SQLite (default)
Execution mode
main (default)
Hey @flatpackfan,
We have created an internal ticket to look into this which we will be tracking as "GHC-483"
I am running into exactly the same error: the node reports success with an empty response, which then causes an error in the vector store (pgvector in my case).
It does not matter which Google embedding model I use.
So far, the figure of 250 in my executions matches the new limit of 250: https://cloud.google.com/vertex-ai/generative-ai/docs/quotas
When I try to embed a large document, it gets split into many chunks, often hitting Gemini’s rate limit. Would it be possible to add a batching option similar to what the HTTP Request node offers?
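Pending a built-in option, the batching described above can be sketched as follows; the `embed` callback and the batch-size/delay defaults are illustrative assumptions, not Gemini's actual quota or n8n's API:

```typescript
// Hypothetical batching helper: embed chunks in groups of `batchSize`,
// pausing `delayMs` between groups to stay under a requests-per-minute quota.
async function embedInBatches(
  chunks: string[],
  embed: (batch: string[]) => Promise<number[][]>,
  batchSize = 100,
  delayMs = 5000,
): Promise<number[][]> {
  const out: number[][] = [];
  for (let i = 0; i < chunks.length; i += batchSize) {
    const batch = chunks.slice(i, i + batchSize);
    out.push(...(await embed(batch)));
    // Throttle between batches, but not after the last one.
    if (i + batchSize < chunks.length) {
      await new Promise((r) => setTimeout(r, delayMs));
    }
  }
  return out;
}
```

This mirrors the batching/interval options the HTTP Request node already exposes.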
Can we have an option to choose the vector dimension, like the OpenAI node got in https://github.com/n8n-io/n8n/pull/11773?
Got the same error:
Error: 400 Bad Request: Wrong input: Vector dimension error: expected dim: 3072, got 0
at QdrantVectorStore.addVectors (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/@langchain/qdrant/dist/vectorstores.cjs:119:27)
at processTicksAndRejections (node:internal/process/task_queues:105:5)
at QdrantVectorStore.addDocuments (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/@langchain/qdrant/dist/vectorstores.cjs:86:9)
at Function.fromDocuments (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected][email protected][email protected]/node_modules/@langchain/qdrant/dist/vectorstores.cjs:281:13)
at Object.populateVectorStore (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_90fd06b925ebd5b6cf3e2451e17cc4b6/node_modules/@n8n/n8n-nodes-langchain/nodes/vector_store/VectorStoreQdrant/VectorStoreQdrant.node.ts:139:3)
at handleInsertOperation (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_90fd06b925ebd5b6cf3e2451e17cc4b6/node_modules/@n8n/n8n-nodes-langchain/nodes/vector_store/shared/createVectorStoreNode/operations/insertOperation.ts:73:4)
at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_90fd06b925ebd5b6cf3e2451e17cc4b6/node_modules/@n8n/n8n-nodes-langchain/nodes/vector_store/shared/createVectorStoreNode/createVectorStoreNode.ts:286:24)
at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@[email protected][email protected][email protected]_/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1187:9)
at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@[email protected][email protected][email protected]_/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1536:27
at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@[email protected][email protected][email protected]_/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:2100:11
Hi,
Thank you for reaching out! It looks like this issue might be more of a support request rather than a bug report. For general support, troubleshooting, or questions, we recommend raising your request on our community support forum.
The forum is a great place to connect with both our team and other community members who can help out. You might even find answers to similar questions there!
For issues related to n8n cloud billing or instances going offline please email [email protected] as we are not able to access your cloud account or workflows from these issues.
We’ll go ahead and close this issue to keep the tracker focused on bug reports. Feel free to join the conversation on the forum, and let us know if you have any further concerns or questions.
Thanks again for being part of the n8n community!
Best regards, n8n Team