gpt4-pdf-chatbot-langchain
                        Pinecone error
I keep getting this error:
error [Error: PineconeClient: Error calling query: Error: PineconeClient: Error calling queryRaw: FetchError: The request failed and the interceptors did not return an alternative response]
when I try to ask a question. Ingestion also doesn't work. I've double-checked my env settings as well as my Pinecone config. Any idea what this could be?
https://github.com/hwchase17/langchainjs/pull/413
I got a similar error; in my case it was because I hadn't defined the index name. More info from mayo: there are several potential culprits behind this. I cover them in the discussions section: https://github.com/mayooear/gpt4-pdf-chatbot-langchain/discussions/6
Here are the potential causes I posted below. Try them out and let me know if you still encounter issues.
Troubleshoot the following:
1. In the config folder, change `PINECONE_INDEX_NAME` to match your index name in Pinecone.
2. Upgrade Node to the latest version. You may be using a version of Node that doesn't support `fetch` natively.
3. Make sure Dimensions in the Pinecone dashboard is set to 1536 (the dimensionality of OpenAI embeddings).
4. Switch your Environment in Pinecone to us-east1-gcp if the other environment is causing issues.
5. Ensure you have a .env file in the root that contains valid API keys from the Pinecone dashboard.
6. Pinecone has limits on each upsert operation; you can read them here, and the key ones are below. If you are uploading massive PDF files, you just need to write a loop so that upserts don't exceed 100 chunks per request. I will make a PR to the LangChain repo to integrate this.

The limits: max size for an upsert request is 2 MB; the recommended upsert limit is 100 vectors per request; max metadata size per vector is 40 KB. Pinecone indexes of users on the Starter (free) plan are deleted after 7 days of inactivity. To prevent this, send an API request to Pinecone to reset the counter.
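The batching loop described in point 6 can be sketched with a small helper (the `chunkArray` name is illustrative; the batch size of 100 follows Pinecone's recommended per-request vector limit):

```typescript
// Split an array into batches of at most `size` items, so each Pinecone
// upsert stays within the recommended 100-vectors-per-request limit.
function chunkArray<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Usage sketch (assuming the pinecone-client version pinned by this repo):
// for (const batch of chunkArray(vectors, 100)) {
//   await index.upsert({ upsertRequest: { vectors: batch, namespace } });
// }
```

Batching also keeps each request comfortably under the 2 MB size cap for typical embedding payloads.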
https://github.com/mayooear/gpt4-pdf-chatbot-langchain/issues/4
Did you manage to make that PR for the 100 chunks per request? I applied all the troubleshooting steps and I still get the same Pinecone error:
error [Error: PineconeClient: Error calling query: Error: PineconeClient: Error calling queryRaw: FetchError: The request failed and the interceptors did not return an alternative response]
The PR is complete, but do not upgrade the langchain or pinecone versions above those in the current repo, as there are breaking changes. I've already added a custom chunking function to this repo recently, so you shouldn't face those errors if you update your repo with it.
How can we query and find answers using a namespace?
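For the namespace question: in the langchainjs versions this repo pins, you can scope queries by passing a `namespace` when rebuilding the vector store. A hedged sketch, assuming the repo's `utils/pinecone-client` and `config/pinecone` modules; the namespace value and function name here are hypothetical:

```typescript
import { OpenAIEmbeddings } from 'langchain/embeddings';
import { PineconeStore } from 'langchain/vectorstores';
import { pinecone } from '@/utils/pinecone-client';
import { PINECONE_INDEX_NAME } from '@/config/pinecone';

// Illustrative helper: search only the vectors upserted under `namespace`.
async function queryWithNamespace(question: string, namespace: string) {
  const index = pinecone.Index(PINECONE_INDEX_NAME);

  // Rebuild the store scoped to one namespace; vectors in other
  // namespaces of the same index are ignored by the search.
  const vectorStore = await PineconeStore.fromExistingIndex(
    new OpenAIEmbeddings(),
    { pineconeIndex: index, textKey: 'text', namespace },
  );

  // Return the top 4 most similar chunks for the question.
  return vectorStore.similaritySearch(question, 4);
}
```

The same `namespace` must, of course, have been used at ingestion time when the PDF chunks were upserted.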