Network Error in `ConversationChain`
I'm getting the following axios error when calling my ConversationChain
Error
"Error: Network Error\n at createError (webpack-internal:///(api)/./node_modules/langchain/dist/util/axios-fetch-adapter.js:316:19)\n at getResponse (webpack-internal:///(api)/./node_modules/langchain/dist/util/axios-fetch-adapter.js:197:16)\n at async fetchAdapter (webpack-internal:///(api)/./node_modules/langchain/dist/util/axios-fetch-adapter.js:174:18)"
Implementation
// imports (exact entrypoint paths may vary by langchain version)
import { AIChatMessage, HumanChatMessage } from "langchain/schema";
import { BufferMemory, ChatMessageHistory } from "langchain/memory";
import { ChatOpenAI } from "langchain/chat_models";
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  MessagesPlaceholder,
  HumanMessagePromptTemplate,
} from "langchain/prompts";
import { ConversationChain } from "langchain/chains";

// rebuild the chat history from previously stored messages
const history = messages.map(({ agent, message }) =>
  agent === "ai" ? new AIChatMessage(message) : new HumanChatMessage(message)
);
const memory = new BufferMemory({
  memoryKey: "history",
  chatHistory: new ChatMessageHistory(history),
  returnMessages: true,
});
const llm = new ChatOpenAI({
  temperature: 0,
});
const prompt = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate(DEFAULT_PROMPT_TEMPLATE),
  new MessagesPlaceholder("history"),
  HumanMessagePromptTemplate.fromTemplate("{message}"),
]);
const chain = new ConversationChain({
  memory,
  prompt,
  llm,
});
try {
  response.status(200).json({
    success: true,
    data: await chain.call({
      message,
    }),
    agent: "ai",
  });
} catch (error) {
  console.log(error);
  response.status(500).json({ success: false, error });
}
Also tried upgrading to 0.0.52, but I still get the same error.
I had the same problem before. It looks like your network environment can't reach api.openai.com. Try pinging https://api.openai.com; if that fails, you may need a VPN.
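A quick way to check is a minimal standalone script using Node 18's built-in fetch (the /v1/models endpoint here is just an example of an authenticated call):

// check-openai.mjs — run with: node check-openai.mjs
const res = await fetch("https://api.openai.com/v1/models", {
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
});
// a 200 here means api.openai.com is reachable from this machine/runtime
console.log(res.status);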
Also running the same request using ky or cURL generates the correct response:
POST: https://api.openai.com/v1/chat/completions
Data:
{
  "model": "gpt-3.5-turbo",
  "temperature": 0,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0,
  "n": 1,
  "stream": false,
  "messages": [
    {
      "role": "system",
      "content": "\nAssistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n"
    },
    { "role": "user", "content": "Hello" }
  ]
}
@nfcampos any suggestions on this?
Can you tell me more about your environment, @homanp, so that I can try to reproduce this on my end? What version of Node (if you're using this in Node), etc.?
You can run the code locally by cloning this repo: https://github.com/homanp/langchain-ui/
Environment is NextJS, hosted on Vercel.
This happens both locally and on Vercel.
Node version >= 18
I don't see you passing the OPENAI_API_KEY anywhere; might that be the issue? Also, I'd recommend updating to 0.0.52.
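For reference, the key can also be passed explicitly instead of relying on the environment variable being picked up automatically:

const llm = new ChatOpenAI({
  temperature: 0,
  // explicit key; otherwise langchain reads OPENAI_API_KEY from the env
  openAIApiKey: process.env.OPENAI_API_KEY,
});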
Locally I have it in my .env file. Again, it works sometimes and fails sporadically at other times.
I'll update to 0.0.52 and see if that makes any difference.
If it only fails sometimes, then it must be an issue with OpenAI? Their API isn't the most stable.
Might be...
I don't see you passing the OPENAI_API_KEY anywhere; might that be the issue? Also, I'd recommend updating to 0.0.52.
Upgrading to 0.0.52 made it worse; I'm getting Network Errors on all requests now. Perhaps it's something with the axios setup and NextJS?
The raw query works each time using cURL.
You can see the changes here: https://github.com/homanp/langchain-ui/blob/c60f1b14a0560e2f9f01379c8ae9003f8185876c/pages/api/v1/chatbots/%5BchatbotId%5D/index.js
I can also send my request with cURL and even from the Chrome web console, but it doesn't work with a plain Node fetch request. In my case, Node 18 never seems to run fetch through the network proxy (until I use proxychains to force it). By the way, my machine is macOS with an M1 chip. I guess it's some kind of issue with Node 18 fetch or the M1?
If you use a proxy this solution might work for you https://github.com/hwchase17/langchainjs/issues/388#issuecomment-1490421309
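Note that Node 18's built-in fetch (undici) does not honor the HTTP_PROXY/HTTPS_PROXY environment variables on its own. As a sanity check, you can route a request through your proxy with the undici package directly (a sketch, assuming undici is installed):

import { fetch, setGlobalDispatcher, ProxyAgent } from "undici";

// route undici's fetch() calls through the proxy; the built-in
// fetch won't read HTTP_PROXY/HTTPS_PROXY by itself
setGlobalDispatcher(new ProxyAgent(process.env.HTTPS_PROXY ?? ""));
const res = await fetch("https://api.openai.com/v1/models");
console.log(res.status);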
In my case, it works if I pass httpsAgent to baseOptions as below (and set the adapter to null):
import { OpenAIChat } from "langchain/llms"; // import path may vary by langchain version

// `agent` is an https.Agent routed through your proxy, e.g.
// new HttpsProxyAgent(proxyUrl) from the "https-proxy-agent" package
const model = new OpenAIChat(
  {
    temperature,
    openAIApiKey: apiKey,
  },
  {
    basePath,
    baseOptions: {
      httpsAgent: agent,
      adapter: null, // disable the fetch adapter so axios uses its Node http adapter
    },
  }
);
But streaming isn't supported this way.
If you use a proxy this solution might work for you #388 (comment)
This is the best solution for me.
@nfcampos I moved the API route to NextJS route handlers and everything seems to work fine.
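Roughly, it looks like this (a minimal sketch, not the exact code from my repo; buildChain is a hypothetical stand-in for the ConversationChain setup from my first post):

// app/api/chat/route.ts (NextJS App Router route handler)
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  const { message } = await req.json();
  // buildChain: hypothetical helper wiring up memory/prompt/llm as above
  const chain = buildChain();
  const data = await chain.call({ message });
  return NextResponse.json({ success: true, data, agent: "ai" });
}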
I'm getting the same error concerning axios-fetch-adapter ever since updating langchain to 0.0.53. I don't have this issue in my other project using 0.0.51. Both are using Next.js (both using api routes, not route handlers).
@homanp Do you mind giving some snippets of how you got that to work? I tried a route handler and I was getting a ton of module errors when calling the chain.
I'm getting consistent network errors when using a regular API route. Mostly on embeddings when I am testing locally, but on every request when deployed to Fly.io.
We actually test with NextJS before each release inside this repo, and I can confirm OpenAI requests work in frontend components, API routes in serverless mode, and API routes in edge mode: https://github.com/hwchase17/langchainjs/tree/main/test-exports-vercel/src/pages/api
I can also link you to this project, which uses both of those (and the author tells me it works both locally and when deployed to Vercel and fly.io): https://github.com/PineappleExpress808/lex-gpt/tree/main/pages/api
Do you want to compare your project to these and let me know what's different about yours? As it stands I cannot reproduce this issue, and therefore can't help.
Thanks for this! I think we can close this issue!
@nfcampos Thank you for the link. I've isolated the error to loading pinecone, which fails with
error - node:fs/promises
Module build failed: UnhandledSchemeError: Reading from "node:fs/promises" is not handled by plugins (Unhandled scheme).
Webpack supports "data:" and "file:" URIs by default.
You may need an additional plugin to handle "node:" URIs.
Import trace for requested module:
node:fs/promises
./node_modules/.pnpm/[email protected]_@[email protected]_@[email protected][email protected]_jkokrjsyutt3gmckgig32mu37u/node_modules/langchain/dist/vectorstores/hnswlib.js
./node_modules/.pnpm/[email protected]_@[email protected]_@[email protected][email protected]_jkokrjsyutt3gmckgig32mu37u/node_modules/langchain/dist/vectorstores/index.js
./node_modules/.pnpm/[email protected]_@[email protected]_@[email protected][email protected]_jkokrjsyutt3gmckgig32mu37u/node_modules/langchain/vectorstores.js
To reproduce just load a PineconeStore in hello-edge.ts.
That looks like you are importing from langchain/vectorstores, which was deprecated; see https://js.langchain.com/docs/getting-started/install. If instead you import it from langchain/vectorstores/pinecone it might work. Give it a try and let me know.
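Concretely:

// deprecated barrel import; pulls in hnswlib.js, which reads node:fs/promises:
// import { PineconeStore } from "langchain/vectorstores";

// scoped entrypoint that avoids the Node-only modules:
import { PineconeStore } from "langchain/vectorstores/pinecone";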
@nfcampos Thank you for the tip, that fixes the langchain related import errors.
Unrelated, but I'll leave a comment in case someone else has the same issue. It looks like I was running into an edge case that comes up when using top-level await in an edge function. The following pattern for importing pinecone fails with `The Edge Function "pages/api/vectordbqa" must export a default function`. Simply moving the init logic inside the handler fixes the issue.
async function initializePinecone() {
// connect to pinecone
const client = new PineconeClient();
await client.init({
apiKey: process.env.PINECONE_API_KEY || "",
environment: process.env.PINECONE_ENVIRONMENT || "",
});
client.projectName = process.env.PINECONE_PROJECT_NAME || "";
return client
}
export const pinecone = await initializePinecone()
@rawsh I'm facing the same issue; I am using Vercel edge functions.
error [ErrorWithoutStackTrace: PineconeClient: Error getting project name: Error: A Node.js API is used (process.nextTick) which is not supported in the Edge Runtime.
error - The Edge Function "pages/api/chat" must export a `default` function
null
error - utils/pinecone-client.ts (19:10) @ initPinecone
error - unhandledRejection: Failed to initialize Pinecone Client
17 | } catch (error) {
18 | console.log('error', error);
> 19 | throw new Error('Failed to initialize Pinecone Client');
| ^
20 | }
21 | }
22 |
My pinecone-client.ts looks similar to yours:
import { PineconeClient } from '@pinecone-database/pinecone';
if (!process.env.PINECONE_ENVIRONMENT || !process.env.PINECONE_API_KEY) {
throw new Error('Pinecone environment or api key vars missing');
}
export async function initPinecone() {
try {
const pinecone = new PineconeClient();
await pinecone.init({
environment: process.env.PINECONE_ENVIRONMENT ?? '',
apiKey: process.env.PINECONE_API_KEY ?? '',
});
return pinecone;
} catch (error) {
console.log('error', error);
throw new Error('Failed to initialize Pinecone Client');
}
}
export const pinecone = await initPinecone();
The error is due to PineconeClient; moving it inside the handler function doesn't help either.
Can you share the code that worked for you?
Importing fetch at the start of axios-fetch-adapter.js fixed this for me:
import fetch from "node-fetch";
@aseem2625 Here is a snippet
// pages/api/vectordbqa.ts
export const config = {
runtime: "edge",
};
export default async function handler(req: NextRequest) {
// …
// connect to pinecone
const client = new PineconeClient();
await client.init({
apiKey: env.PINECONE_API_KEY,
environment: env.PINECONE_ENVIRONMENT,
});
client.projectName = env.PINECONE_PROJECT_NAME;
const vectorStore: VectorStore = await PineconeStore.fromExistingIndex(new OpenAIEmbeddings(), {
pineconeIndex: client.Index(env.PINECONE_INDEX)
})
// …
}
I’m doing the streaming similarly to https://github.com/hwchase17/langchainjs/blob/main/test-exports-vercel/src/pages/api/hello-edge.ts
I've found that this has to do with the axios-fetch-adapter:
async function getResponse(request, config) {
  let stageOne;
  try {
    stageOne = await fetch(request);
  } catch (e) {
    if (e && e.name === "AbortError") {
      return createError("Request aborted", config, "ECONNABORTED", request);
    }
    if (e && e.name === "TimeoutError") {
      return createError("Request timeout", config, "ECONNABORTED", request);
    }
    return createError("Network Error", config, "ERR_NETWORK", request);
  }
  // ...
}
Does anyone know why the request is being adapted for fetch?
In any case, from what I can see, fetch() cannot handle the request value, as it throws:
Adapter error TypeError: Only absolute URLs are supported
Anyone know how to fix this?
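For what it's worth, node-fetch throws that exact TypeError for relative URLs, so this reproduces it in isolation (a sketch assuming node-fetch; it suggests the URL handed to the adapter, e.g. built from basePath, isn't absolute):

import fetch from "node-fetch";

// relative URL: node-fetch rejects it with
// "TypeError: Only absolute URLs are supported"
await fetch("/v1/chat/completions");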
@mweichert Can you share the full stack trace of the error you're getting? Where are you getting the error? Which environment are you running this on? Which version of langchain?
I'm getting the same ERR_NETWORK on almost every request. I'm using langchainjs 0.0.70 and calling langchain in NextJS 13 API routes, Node version >= 18. I'm not using a proxy. The error happens both locally and when deployed to Vercel. Most of the time the error happens when trying to get an embedding for a short sentence.
const embeddings = new OpenAIEmbeddings({ openAIApiKey: openAIApiKey })
const embedding = await embeddings.embedQuery('Some short question')
Here is the error I'm getting:
Error: Network Error
at createError (webpack-internal:///(sc_server)/./node_modules/langchain/dist/util/axios-fetch-adapter.js:314:19)
at getResponse (webpack-internal:///(sc_server)/./node_modules/langchain/dist/util/axios-fetch-adapter.js:180:16)
at async fetchAdapter (webpack-internal:///(sc_server)/./node_modules/langchain/dist/util/axios-fetch-adapter.js:156:18)
at async RetryOperation.eval [as _fn] (webpack-internal:///(sc_server)/./node_modules/p-retry/index.js:40:25) {
config: {
transitional: {
silentJSONParsing: true,
forcedJSONParsing: true,
clarifyTimeoutError: false
},
adapter: [AsyncFunction: fetchAdapter],
transformRequest: [ [Function: transformRequest] ],
transformResponse: [ [Function: transformResponse] ],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: {
Accept: 'application/json, text/plain, */*',
'Content-Type': 'application/json',
'User-Agent': 'OpenAI/NodeJS/3.2.1',
Authorization: 'Bearer my_token'
},
method: 'post',
data: '{"model":"text-embedding-ada-002","input":"Some short question"}',
url: 'https://api.openai.com/v1/embeddings'
},
code: 'ERR_NETWORK',
request: Request {
[Symbol(realm)]: { settingsObject: [Object] },
[Symbol(state)]: {
method: 'POST',
localURLsOnly: false,
unsafeRequest: false,
body: [Object],
client: [Object],
reservedClient: null,
replacesClientId: '',
window: 'client',
keepalive: false,
serviceWorkers: 'all',
initiator: '',
destination: '',
priority: null,
origin: 'client',
policyContainer: 'client',
referrer: 'client',
referrerPolicy: '',
mode: 'cors',
useCORSPreflightFlag: false,
credentials: 'same-origin',
useCredentials: false,
cache: 'default',
redirect: 'follow',
integrity: '',
cryptoGraphicsNonceMetadata: '',
parserMetadata: '',
reloadNavigation: false,
historyNavigation: false,
userActivation: false,
taintedOrigin: false,
redirectCount: 0,
responseTainting: 'basic',
preventNoCacheCacheControlHeaderModification: false,
done: false,
timingAllowFailed: false,
headersList: [HeadersList],
urlList: [Array],
url: [URL]
},
[Symbol(signal)]: AbortSignal { aborted: false },
[Symbol(headers)]: HeadersList {
cookies: null,
[Symbol(headers map)]: [Map],
[Symbol(headers map sorted)]: null
}
},
response: undefined,
isAxiosError: true,
toJSON: [Function: toJSON],
attemptNumber: 7,
retriesLeft: 0
}
PS: the OpenAI key is present; I replaced it with "my_token" in the text above.
@avalanche-tm As written in a previous comment: https://github.com/hwchase17/langchainjs/issues/739#issuecomment-1504658534