node-chatgpt-api
UND_ERR_SOCKET (There was an error communicating with ChatGPT)
Using a reverse proxy, some questions throw an error:

```
TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:14152:11)
    at async ChatGPTClient.getCompletion (file:///C:/Users/JerryWn/AppData/Local/Programs/node-v18.14.0-win-x64/node_modules/@waylaidwanderer/chatgpt-api/src/ChatGPTClient.js:138:26)
    at async ChatGPTClient.sendMessage (file:///C:/Users/JerryWn/AppData/Local/Programs/node-v18.14.0-win-x64/node_modules/@waylaidwanderer/chatgpt-api/src/ChatGPTClient.js:196:28)
    at async Object.<anonymous> (file:///C:/Users/JerryWn/AppData/Local/Programs/node-v18.14.0-win-x64/node_modules/@waylaidwanderer/chatgpt-api/bin/server.js:100:18) {
  cause: SocketError: other side closed
      at TLSSocket.onSocketEnd (node:internal/deps/undici/undici:9227:26)
      at TLSSocket.emit (node:events:525:35)
      at endReadableNT (node:internal/streams/readable:1359:12)
      at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
    code: 'UND_ERR_SOCKET',
    socket: {
      localAddress: '198.18.0.1',
      localPort: 58538,
      remoteAddress: '198.18.0.10',
      remotePort: 443,
      remoteFamily: 'IPv4',
      timeout: undefined,
      bytesWritten: 1764,
      bytesRead: 0
    }
  }
}
```
I can't really do anything about that, but if it's reproducible, give me some examples and I'll let @PawanOsmon know.
I need to run some more tests.

I've confirmed that this message always causes the error: "Using python, how to create a telegram bot which have an echo command?" Here are the details.
`settings.js`:

```js
module.exports = {
    openaiApiKey: process.env.OPENAI_API_KEY || 'my_key',
    chatGptClient: {
        reverseProxyUrl: 'https://chatgpt.pawan.krd/api/completions',
        modelOptions: {
            model: 'text-davinci-002-render',
        },
        debug: false,
    },
    cacheOptions: {},
    bingAiClient: {
        userToken: '',
        debug: false,
    },
    apiOptions: {
        port: process.env.API_PORT || 3000,
        host: process.env.API_HOST || 'localhost',
        debug: false,
    },
    storageFilePath: process.env.STORAGE_FILE_PATH || './cache.json',
};
```
`chat.py`:

```python
import requests

URL = "http://localhost:3000/conversation"

response = requests.post(url=URL, json={
    "message": "hi"
})
print(response.text)

response = requests.post(url=URL, json={
    "message": "Using python, how to create a telegram bot which have an echo command?"
})
print(response.text)
```
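(As an aside, each `requests.post` in that repro starts a brand-new conversation. For a multi-turn repro, the `/conversation` endpoint also accepts `conversationId` and `parentMessageId` taken from the previous response; the field names here are an assumption based on the response JSON in the console output, so treat this as a sketch, not a verified client.)

```python
def follow_up_payload(message, prev=None):
    """Build a /conversation payload; if `prev` is the previous
    response dict, thread its conversationId and messageId so the
    server continues the same conversation (assumed field names)."""
    payload = {"message": message}
    if prev is not None:
        payload["conversationId"] = prev["conversationId"]
        payload["parentMessageId"] = prev["messageId"]
    return payload

if __name__ == "__main__":
    import requests  # same dependency as chat.py above
    URL = "http://localhost:3000/conversation"
    first = requests.post(url=URL, json=follow_up_payload("hi")).json()
    follow = requests.post(
        url=URL,
        json=follow_up_payload("And how do I add an echo command?", prev=first),
    ).json()
    print(follow.get("response"))
```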
Console output:

```
{"response":"Hello! How can I help you today?","conversationId":"26a37b95-013d-4c5d-95bf-db02a559568f","messageId":"bb3716a1-8aea-4449-a19e-17e9b7e0e050"}
{"error":"There was an error communicating with ChatGPT."}
```
chatgpt-api output:

```
TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:14152:11)
    at async ChatGPTClient.getCompletion (file:///C:/Users/JerryWn/AppData/Local/Programs/node-v18.14.0-win-x64/node_modules/@waylaidwanderer/chatgpt-api/src/ChatGPTClient.js:138:26)
    at async ChatGPTClient.sendMessage (file:///C:/Users/JerryWn/AppData/Local/Programs/node-v18.14.0-win-x64/node_modules/@waylaidwanderer/chatgpt-api/src/ChatGPTClient.js:196:28)
    at async Object.<anonymous> (file:///C:/Users/JerryWn/AppData/Local/Programs/node-v18.14.0-win-x64/node_modules/@waylaidwanderer/chatgpt-api/bin/server.js:100:18) {
  cause: SocketError: other side closed
      at TLSSocket.onSocketEnd (node:internal/deps/undici/undici:9227:26)
      at TLSSocket.emit (node:events:525:35)
      at endReadableNT (node:internal/streams/readable:1359:12)
      at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
    code: 'UND_ERR_SOCKET',
    socket: {
      localAddress: '198.18.0.1',
      localPort: 52584,
      remoteAddress: '198.18.0.10',
      remotePort: 443,
      remoteFamily: 'IPv4',
      timeout: undefined,
      bytesWritten: 3530,
      bytesRead: 1231
    }
  }
}
```
I've also noticed that the interval between the two responses' output is always about 30 seconds.
@JerryWn12 I made my own reverse proxy server you can test with and see if you're having the same issue. Let me know how it goes: https://restless-dew-9556.fly.dev/completions
This is a temporary URL for testing just FYI.
@waylaidwanderer It's the same as Pawan's server 🥲 it returns the error.
Anyway, thanks for creating the reverse proxy server ❤️
@JerryWn12 I just tried it, and it works fine for me. Can you try updating to the latest version of my library?
Also, what model did you set? Is it `text-davinci-002-render` or something else?
```js
export default {
    openaiApiKey: 'access token',
    chatGptClient: {
        reverseProxyUrl: 'https://restless-dew-9556.fly.dev/completions',
        // rest of the options
    },
    // rest of the settings
};
```
```shell
curl --request POST \
  --url http://localhost:3000/conversation \
  --header 'Content-Type: application/json' \
  --data '{
    "message": "Using python, how to create a telegram bot which have an echo command?"
  }'
```
```json
{
  "response": "Here is a basic example of how to create a Telegram bot using the python-telegram-bot library:\n\n```python\nimport telegram\nfrom telegram.ext import Updater, CommandHandler\n\ndef start(bot, update):\n bot.send_message(chat_id=update.message.chat_id, text=\"I'm a bot, please talk to me!\")\n\ndef echo(bot, update):\n bot.send_message(chat_id=update.message.chat_id, text=update.message.text)\n\ntoken = \"your_telegram_bot_token_here\"\n\nupdater = Updater(token)\ndispatcher = updater.dispatcher\n\necho_handler = CommandHandler(\"echo\", echo)\ndispatcher.add_handler(echo_handler)\n\nstart_handler = CommandHandler(\"start\", start)\ndispatcher.add_handler(start_handler)\n\nupdater.start_polling()\n```\n\nIn this example, the `echo` function simply sends back the text of the incoming message. The `start` function sends a greeting message. You can customize these functions to add more functionality to your bot.\n\nTo run this code, you need to install the python-telegram-bot library using pip:\n\n```\npip install python-telegram-bot\n```\n\nReplace `your_telegram_bot_token_here` with the token you got from BotFather when you created your bot.",
  "conversationId": "516b13b8-1cbe-44c8-93e5-85da65fc362e",
  "messageId": "4ec34fc7-dfc4-40a6-836d-8d1bb754b0d5"
}
```
🥲 Still the same. If it works fine for you, I need to troubleshoot my own network problem. Thank you!
@JerryWn12 I was able to reproduce the issue with the free ChatGPT model. It happens when the response takes over 60s to complete.
Using chatgpt-cli it works fine too, so I think there should be a way to fix it. (*Using model `text-davinci-002-render`.)
Yeah, I'm investigating and seeing if I can fix it on my end.
@JerryWn12 Unfortunately, per Fly.io, this is a limitation of their service: if a request takes longer than 60s, it gets terminated. As a workaround, you can set `"stream": true` and consume the response that way instead.
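For anyone hitting this: with `"stream": true` the server streams the reply incrementally (as server-sent events) instead of one JSON body, so no single response sits open for 60s. A rough sketch of consuming such a stream in Python; the `data:` line framing is an assumption based on standard SSE, so adjust it to whatever the server actually emits:

```python
def parse_sse_data(line):
    """Extract the payload from one SSE line like 'data: token'.
    Returns None for blank lines and non-data fields."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    return line[len("data:"):].strip()

if __name__ == "__main__":
    import requests  # same dependency as chat.py above
    resp = requests.post(
        "http://localhost:3000/conversation",
        json={"message": "hi", "stream": True},
        stream=True,  # don't buffer the whole body in requests
    )
    for raw in resp.iter_lines(decode_unicode=True):
        token = parse_sse_data(raw or "")
        if token:
            print(token, end="", flush=True)
```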
I imagine this is a similar issue with Pawan's API.
ok, thanks for your help!