
404 error

Open · sdugoten opened this issue on Mar 20 '23 · 9 comments

```
wait  - compiling /api/chat (client and server)...
event - compiled successfully in 45 ms (75 modules)
us-central1-gcp error [Error: Request failed with status code 404] {
  config: {
    transitional: { silentJSONParsing: true, forcedJSONParsing: true, clarifyTimeoutError: false },
    adapter: [Function: httpAdapter],
    timeout: 0,
    headers: {
      Accept: 'application/json, text/plain, */*',
      'Content-Type': 'application/json',
      'User-Agent': 'OpenAI/NodeJS/3.2.1',
      Authorization: 'Bearer sk-dTyvu35Xwgzxxxxxxxxxxxxxxxxxxxxxxxxxxx',
      'Content-Length': 2330
    },
    method: 'post',
    responseType: 'stream',
    data: '{"model":"gpt-3.5","temperature":0,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"n":1,"stream":true,"messages":[{"role":"user","content":"You are an AI assistant providing helpful advice. [retrieved PDF context and question omitted] Answer in Markdown:"}]}',
    url: 'https://api.openai.com/v1/chat/completions'
  },
  request: <ref *1> ClientRequest { /* full request/socket dump omitted */ },
  response: {
    status: 404,
    statusText: 'Not Found',
    headers: {
      date: 'Mon, 20 Mar 2023 13:02:59 GMT',
      'content-type': 'application/json; charset=utf-8',
      'content-length': '167',
      connection: 'close',
      vary: 'Origin',
      'x-request-id': 'a0e919fb2bxxxxxxxxxxxxxxxx',
      'strict-transport-security': 'max-age=15724800; includeSubDomains'
    }
    /* config and request are repeated inside the response object in the
       original dump; omitted here */
  },
  isAxiosError: true,
  toJSON: [Function: toJSON]
}
```

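Note the `"model":"gpt-3.5"` in the request body above; that model id is what the API is rejecting. A minimal repro sketch (not from the repo; assumes Node 18+ for global `fetch`, run as an ES module, and a valid `OPENAI_API_KEY` in the environment):

```ts
// Minimal 404 repro: POST to the chat completions endpoint with an
// invalid model id. The API answers 404 ("model does not exist") even
// though the API key itself is valid.
const res = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'gpt-3.5', // invalid id; 'gpt-3.5-turbo' is the real one
    messages: [{ role: 'user', content: 'ping' }],
  }),
});
console.log(res.status, await res.text()); // 404 plus the error body
```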

sdugoten · Mar 20 '23

Can you provide more information on the source code behind this error? It appears to be an issue with your API keys.

mayooear · Mar 20 '23

> Can you provide more information on the source code behind this error? It appears to be an issue with your API keys.

Where can I give you more information so that you can help me troubleshoot? I just followed the instructions on your main page https://github.com/mayooear/gpt4-pdf-chatbot-langchain and ran the program.

I basically generated the OpenAI key on the user settings page and pasted it into .env.

The entry looks something like this:

```
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxohrzgxxxxx6BM
```
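If the key is set, a quick way to rule out an env-loading problem is a guard like this; a minimal sketch, not code from the repo:

```ts
// Illustrative guard (not in the repo): fail fast if the key never made
// it from .env into the server process.
if (!process.env.OPENAI_API_KEY?.startsWith('sk-')) {
  throw new Error('OPENAI_API_KEY is missing or malformed; check your .env');
}
```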

sdugoten · Mar 20 '23

Hey. Thanks for the repo and the work you put in! I have the same error.

```ts
const docChain = loadQAChain(
  new OpenAIChat({
    temperature: 0,
    modelName: 'text-davinci-003', // <--
    streaming: Boolean(onTokenStream),
    callbackManager: onTokenStream
      ? CallbackManager.fromHandlers({
          async handleLLMNewToken(token) {
            onTokenStream(token);
            console.log(token);
          },
        })
      : undefined,
  }),
  { prompt: QA_PROMPT },
);
```

This is the only part I changed in the code.


This is the output I get from running `curl https://api.openai.com/v1/models -H "Authorization: Bearer XXX"`:

```jsonc
// ...
    {
      "id": "text-davinci-003",
      "object": "model",
      "created": 1669599635,
      "owned_by": "openai-internal",
      "permission": [
        {
          "id": "modelperm",
          "object": "model_permission",
          "created": 1678585020,
          "allow_create_engine": false,
          "allow_sampling": true,
          "allow_logprobs": true,
          "allow_search_indices": false,
          "allow_view": true,
          "allow_fine_tuning": false,
          "organization": "*",
          "group": null,
          "is_blocking": false
        }
      ],
      "root": "text-davinci-003",
      "parent": null
    },
// ...
```
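The same check can be scripted; a minimal sketch, assuming Node 18+ (global `fetch`, run as an ES module) and the key in `OPENAI_API_KEY`:

```ts
// Sketch: print the model ids this API key can access.
const res = await fetch('https://api.openai.com/v1/models', {
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
});
const { data } = (await res.json()) as { data: { id: string }[] };
console.log(data.map((m) => m.id).sort().join('\n'));
```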

These are my Pinecone settings. I am not familiar with Pinecone, so I copied (or thought I copied) your values: [screenshot of Pinecone index settings]

ankri · Mar 20 '23

You can see the error in action here:

https://www.youtube.com/watch?v=s82PalNVJiM

sdugoten · Mar 20 '23

I tried some googling and changed the model to:

`modelName: 'gpt-3.5-turbo'`

and now it seems to work. @sdugoten - can you try to change the modelName and see if it works for you?

See the docs for a list of supported modelNames.
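Concretely, the chain construction from the snippet above becomes (a trimmed sketch; the streaming callback setup is unchanged and omitted here for brevity):

```ts
const docChain = loadQAChain(
  new OpenAIChat({
    temperature: 0,
    modelName: 'gpt-3.5-turbo', // a chat-completions model, so no 404
    streaming: Boolean(onTokenStream),
  }),
  { prompt: QA_PROMPT },
);
```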

ankri · Mar 20 '23

> I tried some googling and changed the model to:
>
> `modelName: 'gpt-3.5-turbo'`
>
> and now it seems to work. @sdugoten - can you try to change the modelName and see if it works for you?
>
> See the docs for a list of supported modelNames.

OK, it does seem to be the model name: you have to use "gpt-3.5-turbo" instead of "gpt-3.5". I am going to try my own PDF and see if that works.

sdugoten · Mar 20 '23

@ankri great find here! Appreciate it. Are you running into errors with `text-davinci-003`?

mayooear · Mar 20 '23

For future readers: yes. You are running into problems with `text-davinci-003` because the OpenAI wrapper is hitting the `https://api.openai.com/v1/chat/completions` endpoint.

These are the supported models for this endpoint: `gpt-4`, `gpt-4-0314`, `gpt-4-32k`, `gpt-4-32k-0314`, `gpt-3.5-turbo`, `gpt-3.5-turbo-0301`

Source
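If you specifically want `text-davinci-003`, the split in the LangChain JS wrappers of that era mirrors the two endpoints; a sketch, assuming the `langchain` 0.0.x exports the repo used at the time:

```ts
import { OpenAI, OpenAIChat } from 'langchain/llms';

// Completion-style model: routed to /v1/completions.
const davinci = new OpenAI({ modelName: 'text-davinci-003', temperature: 0 });

// Chat-style model: must be routed to /v1/chat/completions.
const turbo = new OpenAIChat({ modelName: 'gpt-3.5-turbo', temperature: 0 });
```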

ankri · Mar 20 '23

> For future readers: yes. You are running into problems with `text-davinci-003` because the OpenAI wrapper is hitting the `https://api.openai.com/v1/chat/completions` endpoint.
>
> These are the supported models for this endpoint: `gpt-4`, `gpt-4-0314`, `gpt-4-32k`, `gpt-4-32k-0314`, `gpt-3.5-turbo`, `gpt-3.5-turbo-0301`
>
> Source

Good catch!

mayooear · Mar 20 '23