
Long files cause thread panic

logkirk opened this issue 2 years ago · 0 comments

I understand this is a limitation of OpenAI's API, but I wonder if it's possible to work around it (e.g., by sending an abridged copy of the diff). If not, I think it would be better to fail gracefully, and perhaps to note the limitation in the README.
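For what it's worth, here is a minimal sketch of the kind of workaround I mean, assuming the tool builds its prompt from the staged diff. The function name (`abridge_diff`), the ~4-characters-per-token heuristic, and the constants are hypothetical, not auto-commit's actual code; the 8001/2000 token numbers come from the error message below.

```rust
// Hypothetical sketch, not auto-commit's actual implementation: cap the diff
// so the estimated prompt tokens plus the reserved completion tokens stay
// within the model's context window, instead of panicking on long files.

const MAX_CONTEXT_TOKENS: usize = 8001; // code-davinci-002 limit, per the error below
const COMPLETION_TOKENS: usize = 2000;  // tokens reserved for the completion
const CHARS_PER_TOKEN: usize = 4;       // rough heuristic, not exact BPE counting

/// Truncate `diff` to an estimated token budget. A real fix would use the
/// model's tokenizer rather than a character heuristic.
fn abridge_diff(diff: &str) -> &str {
    let budget_tokens = MAX_CONTEXT_TOKENS.saturating_sub(COMPLETION_TOKENS);
    let budget_chars = budget_tokens * CHARS_PER_TOKEN;
    if diff.len() <= budget_chars {
        return diff;
    }
    // Back off to a valid UTF-8 char boundary at or below the budget.
    let mut end = budget_chars;
    while !diff.is_char_boundary(end) {
        end -= 1;
    }
    &diff[..end]
}

fn main() {
    let long_diff = "x".repeat(100_000);
    let abridged = abridge_diff(&long_diff);
    println!("abridged diff to {} chars", abridged.len());
}
```

Even with truncation, the request could still fail for other reasons, so matching on the API error and printing a message (rather than `.expect(...)`, which produces the panic below) would cover the "fail gracefully" part.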

Thanks for your work!

(venv) C:\[redacted] [feature/[redacted] ↑1 +0 ~3 -0 | +0 ~1 -0 !]> auto-commit --verbose
Loading Data...
Analyzing Codebase...
[2023-02-22T00:06:16Z DEBUG openai_api] Request: Request { method: Post, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("api.openai.com")), port: None, path: "/v1/engines/code-davinci-002/completions", query: None, fragment: None }, headers: {"content-type": "application/json"}, version: None, body: Body { reader: "<hidden>", length: Some(50550), bytes_read: 0 }, local_addr: None, peer_addr: None, ext: Extensions, trailers_sender: Some(Sender { .. }), trailers_receiver: Some(Receiver { .. }), has_trailers: false }
[2023-02-22T00:06:16Z DEBUG hyper::client::connect::dns] resolving host="api.openai.com"
[2023-02-22T00:06:16Z DEBUG hyper::client::connect::http] connecting to 52.152.96.252:443
[2023-02-22T00:06:17Z DEBUG hyper::client::connect::http] connected to 52.152.96.252:443
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::io] flushed 210 bytes
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::io] flushed 16384 bytes
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::io] flushed 16384 bytes
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::io] flushed 16384 bytes
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::io] flushed 1398 bytes
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::io] read 722 bytes
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::io] parsed 11 headers
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::conn] incoming body is content-length (297 bytes)
[2023-02-22T00:06:17Z DEBUG hyper::proto::h1::conn] incoming body completed
[2023-02-22T00:06:17Z DEBUG openai_api] Response: Response { response: Response { status: BadRequest, headers: {"openai-model": "code-davinci-002", "openai-organization": "user-[redacted]", "access-control-allow-origin": "*", "content-length": "297", "strict-transport-security": "max-age=15724800; includeSubDomains", "x-request-id": "[redacted]", "date": "Wed, 22 Feb 2023 00:06:17 GMT", "content-type": "application/json", "connection": "keep-alive", "openai-processing-ms": "262", "openai-version": "2020-10-01"}, version: Some(Http1_1), has_trailers: false, trailers_sender: Some(Sender { .. }), trailers_receiver: Some(Receiver { .. }), upgrade_sender: Some(Sender { .. }), upgrade_receiver: Some(Receiver { .. }), has_upgrade: false, body: Body { reader: "<hidden>", length: Some(297), bytes_read: 0 }, ext: Extensions, local_addr: None, peer_addr: None } }
[2023-02-22T00:06:17Z DEBUG hyper::client::pool] pooling idle connection for ("https", api.openai.com)
thread 'main' panicked at 'Couldn't complete prompt.: Api(ErrorMessage { message: "This model's maximum context length is 8001 tokens, however you requested 14731 tokens (12731 in your prompt; 2000 for the completion). Please reduce your prompt; or completion length.", error_type: "invalid_request_error" })', src\main.rs:137:10
stack backtrace:
   0:     0x7ff75a3c197f - <unknown>
   1:     0x7ff75a3e219a - <unknown>
   2:     0x7ff75a3b99e9 - <unknown>
   3:     0x7ff75a3c3f5b - <unknown>
   4:     0x7ff75a3c3bd5 - <unknown>
   5:     0x7ff75a3c4509 - <unknown>
   6:     0x7ff75a3c440d - <unknown>
   7:     0x7ff75a3c25b7 - <unknown>
   8:     0x7ff75a3c40e9 - <unknown>
   9:     0x7ff75a3f4b45 - <unknown>
  10:     0x7ff75a3f4cc3 - <unknown>
  11:     0x7ff75a131bd9 - <unknown>
  12:     0x7ff75a11a0bb - <unknown>
  13:     0x7ff75a127e7d - <unknown>
  14:     0x7ff75a114a39 - <unknown>
  15:     0x7ff75a11d537 - <unknown>
  16:     0x7ff75a133fb6 - <unknown>
  17:     0x7ff75a1241ec - <unknown>
  18:     0x7ff75a3b39eb - <unknown>
  19:     0x7ff75a11d6d7 - <unknown>
  20:     0x7ff75a3e995c - <unknown>
  21:     0x7fff94527614 - BaseThreadInitThunk
  22:     0x7fff955a26a1 - RtlUserThreadStart

logkirk · Feb 22 '23 00:02