
Program crashes when it reaches the token limit.

[Open] 00-Python opened this issue 1 year ago · 4 comments

Describe the bug Every time I reach the token limit, the program crashes. Is there any way around this?

To Reproduce Steps to reproduce the behavior:

  1. pentestgpt --reasoning_model=gpt-3.5-turbo --useAPI
  2. Talk about stuff
  3. See error

Version I'm using the gpt-3.5-turbo API, on pentestgpt-0.8.0.

Full Error Message

This model's maximum context length is 4097 tokens. However, your messages resulted in 4577 tokens. Please reduce the length of the messages.
Exception details are below. You may submit an issue on github and paste the error trace
Traceback (most recent call last):
  File "/home/zerozero/.local/lib/python3.11/site-packages/pentestgpt/utils/pentest_gpt.py", line 648, in main
    result = self.input_handler()
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/pentestgpt/utils/pentest_gpt.py", line 517, in input_handler
    response = self.reasoning_handler(self.prompts.discussion + user_input)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/pentestgpt/utils/pentest_gpt.py", line 228, in reasoning_handler
    response = self.chatGPT4Agent.send_message(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/pentestgpt/utils/chatgpt_api.py", line 186, in send_message
    response = self.chatgpt_completion(chat_message)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/pentestgpt/utils/chatgpt_api.py", line 86, in chatgpt_completion
    response = openai.ChatCompletion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/home/zerozero/.local/lib/python3.11/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4577 tokens. Please reduce the length of the messages.

00-Python · Jun 15 '23 00:06
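
For reference, the request fails because the accumulated conversation exceeds gpt-3.5-turbo's 4097-token context window. A minimal sketch of how the prompt size could be pre-checked before sending, assuming the tiktoken library (PentestGPT may count tokens differently):

```python
# Sketch: pre-checking prompt size with tiktoken (an assumption; not
# necessarily how PentestGPT counts tokens internally).
import tiktoken

MAX_CONTEXT = 4097  # gpt-3.5-turbo context window at the time of this issue

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def count_tokens(text: str) -> int:
    # Number of tokens the model would see for this text.
    return len(enc.encode(text))

prompt = "...accumulated discussion prompt plus user input..."
if count_tokens(prompt) > MAX_CONTEXT:
    # This is the condition that produces the InvalidRequestError above.
    print("Prompt exceeds the context window; trim history before sending.")
```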

Nice catch! I'll implement a bug fix for this.

GreyDGL · Jun 15 '23 01:06

I'm also having the same issue.

ChrisNetEngineer · Jun 15 '23 08:06

Added some mitigations in the latest commit. I'll try to find a more consistent approach to token compression.

GreyDGL · Jun 18 '23 15:06
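
One standard mitigation for this class of error, sketched below under stated assumptions rather than taken from the actual commit: drop the oldest non-system messages until the prompt fits, keeping some headroom for the model's reply. The `trim_history` helper and the 512-token reserve are illustrative, not PentestGPT's implementation.

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
MAX_CONTEXT = 4097        # model context window (gpt-3.5-turbo at the time)
RESERVED_FOR_REPLY = 512  # headroom for the model's answer (illustrative)

def trim_history(messages: list[dict]) -> list[dict]:
    # Hypothetical helper: drop the oldest non-system messages until the
    # prompt fits under the context window.
    def total_tokens(msgs):
        # Rough estimate: content tokens plus ~4 tokens of per-message overhead.
        return sum(len(enc.encode(m["content"])) + 4 for m in msgs)

    trimmed = list(messages)
    while total_tokens(trimmed) > MAX_CONTEXT - RESERVED_FOR_REPLY:
        # Drop the first message that is not the system prompt.
        for i, m in enumerate(trimmed):
            if m["role"] != "system":
                del trimmed[i]
                break
        else:
            break  # only the system prompt is left; nothing more to drop
    return trimmed
```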

Awesome. I encountered this issue again despite the recent commits. I haven't dug into the source much, but could we wrap the call in a try/except block that returns True/False to indicate whether an adequate response came through? Previously, opening a new session with the previous log file didn't help: a new session was created and the log was appended to the selected file. I could take a look and add a mitigation in a pull request if required. :)

SATUNIX · Sep 16 '23 11:09
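
The try/except approach suggested above could look roughly like this against the pre-1.0 openai client shown in the traceback; `safe_send` is a hypothetical wrapper, not existing PentestGPT code:

```python
import openai

def safe_send(messages: list[dict]) -> tuple[bool, str | None]:
    # Hypothetical wrapper: returns (ok, text) instead of letting the
    # context-length error crash the main loop. Targets openai<1.0,
    # matching the traceback above.
    try:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages,
        )
        return True, response["choices"][0]["message"]["content"]
    except openai.error.InvalidRequestError:
        # Token-limit overflow lands here; the caller can trim and retry.
        return False, None

ok, text = safe_send([{"role": "user", "content": "hello"}])
if not ok:
    # e.g. trim the history (as in the earlier sketch) and retry,
    # instead of crashing the session.
    pass
```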