
COMMAND = list_files - openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens

Open · katmai opened this issue on May 2, 2023 · 40 comments

⚠️ Search for existing issues first ⚠️

  • [X] I have searched the existing issues, and there is no existing issue for my problem

Which Operating System are you using?

Docker

Which version of Auto-GPT are you using?

Master (branch)

GPT-3 or GPT-4?

GPT-3.5

Steps to reproduce 🕹

Listing the auto_gpt_workspace folder errors out. Maybe this is an erroneous bug report, not really sure, but why is it calling OpenAI when it's merely listing the files in the folder?

openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4819 tokens. Please reduce the length of the messages.

Current behavior 😯

Listing the folder contents errors out and kills the program if there are too many files in there.

Expected behavior 🤔

not ... error out :D

Your prompt 📝

# Paste your prompt here

Your Logs 📒

NEXT ACTION:  COMMAND = list_files ARGUMENTS = {'directory': '/home/atlas/autogpt/auto_gpt_workspace/atlas_repo'}
SYSTEM:  Command list_files returned: ['atlas_repo/docker-compose.yml', 'atlas_repo/mkdocs.yml', 'atlas_repo/run.bat', 'atlas_repo/run_continuous.bat', 'atlas_repo/requirements.txt', 'atlas_repo/tests.py', 'atlas_repo/CODE_OF_CONDUCT.md', 'atlas_repo/main.py', 'atlas_repo/plugin.png', 'atlas_repo/codecov.yml', 'atlas_repo/CONTRIBUTING.md', 'atlas_repo/BULLETIN.md', 'atlas_repo/run_continuous.sh', 'atlas_repo/LICENSE', 'atlas_repo/pyproject.toml', 'atlas_repo/azure.yaml.template', 'atlas_repo/README.md', 'atlas_repo/data_ingestion.py', 'atlas_repo/run.sh', 'atlas_repo/Dockerfile', 'atlas_repo/scripts/install_plugin_deps.py', 'atlas_repo/scripts/check_requirements.py', 'atlas_repo/scripts/__init__.py', 'atlas_repo/.git/packed-refs', 'atlas_repo/.git/config', 'atlas_repo/.git/index', 'atlas_repo/.git/description', 'atlas_repo/.git/HEAD', 'atlas_repo/.git/hooks/pre-applypatch.sample', 'atlas_repo/.git/hooks/pre-rebase.sample', 'atlas_repo/.git/hooks/pre-merge-commit.sample', 'atlas_repo/.git/hooks/post-update.sample', 'atlas_repo/.git/hooks/pre-push.sample', 'atlas_repo/.git/hooks/pre-receive.sample', 'atlas_repo/.git/hooks/push-to-checkout.sample', 'atlas_repo/.git/hooks/fsmonitor-watchman.sample', 'atlas_repo/.git/hooks/prepare-commit-msg.sample', 'atlas_repo/.git/hooks/commit-msg.sample', 'atlas_repo/.git/hooks/applypatch-msg.sample', 'atlas_repo/.git/hooks/pre-commit.sample', 'atlas_repo/.git/hooks/update.sample', 'atlas_repo/.git/logs/HEAD', 'atlas_repo/.git/logs/refs/heads/master', 'atlas_repo/.git/logs/refs/remotes/origin/HEAD', 'atlas_repo/.git/info/exclude', 'atlas_repo/.git/refs/heads/master', 'atlas_repo/.git/refs/remotes/origin/HEAD', 'atlas_repo/.git/objects/pack/pack-f6a3dd32fbb51bd88bf8f6872667b2c80c8833ee.pack', 'atlas_repo/.git/objects/pack/pack-f6a3dd32fbb51bd88bf8f6872667b2c80c8833ee.idx', 'atlas_repo/plugins/__PUT_PLUGIN_ZIPS_HERE__', 'atlas_repo/benchmark/benchmark_entrepreneur_gpt_with_difficult_user.py', 'atlas_repo/benchmark/__init__.py', 'atlas_repo/tests/test_agent.py', 'atlas_repo/tests/test_image_gen.py', 'atlas_repo/tests/context.py', 'atlas_repo/tests/test_ai_config.py', 'atlas_repo/tests/test_logs.py', 'atlas_repo/tests/test_config.py', 'atlas_repo/tests/test_commands.py', 'atlas_repo/tests/test_agent_manager.py', 'atlas_repo/tests/test_utils.py', 'atlas_repo/tests/milvus_memory_test.py', 'atlas_repo/tests/test_token_counter.py', 'atlas_repo/tests/utils.py', 'atlas_repo/tests/conftest.py', 'atlas_repo/tests/test_prompt_generator.py', 'atlas_repo/tests/test_workspace.py', 'atlas_repo/tests/test_api_manager.py', 'atlas_repo/tests/__init__.py', 'atlas_repo/tests/integration/agent_factory.py', 'atlas_repo/tests/integration/test_memory_management.py', 'atlas_repo/tests/integration/milvus_memory_tests.py', 'atlas_repo/tests/integration/test_git_commands.py', 'atlas_repo/tests/integration/memory_tests.py', 'atlas_repo/tests/integration/test_execute_code.py', 'atlas_repo/tests/integration/test_setup.py', 'atlas_repo/tests/integration/agent_utils.py', 'atlas_repo/tests/integration/weaviate_memory_tests.py', 'atlas_repo/tests/integration/test_local_cache.py', 'atlas_repo/tests/integration/conftest.py', 'atlas_repo/tests/integration/test_llm_utils.py', 'atlas_repo/tests/integration/__init__.py', 'atlas_repo/tests/integration/cassettes/test_memory_management/test_save_memory_trimmed_from_context_window.yaml', 'atlas_repo/tests/integration/cassettes/test_setup/test_generate_aiconfig_automatic_default.yaml', 
'atlas_repo/tests/integration/cassettes/test_setup/test_generate_aiconfig_automatic_typical.yaml', 'atlas_repo/tests/integration/cassettes/test_setup/test_generate_aiconfig_automatic_fallback.yaml', 'atlas_repo/tests/integration/cassettes/test_llm_utils/test_get_ada_embedding_large_context.yaml', 'atlas_repo/tests/integration/cassettes/test_llm_utils/test_get_ada_embedding.yaml', 'atlas_repo/tests/integration/cassettes/test_local_cache/test_get_relevant.yaml', 'atlas_repo/tests/integration/challenges/utils.py', 'atlas_repo/tests/integration/challenges/conftest.py', 'atlas_repo/tests/integration/challenges/__init__.py', 'atlas_repo/tests/integration/challenges/memory/test_memory_challenge_b.py', 'atlas_repo/tests/integration/challenges/memory/test_memory_challenge_a.py', 'atlas_repo/tests/integration/challenges/memory/__init__.py', 'atlas_repo/tests/integration/challenges/memory/cassettes/test_memory_challenge_a/test_memory_challenge_a.yaml', 'atlas_repo/tests/integration/challenges/memory/cassettes/test_memory_challenge_b/test_memory_challenge_b.yaml', 'atlas_repo/tests/integration/goal_oriented/goal_oriented_tasks.md', 'atlas_repo/tests/integration/goal_oriented/test_write_file.py', 'atlas_repo/tests/integration/goal_oriented/test_browse_website.py', 'atlas_repo/tests/integration/goal_oriented/__init__.py', 'atlas_repo/tests/integration/goal_oriented/cassettes/test_browse_website/test_browse_website.yaml', 'atlas_repo/tests/integration/goal_oriented/cassettes/test_write_file/test_write_file.yaml', 'atlas_repo/tests/unit/test_get_self_feedback.py', 'atlas_repo/tests/unit/test_plugins.py', 'atlas_repo/tests/unit/test_browse_scrape_links.py', 'atlas_repo/tests/unit/test_chat.py', 'atlas_repo/tests/unit/test_browse_scrape_text.py', 'atlas_repo/tests/unit/test_web_selenium.py', 'atlas_repo/tests/unit/test_commands.py', 'atlas_repo/tests/unit/test_file_operations.py', 'atlas_repo/tests/unit/test_spinner.py', 'atlas_repo/tests/unit/test_json_parser.py', 'atlas_repo/tests/unit/test_json_utils_llm.py', 'atlas_repo/tests/unit/test_url_validation.py', 'atlas_repo/tests/unit/_test_json_parser.py', 'atlas_repo/tests/unit/test_llm_utils.py', 'atlas_repo/tests/unit/__init__.py', 'atlas_repo/tests/unit/data/test_plugins/Auto-GPT-Plugin-Test-master.zip', 'atlas_repo/tests/unit/models/test_base_open_api_plugin.py', 'atlas_repo/tests/mocks/mock_commands.py', 'atlas_repo/tests/mocks/__init__.py', 'atlas_repo/tests/vcr/openai_filter.py', 'atlas_repo/tests/vcr/__init__.py', 'atlas_repo/.github/FUNDING.yml', 'atlas_repo/.github/PULL_REQUEST_TEMPLATE.md', 'atlas_repo/.github/workflows/docker-release.yml', 'atlas_repo/.github/workflows/docker-cache-clean.yml', 'atlas_repo/.github/workflows/ci.yml', 'atlas_repo/.github/workflows/sponsors_readme.yml', 'atlas_repo/.github/workflows/docker-ci.yml', 'atlas_repo/.github/workflows/benchmarks.yml', 'atlas_repo/.github/workflows/documentation-release.yml', 'atlas_repo/.github/workflows/pr-label.yml', 'atlas_repo/.github/workflows/scripts/docker-ci-summary.sh', 'atlas_repo/.github/workflows/scripts/docker-release-summary.sh', 'atlas_repo/.github/ISSUE_TEMPLATE/1.bug.yml', 'atlas_repo/.github/ISSUE_TEMPLATE/2.feature.yml', 'atlas_repo/autogpt/app.py', 'atlas_repo/autogpt/configurator.py', 'atlas_repo/autogpt/main.py', 'atlas_repo/autogpt/singleton.py', 'atlas_repo/autogpt/logs.py', 'atlas_repo/autogpt/utils.py', 'atlas_repo/autogpt/cli.py', 'atlas_repo/autogpt/plugins.py', 'atlas_repo/autogpt/setup.py', 'atlas_repo/autogpt/__main__.py', 'atlas_repo/autogpt/__init__.py', 
'atlas_repo/autogpt/spinner.py', 'atlas_repo/autogpt/memory_management/store_memory.py', 'atlas_repo/autogpt/memory_management/summary_memory.py', 'atlas_repo/autogpt/json_utils/llm_response_format_1.json', 'atlas_repo/autogpt/json_utils/json_fix_llm.py', 'atlas_repo/autogpt/json_utils/json_fix_general.py', 'atlas_repo/autogpt/json_utils/__init__.py', 'atlas_repo/autogpt/json_utils/utilities.py', 'atlas_repo/autogpt/processing/text.py', 'atlas_repo/autogpt/processing/html.py', 'atlas_repo/autogpt/processing/__init__.py', 'atlas_repo/autogpt/memory/local.py', 'atlas_repo/autogpt/memory/pinecone.py', 'atlas_repo/autogpt/memory/no_memory.py', 'atlas_repo/autogpt/memory/weaviate.py', 'atlas_repo/autogpt/memory/milvus.py', 'atlas_repo/autogpt/memory/base.py', 'atlas_repo/autogpt/memory/redismem.py', 'atlas_repo/autogpt/memory/__init__.py', 'atlas_repo/autogpt/commands/write_tests.py', 'atlas_repo/autogpt/commands/web_playwright.py', 'atlas_repo/autogpt/commands/improve_code.py', 'atlas_repo/autogpt/commands/google_search.py', 'atlas_repo/autogpt/commands/audio_text.py', 'atlas_repo/autogpt/commands/web_selenium.py', 'atlas_repo/autogpt/commands/image_gen.py', 'atlas_repo/autogpt/commands/web_requests.py', 'atlas_repo/autogpt/commands/command.py', 'atlas_repo/autogpt/commands/times.py', 'atlas_repo/autogpt/commands/file_operations.py', 'atlas_repo/autogpt/commands/git_operations.py', 'atlas_repo/autogpt/commands/twitter.py', 'atlas_repo/autogpt/commands/analyze_code.py', 'atlas_repo/autogpt/commands/execute_code.py', 'atlas_repo/autogpt/commands/__init__.py', 'atlas_repo/autogpt/config/ai_config.py', 'atlas_repo/autogpt/config/config.py', 'atlas_repo/autogpt/config/__init__.py', 'atlas_repo/autogpt/prompts/prompt.py', 'atlas_repo/autogpt/prompts/generator.py', 'atlas_repo/autogpt/prompts/__init__.py', 'atlas_repo/autogpt/url_utils/__init__.py', 'atlas_repo/autogpt/url_utils/validators.py', 'atlas_repo/autogpt/workspace/workspace.py', 'atlas_repo/autogpt/workspace/__init__.py', 'atlas_repo/autogpt/llm/modelsinfo.py', 'atlas_repo/autogpt/llm/api_manager.py', 'atlas_repo/autogpt/llm/chat.py', 'atlas_repo/autogpt/llm/llm_utils.py', 'atlas_repo/autogpt/llm/token_counter.py', 'atlas_repo/autogpt/llm/base.py', 'atlas_repo/autogpt/llm/__init__.py', 'atlas_repo/autogpt/llm/providers/openai.py', 'atlas_repo/autogpt/llm/providers/__init__.py', 'atlas_repo/autogpt/agent/agent_manager.py', 'atlas_repo/autogpt/agent/agent.py', 'atlas_repo/autogpt/agent/__init__.py', 'atlas_repo/autogpt/models/base_open_ai_plugin.py', 'atlas_repo/autogpt/speech/brian.py', 'atlas_repo/autogpt/speech/eleven_labs.py', 'atlas_repo/autogpt/speech/gtts.py', 'atlas_repo/autogpt/speech/say.py', 'atlas_repo/autogpt/speech/base.py', 'atlas_repo/autogpt/speech/macos_tts.py', 'atlas_repo/autogpt/speech/__init__.py', 'atlas_repo/autogpt/js/overlay.js', 'atlas_repo/docs/usage.md', 'atlas_repo/docs/plugins.md', 'atlas_repo/docs/testing.md', 'atlas_repo/docs/index.md', 'atlas_repo/docs/code-of-conduct.md', 'atlas_repo/docs/setup.md', 'atlas_repo/docs/contributing.md', 'atlas_repo/docs/imgs/openai-api-key-billing-paid-account.png', 'atlas_repo/docs/configuration/search.md', 'atlas_repo/docs/configuration/voice.md', 'atlas_repo/docs/configuration/imagegen.md', 'atlas_repo/docs/configuration/memory.md', 'atlas_repo/.devcontainer/docker-compose.yml', 'atlas_repo/.devcontainer/Dockerfile', 'atlas_repo/.devcontainer/devcontainer.json']
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/atlas/autogpt/__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1635, in invoke
    rv = super().invoke(ctx)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/home/atlas/autogpt/cli.py", line 90, in main
    run_auto_gpt(
  File "/home/atlas/autogpt/main.py", line 157, in run_auto_gpt
    agent.start_interaction_loop()
  File "/home/atlas/autogpt/agent/agent.py", line 94, in start_interaction_loop
    assistant_reply = chat_with_ai(
  File "/home/atlas/autogpt/llm/chat.py", line 166, in chat_with_ai
    agent.summary_memory = update_running_summary(
  File "/home/atlas/autogpt/memory_management/summary_memory.py", line 114, in update_running_summary
    current_memory = create_chat_completion(messages, cfg.fast_llm_model)
  File "/home/atlas/autogpt/llm/llm_utils.py", line 166, in create_chat_completion
    response = api_manager.create_chat_completion(
  File "/home/atlas/autogpt/llm/api_manager.py", line 55, in create_chat_completion
    response = openai.ChatCompletion.create(
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4819 tokens. Please reduce the length of the messages.

katmai · May 02 '23 10:05

I also encountered the same problem and couldn't continue the project

Oldcat2333 · May 02 '23 12:05

It's coming from updating the memory summary. That appears to be a global behaviour. You are constrained by a 4096-token context window given the model you are using, likely GPT-3.5; if you used GPT-4, you would not error out here. I can think of adding chunking for a certain class of commands?
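
Something like this is what I mean by chunking, as a rough, untested sketch using tiktoken (the helper name here is made up):

import tiktoken

def chunk_by_tokens(text: str, max_tokens: int, model: str = "gpt-3.5-turbo"):
    # split an oversized command result into pieces that each fit the token budget
    enc = tiktoken.encoding_for_model(model)
    tokens = enc.encode(text)
    for i in range(0, len(tokens), max_tokens):
        yield enc.decode(tokens[i:i + max_tokens])

Each chunk would be summarized separately, and then the summaries summarized, instead of sending the whole result at once.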

filip-michalsky · May 02 '23 12:05

It's coming from updating the memory summary. That appears to be a global behaviour. You are constrained by a 4096-token context window given the model you are using, likely GPT-3.5; if you used GPT-4, you would not error out here. I can think of adding chunking for a certain class of commands?

I've already set the token limit to 4000 since I am on GPT-3.5, but it's not working, so I don't know.

### LLM MODEL SETTINGS
## FAST_TOKEN_LIMIT - Fast token limit for OpenAI (Default: 4000)
## SMART_TOKEN_LIMIT - Smart token limit for OpenAI (Default: 8000)
## When using --gpt3only this needs to be set to 4000.
# FAST_TOKEN_LIMIT=4000
SMART_TOKEN_LIMIT=4000
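
If I read autogpt/config/config.py right, these are picked up roughly as below (the attribute names are my assumption from that file), so a '#'-prefixed .env line is ignored and just falls back to the default:

# sketch, assumed from autogpt/config/config.py: FAST_TOKEN_LIMIT above is
# commented out, so it still resolves to the 4000 default
self.fast_token_limit = int(os.getenv("FAST_TOKEN_LIMIT", 4000))
self.smart_token_limit = int(os.getenv("SMART_TOKEN_LIMIT", 8000))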

katmai · May 02 '23 21:05

ghly targeted prospect lists. Bulks. Search or verify contact lists in minutes with bulk tasks. Enrichment." } ]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/app/autogpt/__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1635, in invoke
    rv = super().invoke(ctx)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/app/autogpt/cli.py", line 90, in main
    run_auto_gpt(
  File "/app/autogpt/main.py", line 171, in run_auto_gpt
    agent.start_interaction_loop()
  File "/app/autogpt/agent/agent.py", line 112, in start_interaction_loop
    assistant_reply = chat_with_ai(
  File "/app/autogpt/llm/chat.py", line 165, in chat_with_ai
    agent.summary_memory = update_running_summary(
  File "/app/autogpt/memory_management/summary_memory.py", line 123, in update_running_summary
    current_memory = create_chat_completion(messages, cfg.fast_llm_model)
  File "/app/autogpt/llm/llm_utils.py", line 166, in create_chat_completion
    response = api_manager.create_chat_completion(
  File "/app/autogpt/llm/api_manager.py", line 55, in create_chat_completion
    response = openai.ChatCompletion.create(
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 7009 tokens. Please reduce the length of the messages.
my@my-Mac-mini auto-gpt % 

0xFlo · May 03 '23 23:05

It's coming from updating the memory summary. That appears to be a global behaviour. You are constrained by a 4096-token context window given the model you are using, likely GPT-3.5; if you used GPT-4, you would not error out here. I can think of adding chunking for a certain class of commands?

I've already set the token limit to 4000 since I am on GPT-3.5, but it's not working, so I don't know.

### LLM MODEL SETTINGS
## FAST_TOKEN_LIMIT - Fast token limit for OpenAI (Default: 4000)
## SMART_TOKEN_LIMIT - Smart token limit for OpenAI (Default: 8000)
## When using --gpt3only this needs to be set to 4000.
# FAST_TOKEN_LIMIT=4000
SMART_TOKEN_LIMIT=4000
### LLM MODEL SETTINGS
## FAST_TOKEN_LIMIT - Fast token limit for OpenAI (Default: 4000)
## SMART_TOKEN_LIMIT - Smart token limit for OpenAI (Default: 8000)
## When using --gpt3only this needs to be set to 4000.
 FAST_TOKEN_LIMIT=3000
 SMART_TOKEN_LIMIT=3000
 
 ### EMBEDDINGS
## EMBEDDING_MODEL       - Model to use for creating embeddings
## EMBEDDING_TOKENIZER   - Tokenizer to use for chunking large inputs
## EMBEDDING_TOKEN_LIMIT - Chunk size limit for large inputs
 EMBEDDING_MODEL=text-embedding-ada-002
 EMBEDDING_TOKENIZER=cl100k_base
 EMBEDDING_TOKEN_LIMIT=8191

Same here; not sure if I was running GPT-3 only, though.

0xFlo · May 03 '23 23:05

I have been experiencing the same behavior since I updated to version 0.3.0.

IuliuNovac · May 04 '23 10:05

I also got this error on the latest stable branch, v0.3.0.

wongcheehong · May 04 '23 11:05

Same here on the latest version; I can't move forward with building.

eobulegend · May 04 '23 19:05

Same here

brandonaviram · May 04 '23 21:05

Same question

AtaraxiaRun · May 05 '23 06:05

I am new to this. I think I have the exact same issue; since it's the last request, I will post all of it here just in case I'm missing something. Thanks, everyone.

File "c:\Autogpt\Auto-GPT\autogpt_main_.py", line 5, in autogpt.cli.main() File "C:\Users\ROOT\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1130, in call return self.main(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\ROOT\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1055, in main rv = self.invoke(ctx) ^^^^^^^^^^^^^^^^ File "C:\Users\ROOT\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1635, in invoke rv = super().invoke(ctx) ^^^^^^^^^^^^^^^^^^^ File "C:\Users\ROOT\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\ROOT\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 760, in invoke return __callback(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\ROOT\AppData\Roaming\Python\Python311\site-packages\click\decorators.py", line 26, in new_func return f(get_current_context(), *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "c:\Autogpt\Auto-GPT\autogpt\cli.py", line 90, in main run_auto_gpt( File "c:\Autogpt\Auto-GPT\autogpt\main.py", line 186, in run_auto_gpt agent.start_interaction_loop() File "c:\Autogpt\Auto-GPT\autogpt\agent\agent.py", line 112, in start_interaction_loop assistant_reply = chat_with_ai( ^^^^^^^^^^^^^ File "c:\Autogpt\Auto-GPT\autogpt\llm\chat.py", line 244, in chat_with_ai assistant_reply = create_chat_completion( ^^^^^^^^^^^^^^^^^^^^^^^ File "c:\Autogpt\Auto-GPT\autogpt\llm\llm_utils.py", line 166, in create_chat_completion response = api_manager.create_chat_completion( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "c:\Autogpt\Auto-GPT\autogpt\llm\api_manager.py", line 55, in create_chat_completion response = openai.ChatCompletion.create( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\ROOT\AppData\Roaming\Python\Python311\site-packages\openai\api_resources\chat_completion.py", line 25, in create return super().create(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\ROOT\AppData\Roaming\Python\Python311\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create response, _, api_key = requestor.request( ^^^^^^^^^^^^^^^^^^ File "C:\Users\ROOT\AppData\Roaming\Python\Python311\site-packages\openai\api_requestor.py", line 226, in request resp, got_stream = self._interpret_response(result, stream) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\ROOT\AppData\Roaming\Python\Python311\site-packages\openai\api_requestor.py", line 619, in _interpret_response self._interpret_response_line( File "C:\Users\ROOT\AppData\Roaming\Python\Python311\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line raise self.handle_error_response( openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, you requested 4289 tokens (2521 in the messages, 1768 in the completion). Please reduce the length of the messages or completion.

TheGodMotherG · May 05 '23 06:05

Same problem with any branch (master, or stable 0.3.0/0.2.2).

trungthanh1288 · May 05 '23 08:05

I can't move my project forward with this... Same problem. Thanks

andrebauru · May 05 '23 11:05

I am currently working on a possible fix for this; in theory, I think it is caused by the total tokens in the request for the GPT-3 model. There is a 'send_token_limit' variable that currently subtracts 1000 to retain room for the response. I am testing 1500 to see if it still errors. I am shooting in the dark here, but I will let you all know if this resolves the issue or not.
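
The tweak under test is roughly this one-liner (a sketch; the variable lives in the chat flow, around autogpt/llm/chat.py, and the exact line varies by version):

# reserve more of the context window for the model's reply (sketch)
send_token_limit = token_limit - 1500  # currently: token_limit - 1000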

kryptobaseddev · May 05 '23 16:05

Hi guys, I have the same issue, and the number of tokens can be significantly higher. I have been working on a solution for hours... unfortunately without success so far.

openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 25424 tokens. Please reduce the length of the messages.

digitalized-one · May 05 '23 18:05

Same problem here since I upgraded to 0.3.0... why is the agent sending messages longer than 4000 tokens?

anybam · May 05 '23 18:05

It's a hard limit imposed by OpenAI.

anonhostpi · May 05 '23 18:05

Same issue here

chrischjh · May 06 '23 02:05

Same issue since I updated to 0.3.0.

cuidok · May 06 '23 03:05

+1

Knightmare6890 · May 06 '23 05:05

I have the same problem when I use LangChain's DB_chain to query a MySQL database.

This model's maximum context length is 4097 tokens, however you requested 4582 tokens (4326 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.

stephenzhao · May 06 '23 06:05

+1

raptor-bot · May 06 '23 15:05

+1

jdewind · May 06 '23 23:05

The same problem, but with a slightly different error message: openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 10549 tokens (10549 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

yuis-ice · May 07 '23 05:05

I tried to catch the exception. It works on some occasions, but it seems to cause other issues, or the program terminates since the response is None. In general, this is one of the biggest issues in AutoGPT currently. You basically can't use it, since it breaks down every two seconds depending on the task you have given it.
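
What I tried looks roughly like this (a simplified sketch around the call site, not the actual AutoGPT code; the retry policy is my own):

import openai

def chat_completion_with_trim(messages, model, max_retries=3):
    # on context overflow, drop the oldest non-system message and retry
    for _ in range(max_retries):
        try:
            return openai.ChatCompletion.create(model=model, messages=messages)
        except openai.error.InvalidRequestError:
            if len(messages) <= 2:
                raise  # nothing left to trim
            messages = [messages[0]] + messages[2:]
    return None  # and this None is exactly what breaks the callers downstream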

utopien1337 · May 07 '23 11:05

+1

pagloria · May 07 '23 12:05

Hi all, so I searched a bit, and basically what has to be done is the following, of course adapted to AutoGPT (see also https://blog.devgenius.io/how-to-get-around-openai-gpt-3-token-limits-b11583691b32):

import openai
from nltk.tokenize import word_tokenize  # nltk needs the 'punkt' tokenizer data downloaded

def break_up_file(tokens, chunk_size, overlap_size):
    # yield overlapping token chunks so context carries across chunk boundaries
    if len(tokens) <= chunk_size:
        yield tokens
    else:
        yield tokens[:chunk_size]
        yield from break_up_file(tokens[chunk_size - overlap_size:], chunk_size, overlap_size)

def break_up_file_to_chunks(filename, chunk_size=2000, overlap_size=100):
    with open(filename, 'r') as f:
        text = f.read()
    tokens = word_tokenize(text)
    return list(break_up_file(tokens, chunk_size, overlap_size))

def convert_to_detokenized_text(tokenized_text):
    prompt_text = " ".join(tokenized_text)
    prompt_text = prompt_text.replace(" 's", "'s")
    return prompt_text  # the original returned an undefined 'detokenized_text' name

filename = "/content/drive/MyDrive/Colab Notebooks/minutes/data/Round_22_Online_Kickoff_Meeting.txt"

prompt_response = []
chunks = break_up_file_to_chunks(filename)

for i, chunk in enumerate(chunks):
    prompt_request = "Summarize this meeting transcript: " + convert_to_detokenized_text(chunk)
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt_request,
        temperature=0.5,
        max_tokens=500,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
    )
    prompt_response.append(response["choices"][0]["text"].strip())

utopien1337 · May 07 '23 13:05

I found some interesting solutions for the issue... if you look outside AutoGPT, the issue is also well known:

  1. https://medium.com/@shweta-lodha/how-to-deal-with-openai-token-limit-issue-part-1-d0157c9e4d4e
  2. https://www.youtube.com/watch?v=_vetq4G0Gsc
  3. https://www.youtube.com/watch?v=Oj1GUJnJrWs
  4. https://www.youtube.com/watch?v=xkCzP4-YoNA

digitalized-one · May 07 '23 16:05

+1

GeorgeCodreanu · May 09 '23 08:05

+1

AlexSerbinov · May 09 '23 13:05