AutoGPT
Failure: command list_files and read_file returned too much output.
⚠️ Search for existing issues first ⚠️
- [X] I have searched the existing issues, and there is no existing issue for my problem
Which Operating System are you using?
Windows
Which version of Auto-GPT are you using?
Stable (branch)
GPT-3 or GPT-4?
GPT-3.5
Steps to reproduce 🕹
Have a bunch of files in the workspace folder, then ask AutoGPT to list or read them.
Current behavior 😯
Error: Attempted to access absolute path 'C:\Users\test\Desktop\Auto-GPT-0.3.0\autogpt\auto_gpt_workspace' in workspace 'C:\Users\test\Desktop\Auto-GPT-0.3.0\autogpt\auto_gpt_workspace'. NEXT ACTION: COMMAND = list_files ARGUMENTS = {'directory': 'C:\Users\test\Desktop\Auto-GPT-0.3.0\autogpt\auto_gpt_workspace'} SYSTEM: Failure: command list_files returned too much output. Do not execute this command again with the same arguments.
Error: Attempted to access absolute path 'C:\Users\rrayapaati\Desktop\Auto-GPT-0.3.0\autogpt\auto_gpt_workspace\smoketests_basic_qemu.yml' in workspace 'C:\Users\rrayapaati\Desktop\Auto-GPT-0.3.0\autogpt\auto_gpt_workspace'. NEXT ACTION: COMMAND = read_file ARGUMENTS = {'filename': 'C:\Users\rrayapaati\Desktop\Auto-GPT-0.3.0\autogpt\auto_gpt_workspace\smoketests_basic_qemu.yml'} SYSTEM: Failure: command read_file returned too much output. Do not execute this command again with the same arguments.
Expected behavior 🤔
Large output from files and file listings should be handled by splitting it into multiple segments instead of failing.
Your prompt 📝
# Paste your prompt here
Your Logs 📒
<insert your logs here>
Same issue. The JSON I want it to ingest is 16 KB, not huge.
Edit: I just tried it with a JSON file of 680 bytes; it still says it's too large.
Me too. I just let it read a Markdown file (test.md) with a few lines, and this error message appeared.
Is it possible to stream the output similarly to how Node.js handles large data?
Not a solution for the bug itself, but you can get around it by instructing it to use dir (Windows default) or ls (Linux default) instead of list_files when listing files.
Ran into this today with completely different code/commands; it's a hard-coded check that prevents the system from ingesting the data. I would have thought we could just as well add the output to memory and then chunk it back to the LLM using autogpt/processing/text.py, along the lines of the sketch below. Thoughts/ideas?
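A minimal sketch of the chunking idea, assuming the tiktoken tokenizer; the helper below is illustrative and is not the actual function in autogpt/processing/text.py:

```python
# Split an oversized command result into token-bounded pieces that can be
# stored in memory and summarized or replayed one at a time. Sketch only.
import tiktoken

def chunk_by_tokens(text: str, max_tokens: int = 3000, model: str = "gpt-3.5-turbo"):
    """Yield pieces of `text` that each fit within `max_tokens` tokens."""
    enc = tiktoken.encoding_for_model(model)
    tokens = enc.encode(text)
    for start in range(0, len(tokens), max_tokens):
        yield enc.decode(tokens[start:start + max_tokens])

# e.g. instead of failing outright:
# for piece in chunk_by_tokens(result):
#     memory.add(piece)  # hypothetical memory interface
```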
Haven't looked into the code but it seems that it's actually a hard-coded thing. Asking AutoGPT to read a small, local file got me the same error message. I just copied the contents of the file to a pastebin URL and asked AutoGPT to read it, this time no problem.
If I remember correctly, there was no chunking taking place where I looked (somewhere in the file utils, I believe), so this may be one of those cases where nobody expected that dumping a directory tree or a text file might require chunking. We will see what things look like once the re-arch is done.
Oh bummer. It was going to attempt to debug the code it had written for me for a React app. I was so excited :(
Unfortunately, AutoGPT is so close to being a professional ally, but it's just not there yet. Too many bugs, not smart enough. Almost. Sigh.
The error comes from agent.py:
if result_tlength + memory_tlength + 600 > cfg.fast_token_limit:
result = f"Failure: command {command_name} returned too much output. \
Do not execute this command again with the same arguments."
which refers to this .env variable in config.py:
self.fast_token_limit = int(os.getenv("FAST_TOKEN_LIMIT", 4000))
Setting that to a higher number reveals that it's an OpenAI limitation:
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, you requested 24497 tokens (2527 in the messages, 21970 in the completion). Please reduce the length of the messages or completion.
It seems the only thing you can do is instruct AutoGPT to buffer the file it sends to OpenAI, which will result in more API calls. This is covered in the OpenAI docs: What are tokens and how to count them? https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them
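To see why a given file blows past the limit, you can count its tokens with tiktoken, the tokenizer OpenAI's models use. A quick sketch (the file path is just a placeholder):

```python
# Count how many tokens a piece of text costs before sending it to the API.
import tiktoken

def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    enc = tiktoken.encoding_for_model(model)
    return len(enc.encode(text))

with open("some_file.txt") as f:  # placeholder file
    print(count_tokens(f.read()), "tokens (gpt-3.5-turbo allows ~4097 in total)")
```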
Thank you!
same issue... any update?
how do you "instruct AutoGPT to buffer the file to send to OpenAI"???
Thanks
What you probably mean is called chunking.
hi @Boostrix thanks for your help!
So... how do I instruct Auto-GPT to "chunk" the data so it doesn't return the error "Failure: command execute_python_file returned too much output. Do not execute this command again with the same arguments."?
Skim the source tree for occurrences of "ingesting" data (the text utils).
However, depending on the use case, it's possible to do, but probably not what you want to do.
It might make more sense to keep this logic agent-side and avoid the LLM bottleneck.
Whether that's possible depends on what you're trying to do.
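For reference, the chunked approach being asked about looks roughly like this when done by hand; a sketch only, assuming the pre-1.0 openai package (matching the openai.error traceback above), with an arbitrary chunk size and prompt:

```python
# Read a file in pieces, summarize each piece with its own API call, then
# work with the combined summaries instead of one oversized request.
import openai

def summarize_in_chunks(path: str, chunk_chars: int = 8000) -> str:
    with open(path) as f:
        text = f.read()
    summaries = []
    for i in range(0, len(text), chunk_chars):
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "Summarize the following text."},
                {"role": "user", "content": text[i:i + chunk_chars]},
            ],
        )
        summaries.append(resp.choices[0].message.content)
    return "\n".join(summaries)
```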
I'm trying to build a Python script that connects to the Binance API and does some trading...
Another problem I found is data persistence when using Docker. For example:
auto-gpt is trying to: "AUTHORISED COMMANDS LEFT: 8 Executing command 'apt-get update && apt-get install python3 python3-pip python3-venv' in working directory '/app/autogpt/auto_gpt_workspace'"
This task is taking so long, more than 15 minutes, and I'm not sure it's doing anything... BUT every time Auto-GPT installs or updates something inside the Docker container, the next time I run Auto-GPT everything is gone! How can I get persistent data in the container?
Thanks
That is not due to AutoGPT, it is how Docker works: containers are ephemeral unless you mount a volume. You should probably read up on Docker itself.
Also, this seems unrelated to the original issue - please open your own, separate issue. Thank you.
Thanks @Boostrix, you are so kind.
Same issue. Reading a .txt file, I got the error: SYSTEM: Failure: command read_file returned too much output. Do not execute this command again with the same arguments.
We probably need to use wrappers to optionally chunk up output from commands.
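One possible shape for such a wrapper, as a sketch; the decorator name, threshold, and list_files stand-in are illustrative, not AutoGPT's actual command API:

```python
# A decorator that splits a command's string output into pieces the agent can
# process one at a time, instead of refusing oversized results outright.
import functools
import os

def chunk_output(max_chars: int = 4000):
    def decorator(command):
        @functools.wraps(command)
        def wrapper(*args, **kwargs):
            result = command(*args, **kwargs)
            if isinstance(result, str) and len(result) > max_chars:
                return [result[i:i + max_chars] for i in range(0, len(result), max_chars)]
            return result
        return wrapper
    return decorator

@chunk_output(max_chars=4000)
def list_files(directory: str) -> str:  # stand-in for the real command
    return "\n".join(os.listdir(directory))
```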
Is there a recommended workaround for now? In my case, I need it to read a couple of files, produce a new file synthesizing what's in the first couple, and save it to the workspace. Then I (might) need it to read the file it produced again and refine it to generate yet another file. The files it'll be producing will be big enough to cause this error message, I'm pretty sure.
Our recent PR, #4652, addressed a token limit issue in the update_running_summary function by doing batch summarization. Do you still see the max token limit issue? Please share the call stack.
Having this exact issue, only trying to read files in a folder.
This is why SQL queries can be filtered and restricted to a certain number of matches, which might help us here too.
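The SQL analogy translated to a file listing, as a rough sketch; the limit/offset parameters are hypothetical and not something list_files currently accepts:

```python
# Return at most `limit` entries per call, with an `offset` so the agent can
# page through the rest on later calls, much like LIMIT/OFFSET in SQL.
import os

def list_files_paged(directory: str, limit: int = 50, offset: int = 0) -> list:
    entries = sorted(os.listdir(directory))
    return entries[offset:offset + limit]

# First page, then the next:
# list_files_paged("auto_gpt_workspace", limit=50, offset=0)
# list_files_paged("auto_gpt_workspace", limit=50, offset=50)
```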
Having the same issue here with a .txt file.
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity.
Same issue, any solution?
also have the same issue