AutoGPT
shell command execution hangs AutoGPT forever
⚠️ Search for existing issues first ⚠️
- [X] I have searched the existing issues, and there is no existing issue for my problem
Which Operating System are you using?
Linux
GPT-3 or GPT-4?
GPT-3.5
Steps to reproduce 🕹
sudo python3 -m autogpt --continuous --gpt3only y
Current behavior 😯
`git rev-parse HEAD`: 89539d0cf169e6deb3bb475ab771e970462a5687
NEXT ACTION: COMMAND = execute_shell ARGUMENTS = {'command_line': 'cd /home/user/Auto-GPT/auto_gpt_workspace/twitterscraper && nano elon_tweets.json'}
Executing command 'cd /home/user/Auto-GPT/auto_gpt_workspace/twitterscraper && nano elon_tweets.json' in working directory '/home/user/Auto-GPT/auto_gpt_workspace'
Another example:
NEXT ACTION: COMMAND = execute_shell ARGUMENTS = {'command_line': 'ping gitlab.com'}
Executing command 'ping gitlab.com' in working directory '/home/user/Auto-GPT/auto_gpt_workspace'
Expected behavior 🤔
Skip execution of broken commands. Probably execute all shell commands detached with "&" (Linux).
Your prompt 📝
Your Logs 📒
Similar issue, nano and vim.
After doing some research I've found that there doesn't really seem to be a solution for detecting if a program is blocking or non-blocking, meaning that the only method for mitigating this in a simple fashion is to set a time-out, which seems rather unfortunate. Hopefully, I'm wrong and someone smarter than me knows how to do so.
SYSTEM: Command execute_shell returned: Error: [Errno 2] No such file or directory: '/home/appuser/autogpt/auto_gpt_workspace'
SYSTEM: Command execute_shell returned: Error: [Errno 2] No such file or directory: '/home/appuser/autogpt/auto_gpt_workspace'
This is a different issue, please find an issue that applies or create your own.
> After doing some research I've found that there doesn't really seem to be a solution for detecting if a program is blocking or non-blocking, meaning that the only method for mitigating this in a simple fashion is to set a time-out, which seems rather unfortunate. Hopefully, I'm wrong and someone smarter than me knows how to do so.
how about to detach process then collect its logs and files?
Same here, for example when running `python -m http.server 8000`. The AI doesn't recognise activities that should rather run in the background, so this would never lead to anything.
The culprit seems to be here:
https://github.com/Significant-Gravitas/Auto-GPT/blob/fdd79223b0c6132e1d7fc5127e9ca02fabaea9e5/autogpt/commands/execute_code.py#L113
`subprocess.run()` has a `timeout` parameter available, so it might just be a matter of adding it there as well as in the other two functions in that file, either hardcoded to something like `timeout=120` or made configurable via `.env`. It would be perfect if the AI could then override this timeout when it has the confidence to do so, e.g. raise the setting to 480 for a task known to be time-consuming.
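For illustration, a minimal sketch of that change, assuming a hypothetical `EXECUTE_SHELL_TIMEOUT` entry in `.env` (the name and the wiring are made up here, not the actual config keys):

```python
import os
import subprocess

# Hypothetical .env-backed setting; AutoGPT's real config plumbing would differ.
SHELL_TIMEOUT = int(os.getenv("EXECUTE_SHELL_TIMEOUT", "120"))

def execute_shell_with_timeout(command_line: str, cwd: str) -> str:
    """Run a shell command, but give up instead of hanging forever."""
    try:
        result = subprocess.run(
            command_line,
            shell=True,
            capture_output=True,
            cwd=cwd,
            timeout=SHELL_TIMEOUT,  # kills the child and raises TimeoutExpired
        )
        return f"STDOUT:\n{result.stdout.decode()}\nSTDERR:\n{result.stderr.decode()}"
    except subprocess.TimeoutExpired:
        return f"Error: command did not finish within {SHELL_TIMEOUT}s and was killed."
```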
A more complex solution could be to use `subprocess.Popen()` to run the process in the background and continue; the agent would then `do_nothing` for a while, and if the process has not produced any output it would have the option to kill it.
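Roughly like this (purely a sketch; `run_in_background` and the log file name are illustrative, not existing code):

```python
import os
import subprocess
import time

def run_in_background(command_line: str, cwd: str, idle_timeout: int = 60) -> str:
    """Start the command detached, wait a while, then kill it if it is
    still running but has produced no output at all."""
    log_path = os.path.join(cwd, "background_stdout.log")
    with open(log_path, "wb") as log:
        proc = subprocess.Popen(
            command_line, shell=True, cwd=cwd, stdout=log, stderr=subprocess.STDOUT
        )
        deadline = time.time() + idle_timeout
        while time.time() < deadline:
            if proc.poll() is not None:  # finished on its own
                return f"Process exited with code {proc.returncode}"
            time.sleep(1)

    if os.path.getsize(log_path) == 0:  # still running and completely silent
        proc.kill()
        return "Process produced no output within the timeout and was killed."
    return f"Process keeps running in the background; output is collected in {log_path}"
```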
The problem then is that computationally expensive operations will never complete, because they will always get killed.
This is called the halting problem, and we won't solve it here without a timeout. It may be worth opening a PR for it.
Update: the psutil stuff would work to recognize sleeping processes and those not doing much in terms of I/O: #4123
> After doing some research I've found that there doesn't really seem to be a solution for detecting if a program is blocking or non-blocking, meaning that the only method for mitigating this in a simple fashion is to set a time-out, which seems rather unfortunate. Hopefully, I'm wrong and someone smarter than me knows how to do so.
Note, the following won't solve the halting problem obviously, but we can look for blocking APIs and interrupt the child processes after a given timeout. More generally, adding a blacklist of interactive programs (nano, vim etc) would seem to suffice though: https://github.com/Significant-Gravitas/Auto-GPT/issues/1327#issuecomment-1529012402
So, while this won't magically solve the halting problem, if a sub-process is using certain blocking API calls (or even just has them listed as imports in its ELF/PE file), has been taking an excessive amount of time, and its resource utilization (CPU/RAM) isn't growing/varying over time (i.e. it's sleeping), it's probably a pretty safe candidate for being killed/restarted by the parent agent.
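Something along these lines, assuming psutil gets added as a dependency (the thresholds and the helper name are made up; `io_counters()` is Linux-specific):

```python
import psutil

def looks_stuck(pid: int, sample_seconds: float = 5.0) -> bool:
    """Heuristic: a child that is sleeping, burning no CPU and doing no I/O
    over the sampling window is a reasonable candidate for being killed."""
    try:
        proc = psutil.Process(pid)
        io_before = proc.io_counters()
        cpu = proc.cpu_percent(interval=sample_seconds)  # blocks while sampling
        io_after = proc.io_counters()
        sleeping = proc.status() in (psutil.STATUS_SLEEPING, psutil.STATUS_IDLE)
    except psutil.NoSuchProcess:
        return False  # already gone, nothing to kill

    no_cpu = cpu < 1.0
    no_io = (io_after.read_bytes == io_before.read_bytes
             and io_after.write_bytes == io_before.write_bytes)
    return sleeping and no_cpu and no_io
```

Note that a process blocked on a network request would look the same, so this should only feed into the timeout decision rather than replace it.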
https://github.com/Significant-Gravitas/Auto-GPT/issues/1327
> ran gedit (and it's not even running on a desktop OS)
That would be another potential heuristic: GUI apps can be identified via their DLL/DSO imports (think X11 for gedit). I suppose a preparation stage, like the one mentioned elsewhere, would be necessary to provide system-specific context (available tooling, OS/platform etc.). A final "validation" stage (#3743) might be needed to apply constraints like CLI tools not being GUI tools (obviously that won't work too well for hybrid binaries that support both modes).
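As a Linux-only illustration of that import-based check (it relies on `ldd` being available, only covers dynamically linked binaries, and the library list is a guess):

```python
import shlex
import shutil
import subprocess

# Shared libraries that strongly suggest a GUI program (X11/Wayland/GTK/Qt).
GUI_LIBRARIES = ("libX11", "libxcb", "libwayland-client", "libgtk", "libQt")

def looks_like_gui_binary(command_line: str) -> bool:
    """Guess whether a command launches a GUI app by inspecting the shared
    libraries its binary links against (think libX11/libgtk for gedit)."""
    binary = shutil.which(shlex.split(command_line)[0])
    if binary is None:
        return False
    try:
        ldd_output = subprocess.run(
            ["ldd", binary], capture_output=True, text=True, timeout=10
        ).stdout
    except (OSError, subprocess.TimeoutExpired):
        return False
    return any(lib in ldd_output for lib in GUI_LIBRARIES)
```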
> Similar issue, nano and vim.
We could simply add a CSV blacklist of commands that shall not be executed because they are known to be interactive. Given how frequently this issue has been raised here on the tracker, that would seem like a straightforward option to at least prevent people from getting stuck inside nano/vim.
More generally, there could also be a configurable preference to use command-line editors like sed/awk and perl when needed.
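A minimal version of that blacklist idea (the command set is a placeholder and could just as well come from a CSV value in `.env`):

```python
import shlex

# Programs known to be interactive and therefore guaranteed to hang the agent.
INTERACTIVE_COMMANDS = {"nano", "vim", "vi", "emacs", "less", "more", "top", "htop"}

def is_blacklisted(command_line: str) -> bool:
    """Reject a command if any of its segments starts with a known interactive
    program (handles one-liners like 'cd foo && nano bar')."""
    for segment in command_line.replace("||", "&&").split("&&"):
        tokens = shlex.split(segment.strip())
        if tokens and tokens[0] in INTERACTIVE_COMMANDS:
            return True
    return False
```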
> SYSTEM: Command execute_shell returned: Error: [Errno 2] No such file or directory: '/home/appuser/autogpt/auto_gpt_workspace'
Looks like you just need to add that folder.
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity.