Don't incapacitate yourself! #1240
Conversation
@lfricken There are conflicts now
Force-pushed from 10e9a22 to 1073954
@nponeccop fixed
@lfricken CI fails
@nponeccop Fixed linting issues and checked tests
```diff
@@ -38,6 +38,9 @@ def get_prompt() -> str:
     prompt_generator.add_constraint(
         'Exclusively use the commands listed in double quotes e.g. "command name"'
     )
+    prompt_generator.add_constraint(
+        "Use subprocesses for commands that will not terminate within a few minutes"
+    )
```
It's correct. If the command would terminate within a few minutes (i.e. quickly), you can use the normal execute command. If it would take more than a few minutes, use a subprocess.
* subprocesses
* fix lint
* fix more lint
* fix merge
* fix merge again
Background
If Auto-GPT tries to host a server or perform another task that blocks the current process, it may get stuck when the command never terminates (a fairly common failure mode).
Changes
Adds the `Execute Shell Command Subprocess` command, which uses `Popen`. Auto-GPT now knows to use this when a command will take too long. When I ask it to host a web server or perform other tasks that are reasonable for a subprocess, it does so. The PID of the process is returned so Auto-GPT can kill it later if it wants to, or if the user asks. In my use cases it was smart enough to save the PIDs to a file with names so it could remember them later.

This will encourage it to use more CPU cores for tasks that could otherwise take a while, and help it avoid bottlenecking on anything but API calls. It is in theory capable of achieving this with existing commands, but in practice it doesn't. If it were that smart, a single command for running Python code would have sufficed, and it could have built its own instruction set. It seems to need these "available command" hints.
Documentation
It is documented as well as the other commands, and Auto-GPT figures it out.
Test Plan
Before and after this dev:
PR Quality Checklist
Additional Thoughts
Is Popen the best solution? It's just what ChatGPT (GPT-4) recommended when I asked it how to address this problem 😊