Proposal for a new command to prompt the user / ask the user questions #1548
Comments
Regular ChatGPT can do that.
Regular ChatGPT wouldn't be able to accomplish the second goal using the first, the way I think Auto-GPT should work.
So currently Auto-GPT cannot communicate with the user? I came to the same point as well, and just thought there was some bug.
Is this being worked on? I think that would take Auto-GPT to the next level. If you see that Auto-GPT is going in the wrong direction, you could take corrective action that way.
I think this is a no-brainer to pursue, at least in some fashion. Its logical conclusion, far off, is a hierarchy of notification triggers routed to the necessary groups of operators, who would in a sense choose by committee how to resolve each problem in the general case so that the next user never faces it. Obviously that's a bit of a hypothetical at the moment, but it seems like the logical direction to head in to make the system do more of our work for us, better, and for longer without our intervention :-)
If it's internally using the agent API, a sub-agent could always notify its parent agent via the agent messaging API that it requires more info due to some issue (such as broken loops). In fact, given some comments here, it seems that using more agents that coordinate their work mutually would simplify the design quite a bit, because then it's the equivalent of a "process manager" with tasks that it has to manage.
I agree fully. If I weren't about to finally get to bed, I would probably take a stab at some strange implementation.
The neat thing really is the option to simply "kill" a task (process) and tell the parent agent that something didn't work out, so that it has to be restarted. As mentioned in another issue: if dependencies between tasks could be encoded, such situations would not be that problematic, because the agent could pursue tasks that don't depend on the cancelled task, i.e. it would be able to pursue multiple avenues in such situations.

In this sort of setup, the idea would be to treat an agent as a functional entity, not unlike a function in programming: it has some input (parameters/arguments) and a well-defined output (return value), with the option of throwing an exception (via the inter-agent API). This way, all tasks would be decomposed into simpler "functions" executed by sub-agents: either the sub-agent succeeds with a return value, or it fails and may need to trigger a sub-agent to fix itself, or it "breaks" and needs to throw an exception, informing the parent agent to adapt its plan/strategy. Also, sub-agents would never need to know anything about plans/goals higher up in the chain/hierarchy, not unlike a function call / stack frame, which isolates a function call from arguments on the stack that are not relevant to its own invocation.

Please help take a look at the following PR #3083; hopefully this issue can then be closed or made more specific?
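As a rough illustration of the "agent as function" idea described above, here is a minimal Python sketch. Every name in it (`SubAgent`, `AgentError`, `call_agent`) is hypothetical and not part of Auto-GPT's codebase:

```python
from dataclasses import dataclass, field

class AgentError(Exception):
    """What a sub-agent "throws" to tell its parent agent that the task broke."""

@dataclass
class SubAgent:
    """Hypothetical sub-agent treated like a function: input in, value out.
    It never sees the parent's goals, only its own task and arguments,
    much like a stack frame isolates a function call."""
    task: str
    arguments: dict = field(default_factory=dict)

    def run(self) -> str:
        # Placeholder for a real agent loop; this stub always "breaks"
        # so the exception path below is visible.
        raise AgentError(f"cannot complete {self.task!r} without more info")

def call_agent(task: str, arguments: dict) -> str:
    """Parent-side "function call": success returns a value; failure
    surfaces as an exception the parent can re-plan around."""
    try:
        return SubAgent(task, arguments).run()
    except AgentError:
        # Kill/cancel this task; the parent may restart it with better
        # inputs, or keep pursuing tasks that don't depend on it.
        raise

# Example: the parent catches the exception and adapts its plan.
try:
    call_agent("write_resume_section", {"section": "experience"})
except AgentError as err:
    print(f"parent adapts its plan: {err}")
```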
Hi, just a short update - thanks for taking the time to update your PR. It was discussed today on Discord by several devs and is being assigned for review right now! Given that there seem to be at least 5+ different PRs currently in the pipeline using similar but still different methods, it is likely that we're going to take parts from each of these PRs to ensure that the feature is well-aligned with the ongoing re-arch effort. This means you will end up in the credits automagically, but won't have to do any more work from now on. If there are remaining questions, someone will contact you; the PR is being reviewed by @gravelBridge.
Duplicate of #150
Summary 💡
Currently, all user input happens in the initial setup phase, after which the AI works completely autonomously. While this is fine for some tasks, other tasks would benefit from the AI being able to ask the user for clarification or input.
Examples 🌈
For example, if you were to ask Auto-GPT to write you a resume, you might want to do something like:
```
Enter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously
Enter nothing to load defaults, enter nothing when finished.
Goal 1: Ask me questions as to what info you need from me so you can write my resume, until you have enough info or I say you have enough info.
Goal 2: Write a resume as a word doc.
Goal 3: When all goals are complete shutdown.
Goal 4:
Using memory of type: LocalCache
RESUME-GPT THOUGHTS: I need more information about what kind of resume you are looking for. Could you tell me a little about your experience as a software developer? What kind of job are you applying for and what are the requirements? What are your main areas of expertise? Do you have any specific accomplishments you would like to highlight?
REASONING: I need more specific details to create a tailored resume.
PLAN:
- Ask user more specific questions
- Determine job requirements
- Highlight user's areas of expertise
CRITICISM: I may need to ask follow-up questions to clarify some details.
Attempting to fix JSON by finding outermost brackets
Apparently json was fixed.
NEXT ACTION: COMMAND = do_nothing ARGUMENTS = {}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for Resume-GPT...
Input: y
```
In this case, Auto-GPT would not be able to accomplish Goal 1: while it is able to ask the user questions, it currently has no way of taking in the user's answers in order to accomplish Goal 2.
Expected behavior:
A new `prompt_user` command that first outputs an AI-generated prompt and then waits for the user's response.
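As a minimal sketch of what such a command could look like (the function name, prompt strings, and return format here are assumptions based on this proposal, not Auto-GPT's actual command API):

```python
# Hypothetical sketch of the proposed prompt_user command; the name and
# return format are assumptions, not part of Auto-GPT's actual API.

def prompt_user(question: str) -> str:
    """Show the AI-generated question and block until the user answers."""
    print(f"AGENT ASKS: {question}")
    answer = input("Your answer: ").strip()
    # The returned string becomes the command result the agent sees on its
    # next planning step, closing the loop between question and answer.
    return f"The user answered: {answer}"
```

The agent could then invoke it like any other command in its NEXT ACTION line, e.g. `COMMAND = prompt_user ARGUMENTS = {"question": "What kind of job are you applying for?"}`, and use the returned answer in its next planning step.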
Motivation 🔦
Auto-GPT is currently incapable of many tasks that can't be done without full context. This change could also allow less specific prompts in the future, with Auto-GPT itself raising the quality of human prompts by asking for the context it needs.