Allow asking human assistance/feedback in continuous mode #150
Comments
We're going to need some prompt engineering to ask the AI about its confidence. Please suggest prompt options that cost fewer tokens. cc @Torantulino

Remark: some time ago I read a paper discussing AI control and compliance (I can't find that paper right now). I'm adding this comment just for context.
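Back to the prompt question: one low-token option is to fold a short self-rating instruction into the prompt the agent already receives, rather than spending a second model call on evaluation. A rough sketch (the suffix wording, the ASK_HUMAN marker, and the 1-5 scale are all illustrative, not Auto-GPT's actual prompt):

```python
# Hypothetical prompt suffix for a low-token confidence check.
# The marker and rating scale are illustrative, not Auto-GPT's own prompt.
CONFIDENCE_SUFFIX = (
    "Before executing your next command, rate your confidence that it is "
    "the correct step, on a scale of 1-5. If your rating is 3 or lower, "
    "reply with 'ASK_HUMAN: <your question>' instead of a command."
)

def build_prompt(task_prompt: str) -> str:
    """Append the confidence check to the agent's existing task prompt."""
    return f"{task_prompt}\n\n{CONFIDENCE_SUFFIX}"

def needs_human(model_reply: str) -> bool:
    """Detect whether the model chose to escalate to the user."""
    return model_reply.lstrip().startswith("ASK_HUMAN:")
```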
The best experience with reports is when they can read all the source materials and try to figure it out themselves, but still come and present an awesome, intelligent summary to me and want to discuss their plans to check whether they match my understanding. Typically we discuss until it's clear we're in agreement that this is a reasonable and valuable course of action.
I would love to collaborate on this and have been working on a framework for classifying the risk and type of AI tasks for proper delegation. The TACTIC framework provides a comprehensive approach to managing AI commands, covering a wide range of risks and approval levels. By implementing this tiered structure, we can maintain control over AI-driven processes while maximizing efficiency, security, and accountability.

- Tier 1: Transparent (low risk, read-only tasks)
- Tier 2: Assisted (low to medium risk, write-access tasks)
- Tier 3: Collaborative (medium risk, communication tasks)
- Tier 4: Transactional (medium to high risk, financial tasks)
- Tier 5: Intimate (high risk, sensitive tasks)
- Tier 6: Critical (extremely high risk, irreversible or high-impact tasks)

I think we need a hybrid approach to classifying commands that combines AI with human involvement for a more reliable solution: initial classification by GPT, followed by a human-in-the-loop review process, especially for tasks that fall under the higher-risk categories. This step ensures that the AI categorization is accurate and relevant, providing an additional layer of validation.
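As a rough sketch of how such a gate could look in practice, assuming the tier has already been assigned by an initial GPT classification (the names Tier, AUTO_APPROVE_UP_TO, and approve are all hypothetical):

```python
from enum import IntEnum

class Tier(IntEnum):
    TRANSPARENT = 1    # low risk, read-only tasks
    ASSISTED = 2       # low to medium risk, write-access tasks
    COLLABORATIVE = 3  # medium risk, communication tasks
    TRANSACTIONAL = 4  # medium to high risk, financial tasks
    INTIMATE = 5       # high risk, sensitive tasks
    CRITICAL = 6       # extremely high risk, irreversible tasks

# Hypothetical policy: commands above this tier always need human sign-off,
# no matter how confident the model's classification was.
AUTO_APPROVE_UP_TO = Tier.ASSISTED

def approve(command: str, gpt_assigned_tier: Tier) -> bool:
    """Gate a command on its GPT-assigned tier, with a human in the loop."""
    if gpt_assigned_tier <= AUTO_APPROVE_UP_TO:
        return True
    answer = input(f"[tier {gpt_assigned_tier.name}] run {command!r}? (y/n) ")
    return answer.strip().lower() == "y"
```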
See also: #3396 (comment)
One thing I think myself and a lot of others have encountered is wanting to "step in" to continuous mode, give some instructions, and turn it back on. The same goes for choosing some number of automated steps: being able to pick, say, 50, see something going wrong, give some guidance, and then hand back control.
See also: #1548

Thinking out loud: this might be easier than we think using the agent messaging API. We really only need some form of keyboard handling at the top level that messages the underlying (master) agent to give it instructions or change the number of automated steps, and then resumes the agent afterwards. Indeed, under the hood, agents need essentially the same functionality anyway so that parent agents can "guide" sub-agents; we could use that same mechanism for the top-level/outermost agent to let the human interactively control and guide it. Python's keyboard module probably has most of the building blocks in place to register an event handler that triggers this chain of events (see the sketch below).

Please take a look at PR #3083; hopefully this issue can then be closed or made more specific?
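A minimal sketch of that chain of events, assuming the third-party keyboard package (which requires root on Linux) and a hypothetical agent loop; none of these names come from the actual codebase:

```python
import queue
import keyboard  # third-party "keyboard" package; requires root on Linux

# Control messages destined for the top-level (master) agent.
control_queue: queue.Queue = queue.Queue()

def on_interrupt() -> None:
    # Runs on keyboard's listener thread, so only enqueue; never block here.
    control_queue.put("pause")

keyboard.add_hotkey("ctrl+shift+p", on_interrupt)

def agent_loop(max_steps: int) -> None:
    steps_remaining = max_steps
    while steps_remaining > 0:
        try:
            if control_queue.get_nowait() == "pause":
                guidance = input("Guidance for the agent (blank to resume): ")
                if guidance:
                    # In the real design this would travel over the same
                    # messaging API a parent agent uses to steer a sub-agent.
                    steps_remaining = max_steps  # e.g. also reset the budget
        except queue.Empty:
            pass
        # ... execute one autonomous agent step here ...
        steps_remaining -= 1
```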
|
IIRC, the original idea was to make this support the "outer agent", i.e. normally the user, but it could just as well be another agent. Is this now supported?
Hi guys, I am working on this request_assistance feature: github.com/jmikedupont2/ai-ticket
Unless I am mistaken, this got implemented a while ago? Please do join us on Discord to discuss this before working on it any longer.
I am working on an extension of this idea that goes much further; I was commenting here to mark this thread for later review.
Summary
I propose a new feature called "Semi-Active Mode," which enables the AI to run in a semi-automated manner, seeking user assistance when it encounters uncertainty, confusion, or ambiguity. This feature combines the benefits of Continuous Mode with the human-in-the-loop experience of Active Mode.
Background
Currently, there are two modes available:
- Continuous Mode: the AI runs fully autonomously, executing every action without user confirmation.
- Active Mode: the AI pauses and waits for the user to authorize each action before executing it.
To further enhance user engagement and provide a more flexible experience, I propose a new feature called "Semi-Active Mode."
Feature Description
In "Semi-Active Mode," the AI will:
This interaction pattern allows users to assist when needed while still benefiting from the AI's capabilities. It strikes a balance between full automation and active participation, fostering a collaborative environment between the user and the AI system.
Example Implementation
Here's an example implementation using LangChain's Human-As-A-Tool feature:
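A minimal sketch along those lines, assuming the 2023-era LangChain API where `load_tools(["human"])` registers the human-as-a-tool (exact imports vary between LangChain versions):

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0.0)

# "human" loads LangChain's human-as-a-tool: the agent may call it whenever
# it decides it needs clarification, which prints the question and blocks
# on stdin until the user answers.
tools = load_tools(["human"])

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

# If the task is ambiguous, the agent can route a question to the human
# tool instead of guessing, then continue with the answer it receives.
agent.run("Prepare the usual weekly report.")
```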
In this code, the AI agent seeks human assistance when it encounters uncertainty, allowing the user to provide guidance as needed.
Benefits
- Retains the speed and autonomy of Continuous Mode for steps the AI is confident about.
- Gives the user control and visibility at exactly the points where the AI is uncertain.
- Fosters a collaborative workflow between the user and the AI system.
Risks and Mitigations
This feature may slow down overall AI operation because it requires user input in certain situations. However, this trade-off is acceptable given the increased control and collaboration it provides.
Request for Comments
I would appreciate feedback from the community on this suggested feature. Please share your thoughts, suggestions, and any potential concerns you may have.