
Make prompt parameters configurable #3375

Merged
merged 46 commits into Significant-Gravitas:master from DGdev91:configurable_prompts
May 17, 2023

Conversation

DGdev91
Contributor

@DGdev91 DGdev91 commented Apr 27, 2023

Loading prompt constraints, resources, and performance evaluations from a YAML file (default: prompt_settings.yaml).
The file can be set from .env (PROMPT_SETTINGS_FILE) or on the command line (--prompt-settings or -P).

potentially resolves #3954
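For illustration, a settings file in the shape described above might look like the following. This is a sketch: the key names and example entries are assumptions based on this description, not a copy of the merged prompt_settings.yaml.

```yaml
# Hypothetical prompt_settings.yaml; key names and values are illustrative.
constraints:
- "~4000 word limit for short term memory."
- "No user assistance."
resources:
- "Internet access for searches and information gathering."
performance_evaluations:
- "Continuously review and analyze your actions to perform to the best of your abilities."
```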

Background

The main reason for proposing these changes is that they can help with different LLM models; we talked about that in #25, #567 and #2158.
Those models don't handle the prompts in the same way as GPT-3.5/GPT-4, and they often get confused. This way it becomes easy to create and share prompts made specifically for them.

Also, it can be useful for models made for different languages, for example https://github.com/FreedomIntelligence/LLMZoo

...or just for people who want more control over what AutoGPT can do without having to modify the code.

Changes

Moved the hardcoded default prompt constraints, resources, and performance evaluations from prompt.py to the new file prompt_settings.yaml, which is used as the default file.

Added the new configuration variable PROMPT_SETTINGS_FILE in .env.template and modified config.py to handle it.

Added the new file autogpt/config/prompt_config.py, which contains the PromptConfig class; it is initialized with the file path and holds the data from the configuration file.


Modified prompt.py to use the values from the PromptConfig instead of hardcoded data.

Modified cli.py, main.py and configurator.py to handle the new --prompt-settings / -P command-line arguments.
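As a rough idea of what a loader like the PromptConfig class described above could look like, here is a minimal sketch. The attribute names, the validation, and the section keys are assumptions for illustration; the real implementation is in autogpt/config/prompt_config.py.

```python
# Sketch of a PromptConfig-style loader; names are illustrative,
# not copied from autogpt/config/prompt_config.py.
import yaml


class PromptConfig:
    def __init__(self, config_file: str) -> None:
        with open(config_file, encoding="utf-8") as f:
            data = yaml.safe_load(f)
        # Basic validation: all three sections must be present as lists.
        for key in ("constraints", "resources", "performance_evaluations"):
            if not isinstance(data.get(key), list):
                raise ValueError(
                    f"{config_file}: missing or invalid '{key}' section"
                )
        self.constraints = data["constraints"]
        self.resources = data["resources"]
        self.performance_evaluations = data["performance_evaluations"]
```

With a loader like this, prompt.py can build the prompt from `config.constraints` and friends instead of hardcoded lists.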

Documentation

The new .env variable PROMPT_SETTINGS_FILE is described in .env.template, while the new --prompt-settings/-P command-line args are described both in cli.py and in usage.md.
I followed the same policy used for the ai_settings.yaml file.

Test Plan

  • Start AutoGPT without modifying any configuration; it should work just as before.
  • Start AutoGPT with --prompt-settings (file) (e.g. python -m autogpt -P prompt_settings_ex.yaml), where the file doesn't exist or isn't valid. AutoGPT should give a validation error and stop.
  • Start AutoGPT after setting PROMPT_SETTINGS_FILE=(file), where the file doesn't exist or isn't valid. AutoGPT should give a validation error and stop.
  • Copy the prompt_settings.yaml file and change it a bit while keeping it valid. Run AutoGPT normally; it should still run as expected.
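The failure cases in the test plan boil down to a check along these lines. This is only a sketch of the expected behavior; the function name is hypothetical and the real checks live in AutoGPT's config validation.

```python
# Hypothetical validation step; not AutoGPT's actual code.
import os
import sys


def check_prompt_settings(path: str) -> None:
    """Exit with a validation error if the prompt settings file is unusable.

    Mirrors the test plan: a missing or invalid file should stop AutoGPT
    with a validation error instead of silently falling back.
    """
    if not os.path.isfile(path):
        sys.exit(f"Validation error: prompt settings file '{path}' does not exist")
    # Further schema checks (valid YAML, required sections present) would go here.
```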

To check whether the prompts had actually changed, I also used these changes in my fork (see #2594) while connecting to https://github.com/keldenl/gpt-llama.cpp.
I know it isn't officially supported, but it's still a good and quick way to see what's going on, since the web service prints the prompt to standard output.

PR Quality Checklist

    • My pull request is atomic and focuses on a single change.
    • I have thoroughly tested my changes with multiple different prompts.
    • I have considered potential risks and mitigations for my changes.
    • I have documented my changes clearly and comprehensively.
    • I have not snuck in any "extra" small tweaks or changes.

DGdev91 and others added 8 commits April 19, 2023 23:00
Making the openai base url and embedding dimension configurable, these are useful to integrate AutoGPT with other models, like LLaMA
Loading prompt constraints, resources and performance evaluation from a yaml file (default prompt_settings.yaml)
Can be set from .env (PROMPT_SETTINGS_FILE) or by commandline (--prompt-settings or -P)
@vercel
vercel bot commented Apr 27, 2023

1 Ignored Deployment: docs (updated May 17, 2023 5:07pm UTC)

@github-actions github-actions bot added the conflicts Automatically applied to PRs with merge conflicts label Apr 27, 2023
@github-actions
Contributor

This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.

@github-actions github-actions bot removed the conflicts Automatically applied to PRs with merge conflicts label Apr 27, 2023
@github-actions
Contributor

Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.

@DGdev91
Contributor Author

DGdev91 commented May 16, 2023

Would this also handle the current issue where the GPT4 default fails for people who only have access to GPT3.5 (e.g. #4229) ? Or more generally: how does this deal with multiple LLMs as part of a single profile ?

It's not clear to me what you're referring to, but it doesn't seem related.

@Boostrix
Contributor

Sorry, you're right; I meant to respond in the issue where someone is working on abstracting out the OpenAI API.

@Boostrix Boostrix requested a review from Pwuts May 16, 2023 13:33
@github-actions
Contributor

This PR exceeds the recommended size of 200 lines. Please make sure you are NOT addressing multiple issues with one PR. Note this PR might be rejected due to its size

@lc0rp
Contributor

lc0rp commented May 17, 2023

@DGdev91 if you edit the description and reference 3954 it'll link them and transition the issue.

Adding "potentially resolves #3954" should do it.

@DGdev91
Contributor Author

DGdev91 commented May 17, 2023

@DGdev91 if you edit the description and reference 3954 it'll link them and transition the issue.

Adding "potentially resolves #3954" should do it.

K, done

@Wladastic
Contributor

Please update the branch.
Otherwise looks very good! :)

@DGdev91
Contributor Author

DGdev91 commented May 17, 2023

Please update the branch. Otherwise looks very good! :)

Done


@lc0rp lc0rp added the ready-for-maintainer Catalyst has decided this PR looks ready for a maintainer to merge label May 17, 2023

@k-boikov k-boikov merged commit 42a5a0c into Significant-Gravitas:master May 17, 2023
@waynehamadi
Contributor

NICE thanks @DGdev91 BIG BIG BIG are you on discord ? Join us if not.


@katmai

katmai commented May 17, 2023

good change

@katmai

katmai commented May 17, 2023

this doesn't seem to cover the hardcoded prompts in autogpt/config/ai_config.py

@DGdev91 DGdev91 deleted the configurable_prompts branch May 17, 2023 23:29
@DGdev91
Contributor Author

DGdev91 commented May 17, 2023

NICE thanks @DGdev91 BIG BIG BIG are you on discord ? Join us if not.

Yes, I'm already there: DGdev91#8061

this doesn't seem to cover the hardcoded prompts in autogpt/config/ai_config.py

True. And there's also still some hardcoded stuff in generator.py.
Also, I don't really like autogpt/prompts/default_prompts.py; it could be included too.

But I wanted to keep this PR simple, and eventually add other stuff to the new YAML config file later.

@katmai

katmai commented May 18, 2023

NICE thanks @DGdev91 BIG BIG BIG are you on discord ? Join us if not.

Yes, i'm already there. DGdev91#8061

this doesn't seem to cover the hardcoded prompts in autogpt/config/ai_config.py

True. And there's also still some hardcoded stuff in generator.py. Also, i don't really like the autogpt/prompts/default_prompts.py, it can be included too

But i wanted to keep this PR simple, and eventually add other stuff to the new yaml config file later

I have found another prompt that is hardcoded:

autogpt/memory_management/summary_memory.py

Thank you for doing this; it's really great to have the prompts configurable. All of them.

ppetermann pushed a commit to ppetermann/Auto-GPT that referenced this pull request May 22, 2023
@ntindle ntindle added this to the v0.4.0 Release milestone May 25, 2023
Labels
function: prompt generation · prompt issue · ready-for-maintainer · size/xl
Projects
Archived in project
Development

Successfully merging this pull request may close these issues.

Prompt Profiles
10 participants