
Add settings for custom base url #2594

Merged · 35 commits · Jun 10, 2023
Conversation

@DGdev91 (Contributor) commented Apr 19, 2023

Making the OpenAI base URL and embedding dimension configurable; these are useful for integrating AutoGPT with other models, like LLaMA.

Background

This makes AutoGPT capable of connecting to custom OpenAI-like APIs such as [keldenl/gpt-llama.cpp](https://github.com/keldenl/gpt-llama.cpp), and of using other models, like LLaMA and its derivatives.

see also #25 #567 #2158

Changes

Added OPENAI_API_BASE_URL and EMBED_DIM to .env.template and loaded them in config.py, making sure OPENAI_API_BASE_URL is ignored if USE_AZURE is True.

Also modified the files in autogpt/memory to use the value in EMBED_DIM instead of the hardcoded 1536 (which is still the default).

UPDATE: the EMBED_DIM setting isn't needed anymore.
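As a rough sketch of the change described above (this is not the actual AutoGPT code; the class and attribute names here are illustrative, only the env var names come from the PR):

```python
import os

# Hedged sketch of the config.py change described in this PR.
# Only OPENAI_API_BASE_URL, EMBED_DIM, and USE_AZURE are real env vars
# from the PR; everything else is illustrative.
class Config:
    def __init__(self):
        self.use_azure = os.getenv("USE_AZURE", "False") == "True"
        if self.use_azure:
            # The PR ignores OPENAI_API_BASE_URL when USE_AZURE is True,
            # since Azure deployments configure their own endpoint.
            self.openai_api_base = None
        else:
            self.openai_api_base = os.getenv(
                "OPENAI_API_BASE_URL", "https://api.openai.com/v1"
            )
        # EMBED_DIM falls back to 1536, the size of OpenAI's
        # ada-002 embeddings, which the memory backends previously hardcoded.
        self.embed_dim = int(os.getenv("EMBED_DIM", "1536"))
```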

Documentation

I added an explanation of what these new settings do in the .env.template file, following the style of the comments on the other settings.

Test Plan

Tested it by running gpt-llama.cpp on my machine and setting OPENAI_API_BASE_URL to its API URL in my .env file.
I used Vicuna 13B, so I also set EMBED_DIM to 5120.
For this test, I also set OPENAI_API_KEY to the model's path (a "hack" used by gpt-llama.cpp to obtain the model's path).
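For reference, a .env for such a test might look roughly like this (the port and model path are placeholders, not gpt-llama.cpp defaults):

```ini
# Illustrative values only; adjust to your gpt-llama.cpp setup
OPENAI_API_BASE_URL=http://localhost:8000/v1
# gpt-llama.cpp's hack: the "API key" carries the local model path
OPENAI_API_KEY=/models/vicuna-13b/ggml-model.bin
# Vicuna 13B uses 5120-dimensional embeddings
EMBED_DIM=5120
```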

PR Quality Checklist

    • My pull request is atomic and focuses on a single change.
    • I have thoroughly tested my changes with multiple different prompts.
    • I have considered potential risks and mitigations for my changes.
    • I have documented my changes clearly and comprehensively.
    • I have not snuck in any "extra" small tweaks or changes

@keldenl left a comment

+1, this is good for flexibility – LGTM

@MarkSchmidty

LGTM

@Sorann753

A name like LLM_API_BASE_URL instead of OPENAI_API_BASE_URL might be more fitting, since it allows us to not always use OpenAI's API.

@MarkSchmidty

It's still using the OpenAI API, just not their endpoint, even if the model behind it isn't an OpenAI model.

@cryptocake

LGTM 👍

khalek1414 previously approved these changes Apr 23, 2023
keldenl previously approved these changes Apr 24, 2023
@keldenl left a comment

Reapproving it because this still LGTM and is awesome

Outdated review comments (resolved) on: .env.template, autogpt/config/config.py, autogpt/memory/local.py, autogpt/memory/redismem.py
ntindle self-assigned this Apr 24, 2023
codecov bot commented Apr 24, 2023

Codecov Report

Patch coverage: 60.00% and project coverage change: -8.24 ⚠️

Comparison is base (f8dfedf) 49.65% compared to head (96e7650) 41.41%.

❗ Current head 96e7650 differs from pull request most recent head b7defd2. Consider uploading reports for the commit b7defd2 to get more accurate results

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #2594      +/-   ##
==========================================
- Coverage   49.65%   41.41%   -8.24%     
==========================================
  Files          64       63       -1     
  Lines        3021     3011      -10     
  Branches      505      495      -10     
==========================================
- Hits         1500     1247     -253     
- Misses       1401     1698     +297     
+ Partials      120       66      -54     
| Impacted Files | Coverage Δ |
|---|---|
| autogpt/memory/milvus.py | 3.38% <ø> (ø) |
| autogpt/memory/pinecone.py | 28.57% <0.00%> (ø) |
| autogpt/config/config.py | 74.02% <33.33%> (-2.14%) ⬇️ |
| autogpt/memory/redismem.py | 31.34% <66.66%> (+2.11%) ⬆️ |
| autogpt/memory/local.py | 96.07% <100.00%> (+0.16%) ⬆️ |

... and 17 files with indirect coverage changes


DGdev91 dismissed stale reviews from keldenl and khalek1414 via b7defd2 April 24, 2023 14:04
cryptocake previously approved these changes Apr 24, 2023
@cryptocake left a comment

Proposed changes look good. Embedding dimensions should not be hardcoded; they need to be configurable if Auto-GPT ever wants to support local LLMs.

Outdated review comment (resolved) on autogpt/memory/local.py
github-actions bot added the conflicts label (automatically applied to PRs with merge conflicts) Apr 25, 2023
github-actions bot (Contributor)

This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.

@Pwuts (Member) commented Jun 7, 2023

Personal note: sorry it has been taking so long. We're swamped in a huge re-arch and a ton of PRs and we're only now figuring out a more scalable team workflow. Thanks for staying with us :)

vercel bot commented Jun 7, 2023

Deployment failed with the following error:

Resource is limited - try again in 2 hours (more than 100, code: "api-deployments-free-per-day").

@DGdev91 (Contributor, Author) commented Jun 7, 2023

Pending Maintainer Discussion

@Pwuts just approved; if it's OK for you too, it can be merged.

Pwuts self-assigned this Jun 9, 2023
Pwuts dismissed ntindle's stale review June 9, 2023 16:04

> @Pwuts go for merging the change
>
> ~ @ntindle in #maintainers-chat

Auto-GPT-Bot (Contributor)
You changed AutoGPT's behaviour. The cassettes have been updated and will be merged to the submodule when this Pull Request gets merged.

@Pwuts (Member) commented Jun 9, 2023

@merwanehamadi I wouldn't know how this PR changes behavior. Can you figure out why it's giving this comment?

@waynehamadi (Contributor)

@Pwuts if the PR changed the behaviour once, it will currently keep applying the behaviour label. So this PR most likely changed the behaviour at some point in the past.
Options:

  • remove the behaviour label change
  • fix this label: this is tricky to do, since we need to identify whether the last cassette in the cassette file is being used

vercel bot temporarily deployed to Preview June 9, 2023 23:07 (Inactive)
Pwuts merged commit 6ff8478 into Significant-Gravitas:master Jun 10, 2023
Pwuts added this to the v0.4.1 Release milestone Jun 11, 2023
@Pwuts (Member) commented Jul 9, 2023

We're aware that this functionality was broken in #4803 and are working to fix it.
