Fixed #4229 #4278
Conversation
Codecov Report
Patch coverage:
Additional details and impacted files

@@             Coverage Diff              @@
##           master    #4278       +/-   ##
===========================================
+ Coverage   63.26%   63.79%     +0.53%
===========================================
  Files          74       74
  Lines        3427     3447        +20
  Branches      504      507         +3
===========================================
+ Hits         2168     2199        +31
+ Misses       1103     1079        -24
- Partials      156      169        +13

☔ View full report in Codecov by Sentry.
Deployment failed with the following error:
Co-authored-by: k-boikov <[email protected]>
Resolves: #4229
Background
The defaults need to be adapted for users without GPT-4 access, or the corresponding functions won't work. For those users, the default model should be cfg.fast_llm_model, not cfg.smart_llm_model.
Changes
Added a check_model function in the configurator that sends a request to the OpenAI API using create_chat_completion with the provided model. If an InvalidRequestError is raised, it returns the hard-coded fallback "gpt-3.5-turbo". A sketch is shown below.
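A minimal sketch of the new helper, assuming the pre-1.0 openai package and that create_chat_completion is importable from the project's LLM utilities; the import path, probe message, and function signature here are assumptions, not verbatim from this PR:

```python
import openai

from autogpt.llm_utils import create_chat_completion  # assumed import path


def check_model(model_name: str) -> str:
    """Return `model_name` if the API key can use it, otherwise fall back
    to "gpt-3.5-turbo" (e.g. for accounts without GPT-4 access)."""
    try:
        # A single one-message request is enough to verify model access.
        create_chat_completion(
            messages=[{"role": "user", "content": "Hello"}],
            model=model_name,
        )
        return model_name
    except openai.error.InvalidRequestError:
        # The account cannot use the requested model; use the fallback.
        return "gpt-3.5-turbo"
```

With this in place, the configurator can run something like cfg.smart_llm_model = check_model(cfg.smart_llm_model), so users without GPT-4 access transparently fall back to "gpt-3.5-turbo" instead of hitting errors later.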
Documentation
Comments added to the function and test.
Test Plan
- Checks that the function returns the original model argument when no error is raised.
- Checks that the function returns "gpt-3.5-turbo" when InvalidRequestError is raised.

A sketch of both checks follows below.
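A hedged sketch of the two checks using pytest-style tests and unittest.mock; the patch target, import path, and error arguments are assumptions:

```python
from unittest.mock import patch

import openai

from autogpt.configurator import check_model  # assumed import path


def test_check_model_returns_original_model():
    # If the probe request succeeds, the original model name comes back.
    with patch("autogpt.configurator.create_chat_completion"):
        assert check_model("gpt-4") == "gpt-4"


def test_check_model_falls_back_on_invalid_request():
    # If the API rejects the model, the hard-coded fallback is returned.
    with patch(
        "autogpt.configurator.create_chat_completion",
        side_effect=openai.error.InvalidRequestError(
            "model not found", param="model"
        ),
    ):
        assert check_model("gpt-4") == "gpt-3.5-turbo"
```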
PR Quality Checklist
I have run `black .` and `isort .` against my code to ensure it passes our linter.