refactor(api): remove deprecated endpoints #621

Merged · 1 commit · Jan 12, 2024
2 changes: 1 addition & 1 deletion .stats.yml
@@ -1 +1 @@
-configured_endpoints: 57
+configured_endpoints: 51
4 changes: 2 additions & 2 deletions README.md
@@ -272,8 +272,8 @@ a subclass of `APIError` will be thrown:
 <!-- prettier-ignore -->
 ```ts
 async function main() {
-  const fineTune = await openai.fineTunes
-    .create({ training_file: 'file-XGinujblHPwGLSztz8cPS8XY' })
+  const job = await openai.fineTuning.jobs
+    .create({ model: 'gpt-3.5-turbo', training_file: 'file-abc123' })
     .catch((err) => {
       if (err instanceof OpenAI.APIError) {
         console.log(err.status); // 400
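For context, the updated README snippet uses the `fineTuning.jobs` resource that replaces the removed `fineTunes` resource. A minimal, self-contained sketch of the migrated call (client construction and the rethrow behaviour are assumptions on top of the README fragment; the model and file ID are the placeholders shown in the diff):

```ts
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function main() {
  // Previously: openai.fineTunes.create({ training_file: ... })
  // Now: the /fine_tuning/jobs endpoints exposed under openai.fineTuning.jobs
  const job = await openai.fineTuning.jobs
    .create({ model: 'gpt-3.5-turbo', training_file: 'file-abc123' })
    .catch((err) => {
      if (err instanceof OpenAI.APIError) {
        console.log(err.status); // e.g. 400
        console.log(err.name); // e.g. BadRequestError
      }
      throw err;
    });
  console.log(job.id);
}

main();
```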
26 changes: 0 additions & 26 deletions api.md
@@ -48,16 +48,6 @@ Methods:
 
 - <code title="post /chat/completions">client.chat.completions.<a href="./src/resources/chat/completions.ts">create</a>({ ...params }) -> ChatCompletion</code>
 
-# Edits
-
-Types:
-
-- <code><a href="./src/resources/edits.ts">Edit</a></code>
-
-Methods:
-
-- <code title="post /edits">client.edits.<a href="./src/resources/edits.ts">create</a>({ ...params }) -> Edit</code>
-
 # Embeddings
 
 Types:
@@ -169,22 +159,6 @@ Methods:
 - <code title="post /fine_tuning/jobs/{fine_tuning_job_id}/cancel">client.fineTuning.jobs.<a href="./src/resources/fine-tuning/jobs.ts">cancel</a>(fineTuningJobId) -> FineTuningJob</code>
 - <code title="get /fine_tuning/jobs/{fine_tuning_job_id}/events">client.fineTuning.jobs.<a href="./src/resources/fine-tuning/jobs.ts">listEvents</a>(fineTuningJobId, { ...params }) -> FineTuningJobEventsPage</code>
 
-# FineTunes
-
-Types:
-
-- <code><a href="./src/resources/fine-tunes.ts">FineTune</a></code>
-- <code><a href="./src/resources/fine-tunes.ts">FineTuneEvent</a></code>
-- <code><a href="./src/resources/fine-tunes.ts">FineTuneEventsListResponse</a></code>
-
-Methods:
-
-- <code title="post /fine-tunes">client.fineTunes.<a href="./src/resources/fine-tunes.ts">create</a>({ ...params }) -> FineTune</code>
-- <code title="get /fine-tunes/{fine_tune_id}">client.fineTunes.<a href="./src/resources/fine-tunes.ts">retrieve</a>(fineTuneId) -> FineTune</code>
-- <code title="get /fine-tunes">client.fineTunes.<a href="./src/resources/fine-tunes.ts">list</a>() -> FineTunesPage</code>
-- <code title="post /fine-tunes/{fine_tune_id}/cancel">client.fineTunes.<a href="./src/resources/fine-tunes.ts">cancel</a>(fineTuneId) -> FineTune</code>
-- <code title="get /fine-tunes/{fine_tune_id}/events">client.fineTunes.<a href="./src/resources/fine-tunes.ts">listEvents</a>(fineTuneId, { ...params }) -> FineTuneEventsListResponse</code>
-
 # Beta
 
 ## Chat
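The removed `Edits` and `FineTunes` sections mirror the upstream deprecation of the `/edits` and `/fine-tunes` endpoints; the deprecation notices point to chat completions and `fine_tuning.jobs` as the replacements. A hedged sketch of how an edits-style call might be expressed with chat completions after this change (the model choice and system prompt are illustrative, not part of this PR):

```ts
import OpenAI from 'openai';

const client = new OpenAI();

// The standalone /edits endpoint no longer exists in the SDK; an
// instruction-style edit can be phrased as a chat completion instead.
async function fixSpelling(text: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: 'gpt-3.5-turbo', // illustrative replacement model
    messages: [
      { role: 'system', content: 'Fix the spelling mistakes in the user text.' },
      { role: 'user', content: text },
    ],
  });
  return completion.choices[0].message.content ?? '';
}
```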
16 changes: 0 additions & 16 deletions src/index.ts
@@ -142,15 +142,13 @@ export class OpenAI extends Core.APIClient {
 
   completions: API.Completions = new API.Completions(this);
   chat: API.Chat = new API.Chat(this);
-  edits: API.Edits = new API.Edits(this);
   embeddings: API.Embeddings = new API.Embeddings(this);
   files: API.Files = new API.Files(this);
   images: API.Images = new API.Images(this);
   audio: API.Audio = new API.Audio(this);
   moderations: API.Moderations = new API.Moderations(this);
   models: API.Models = new API.Models(this);
   fineTuning: API.FineTuning = new API.FineTuning(this);
-  fineTunes: API.FineTunes = new API.FineTunes(this);
   beta: API.Beta = new API.Beta(this);
 
   protected override defaultQuery(): Core.DefaultQuery | undefined {
@@ -251,10 +249,6 @@ export namespace OpenAI {
   export import ChatCompletionCreateParamsNonStreaming = API.ChatCompletionCreateParamsNonStreaming;
   export import ChatCompletionCreateParamsStreaming = API.ChatCompletionCreateParamsStreaming;
 
-  export import Edits = API.Edits;
-  export import Edit = API.Edit;
-  export import EditCreateParams = API.EditCreateParams;
-
   export import Embeddings = API.Embeddings;
   export import CreateEmbeddingResponse = API.CreateEmbeddingResponse;
   export import Embedding = API.Embedding;
@@ -289,16 +283,6 @@
 
   export import FineTuning = API.FineTuning;
 
-  export import FineTunes = API.FineTunes;
-  export import FineTune = API.FineTune;
-  export import FineTuneEvent = API.FineTuneEvent;
-  export import FineTuneEventsListResponse = API.FineTuneEventsListResponse;
-  export import FineTunesPage = API.FineTunesPage;
-  export import FineTuneCreateParams = API.FineTuneCreateParams;
-  export import FineTuneListEventsParams = API.FineTuneListEventsParams;
-  export import FineTuneListEventsParamsNonStreaming = API.FineTuneListEventsParamsNonStreaming;
-  export import FineTuneListEventsParamsStreaming = API.FineTuneListEventsParamsStreaming;
-
   export import Beta = API.Beta;
 
   export import FunctionDefinition = API.FunctionDefinition;
4 changes: 2 additions & 2 deletions src/resources/chat/completions.ts
@@ -594,7 +594,7 @@ export interface ChatCompletionTool {
  * will not call a function and instead generates a message. `auto` means the model
  * can pick between generating a message or calling a function. Specifying a
  * particular function via
- * `{"type: "function", "function": {"name": "my_function"}}` forces the model to
+ * `{"type": "function", "function": {"name": "my_function"}}` forces the model to
  * call that function.
  *
  * `none` is the default when no functions are present. `auto` is the default if
@@ -807,7 +807,7 @@ export interface ChatCompletionCreateParamsBase {
  * will not call a function and instead generates a message. `auto` means the model
  * can pick between generating a message or calling a function. Specifying a
  * particular function via
- * `{"type: "function", "function": {"name": "my_function"}}` forces the model to
+ * `{"type": "function", "function": {"name": "my_function"}}` forces the model to
  * call that function.
  *
  * `none` is the default when no functions are present. `auto` is the default if
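The two doc-comment fixes above only correct the JSON literal shown for forcing a specific tool (a missing closing quote after `"type"`). For reference, a hedged sketch of the corresponding `tool_choice` request; the function name and parameter schema are illustrative and not taken from this PR:

```ts
import OpenAI from 'openai';

const client = new OpenAI();

async function main() {
  const completion = await client.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
    tools: [
      {
        type: 'function',
        function: {
          name: 'get_weather', // illustrative tool
          parameters: { type: 'object', properties: { city: { type: 'string' } } },
        },
      },
    ],
    // The corrected form from the doc comment: force a call to this particular function.
    tool_choice: { type: 'function', function: { name: 'get_weather' } },
  });
  console.log(completion.choices[0].message.tool_calls);
}

main();
```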
24 changes: 6 additions & 18 deletions src/resources/completions.ts
@@ -131,18 +131,7 @@ export interface CompletionCreateParamsBase {
    * [Model overview](https://platform.openai.com/docs/models/overview) for
    * descriptions of them.
    */
-  model:
-    | (string & {})
-    | 'babbage-002'
-    | 'davinci-002'
-    | 'gpt-3.5-turbo-instruct'
-    | 'text-davinci-003'
-    | 'text-davinci-002'
-    | 'text-davinci-001'
-    | 'code-davinci-002'
-    | 'text-curie-001'
-    | 'text-babbage-001'
-    | 'text-ada-001';
+  model: (string & {}) | 'gpt-3.5-turbo-instruct' | 'davinci-002' | 'babbage-002';
 
   /**
    * The prompt(s) to generate completions for, encoded as a string, array of
@@ -186,12 +175,11 @@ export interface CompletionCreateParamsBase {
    *
    * Accepts a JSON object that maps tokens (specified by their token ID in the GPT
    * tokenizer) to an associated bias value from -100 to 100. You can use this
-   * [tokenizer tool](/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to
-   * convert text to token IDs. Mathematically, the bias is added to the logits
-   * generated by the model prior to sampling. The exact effect will vary per model,
-   * but values between -1 and 1 should decrease or increase likelihood of selection;
-   * values like -100 or 100 should result in a ban or exclusive selection of the
-   * relevant token.
+   * [tokenizer tool](/tokenizer?view=bpe) to convert text to token IDs.
+   * Mathematically, the bias is added to the logits generated by the model prior to
+   * sampling. The exact effect will vary per model, but values between -1 and 1
+   * should decrease or increase likelihood of selection; values like -100 or 100
+   * should result in a ban or exclusive selection of the relevant token.
    *
    * As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token
    * from being generated.
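The narrowed `model` union for the legacy completions endpoint lists only the three remaining base models, while `(string & {})` keeps arbitrary model strings assignable, so in practice only editor autocomplete changes. A small sketch, assuming a configured client (the prompt and `max_tokens` value are illustrative):

```ts
import OpenAI from 'openai';

const client = new OpenAI();

async function main() {
  // 'gpt-3.5-turbo-instruct' is one of the literals kept in the narrowed union;
  // any other model string still type-checks via (string & {}).
  const completion = await client.completions.create({
    model: 'gpt-3.5-turbo-instruct',
    prompt: 'Say this is a test.',
    max_tokens: 7,
  });
  console.log(completion.choices[0].text);
}

main();
```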
109 changes: 0 additions & 109 deletions src/resources/edits.ts

This file was deleted.
