feat(chat-models): Support OpenRouter API in ChatOpenAI wrapper (#292)
davidmigloz committed Jan 13, 2024
1 parent 57699b3 commit c6e7e5b
Showing 10 changed files with 311 additions and 18 deletions.
1 change: 1 addition & 0 deletions docs/_sidebar.md
@@ -51,6 +51,7 @@
 - [Google AI](/modules/model_io/models/chat_models/integrations/googleai.md)
 - [Ollama](/modules/model_io/models/chat_models/integrations/ollama.md)
 - [Mistral AI](/modules/model_io/models/chat_models/integrations/mistralai.md)
+- [OpenRouter](/modules/model_io/models/chat_models/integrations/open_router.md)
 - [Prem App](/modules/model_io/models/chat_models/integrations/prem.md)
 - [Output parsers](/modules/model_io/output_parsers/output_parsers.md)
 - [String output parser](/modules/model_io/output_parsers/string.md)
@@ -0,0 +1,97 @@
# OpenRouter

[OpenRouter](https://openrouter.ai/) offers a unified OpenAI-compatible API for a broad range of models.

You can also let users pay for their own models via OpenRouter's [OAuth PKCE](https://openrouter.ai/docs#oauth) flow.
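The PKCE flow itself happens outside the `ChatOpenAI` wrapper, but the shape of the exchange can be sketched as follows. This is a minimal sketch, assuming OpenRouter's documented `/auth` authorization page and `/api/v1/auth/keys` code-exchange endpoint; check the linked docs for the current parameter names and response shape:

```dart
import 'dart:convert';

import 'package:http/http.dart' as http;

// 1. Send the user to the authorization page, passing your callback URL.
//    After the user approves, OpenRouter redirects back to your app with a
//    `code` query parameter.
Uri authorizationUrl(final String callbackUrl) {
  return Uri.parse('https://openrouter.ai/auth').replace(
    queryParameters: {'callback_url': callbackUrl},
  );
}

// 2. Exchange the code for a user-scoped API key, which you can then pass
//    to `ChatOpenAI(apiKey: ...)`.
Future<String> exchangeCodeForKey(final String code) async {
  final res = await http.post(
    Uri.parse('https://openrouter.ai/api/v1/auth/keys'),
    headers: {'Content-Type': 'application/json'},
    body: json.encode({'code': code}),
  );
  return (json.decode(res.body) as Map<String, dynamic>)['key'] as String;
}
```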

You can consume the OpenRouter API using the `ChatOpenAI` wrapper in the same way you would use the OpenAI API.

The only difference is that you need to change the base URL to `https://openrouter.ai/api/v1`:

```dart
final chatModel = ChatOpenAI(
  apiKey: openRouterApiKey,
  baseUrl: 'https://openrouter.ai/api/v1',
  defaultOptions: const ChatOpenAIOptions(
    model: 'mistralai/mistral-small',
  ),
);
```

OpenRouter allows you to specify an optional `HTTP-Referer` header to identify your app and make it discoverable to users on openrouter.ai. You can also include an optional `X-Title` header to set or modify the title of your app.

```dart
final chatModel = ChatOpenAI(
  apiKey: openRouterApiKey,
  baseUrl: 'https://openrouter.ai/api/v1',
  headers: {
    'HTTP-Referer': 'com.myapp',
    'X-Title': 'OpenRouterTest',
  },
  defaultOptions: const ChatOpenAIOptions(
    model: 'mistralai/mistral-small',
  ),
);
```

## Invoke

```dart
final openRouterApiKey = Platform.environment['OPEN_ROUTER_API_KEY'];
final promptTemplate = ChatPromptTemplate.fromTemplates(const [
  (
    ChatMessageType.system,
    'You are a helpful assistant that translates {input_language} to {output_language}.',
  ),
  (ChatMessageType.human, '{text}'),
]);
final chatModel = ChatOpenAI(
  apiKey: openRouterApiKey,
  baseUrl: 'https://openrouter.ai/api/v1',
  defaultOptions: const ChatOpenAIOptions(
    model: 'mistralai/mistral-small',
  ),
);
final chain = promptTemplate | chatModel | const StringOutputParser();
final res = await chain.invoke({
  'input_language': 'English',
  'output_language': 'French',
  'text': 'I love programming.',
});
print(res);
// -> 'J'aime la programmation.'
```

## Stream

```dart
final openRouterApiKey = Platform.environment['OPEN_ROUTER_API_KEY'];
final promptTemplate = ChatPromptTemplate.fromTemplates(const [
  (
    ChatMessageType.system,
    'You are a helpful assistant that replies only with numbers '
    'in order without any spaces or commas',
  ),
  (ChatMessageType.human, 'List the numbers from 1 to {max_num}'),
]);
final chatModel = ChatOpenAI(
  apiKey: openRouterApiKey,
  baseUrl: 'https://openrouter.ai/api/v1',
  defaultOptions: const ChatOpenAIOptions(
    model: 'mistralai/mistral-small',
  ),
);
final chain = promptTemplate.pipe(chatModel).pipe(const StringOutputParser());
final stream = chain.stream({'max_num': '9'});
await stream.forEach(print);
// 123
// 456789
```
@@ -0,0 +1,68 @@
// ignore_for_file: avoid_print
import 'dart:io';

import 'package:langchain/langchain.dart';
import 'package:langchain_openai/langchain_openai.dart';

void main(final List<String> arguments) async {
  await _openRouter();
  await _openRouterStreaming();
}

Future<void> _openRouter() async {
  final openRouterApiKey = Platform.environment['OPEN_ROUTER_API_KEY'];

  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
    (
      ChatMessageType.system,
      'You are a helpful assistant that translates {input_language} to {output_language}.',
    ),
    (ChatMessageType.human, '{text}'),
  ]);

  final chatModel = ChatOpenAI(
    apiKey: openRouterApiKey,
    baseUrl: 'https://openrouter.ai/api/v1',
    defaultOptions: const ChatOpenAIOptions(
      model: 'mistralai/mistral-small',
    ),
  );

  final chain = promptTemplate | chatModel | const StringOutputParser();

  final res = await chain.invoke({
    'input_language': 'English',
    'output_language': 'French',
    'text': 'I love programming.',
  });
  print(res);
  // -> 'J'aime la programmation.'
}

Future<void> _openRouterStreaming() async {
  final openRouterApiKey = Platform.environment['OPEN_ROUTER_API_KEY'];

  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
    (
      ChatMessageType.system,
      'You are a helpful assistant that replies only with numbers '
      'in order without any spaces or commas',
    ),
    (ChatMessageType.human, 'List the numbers from 1 to {max_num}'),
  ]);

  final chatModel = ChatOpenAI(
    apiKey: openRouterApiKey,
    baseUrl: 'https://openrouter.ai/api/v1',
    defaultOptions: const ChatOpenAIOptions(
      model: 'mistralai/mistral-small',
    ),
  );

  final chain = promptTemplate.pipe(chatModel).pipe(const StringOutputParser());

  final stream = chain.stream({'max_num': '9'});
  await stream.forEach(print);
  // 123
  // 456789
}
@@ -48,7 +48,7 @@ class HomeScreenCubit extends Cubit<HomeScreenState> {

     final llm = ChatOpenAI(
       apiKey: apiKey,
-      baseUrl: baseUrl,
+      baseUrl: baseUrl ?? '',
     );
 
     final result = await llm([ChatMessage.humanText(query)]);
@@ -1,2 +1,2 @@
-export 'chat_openai.dart';
 export 'models/models.dart';
+export 'openai.dart';
@@ -2,6 +2,7 @@ import 'package:http/http.dart' as http;
 import 'package:langchain/langchain.dart';
 import 'package:langchain_tiktoken/langchain_tiktoken.dart';
 import 'package:openai_dart/openai_dart.dart';
+import 'package:uuid/uuid.dart';
 
 import 'models/mappers.dart';
 import 'models/models.dart';
@@ -22,6 +23,9 @@ import 'models/models.dart';
 /// - [Completions guide](https://platform.openai.com/docs/guides/gpt/chat-completions-api)
 /// - [Completions API docs](https://platform.openai.com/docs/api-reference/chat)
 ///
+/// You can also use this wrapper to consume OpenAI-compatible APIs like
+/// [OpenRouter](https://openrouter.ai) or [One API](https://github.com/songquanpeng/one-api).
+///
 /// ### Call options
 ///
 /// You can configure the parameters that will be used when calling the
@@ -181,7 +185,7 @@ class ChatOpenAI extends BaseChatModel<ChatOpenAIOptions> {
   ChatOpenAI({
     final String? apiKey,
     final String? organization,
-    final String? baseUrl,
+    final String baseUrl = 'https://api.openai.com/v1',
     final Map<String, String>? headers,
     final Map<String, dynamic>? queryParams,
     final http.Client? client,
@@ -221,6 +225,9 @@ class ChatOpenAI extends BaseChatModel<ChatOpenAIOptions> {
   /// https://github.com/mvitlov/tiktoken/blob/master/lib/tiktoken.dart
   String? encoding;
 
+  /// A UUID generator.
+  late final Uuid _uuid = const Uuid();
+
   /// Set or replace the API key.
   set apiKey(final String value) => _client.apiKey = value;

@@ -238,7 +245,7 @@ class ChatOpenAI extends BaseChatModel<ChatOpenAIOptions> {
     final completion = await _client.createChatCompletion(
       request: _createChatCompletionRequest(messages, options: options),
     );
-    return completion.toChatResult();
+    return completion.toChatResult(completion.id ?? _uuid.v4());
   }

@override
@@ -253,7 +260,10 @@ class ChatOpenAI extends BaseChatModel<ChatOpenAIOptions> {
             options: options,
           ),
         )
-        .map((final completion) => completion.toChatResult());
+        .map(
+          (final completion) =>
+              completion.toChatResult(completion.id ?? _uuid.v4()),
+        );
   }

@override
@@ -316,9 +326,7 @@ class ChatOpenAI extends BaseChatModel<ChatOpenAIOptions> {
     final PromptValue promptValue, {
     final ChatOpenAIOptions? options,
   }) async {
-    final model =
-        options?.model ?? defaultOptions.model ?? throwNullModelError();
-    return _getTiktoken(model).encode(promptValue.toString());
+    return _getTiktoken().encode(promptValue.toString());
   }

@override
@@ -328,7 +336,7 @@ class ChatOpenAI extends BaseChatModel<ChatOpenAIOptions> {
   }) async {
     final model =
         options?.model ?? defaultOptions.model ?? throwNullModelError();
-    final tiktoken = _getTiktoken(model);
+    final tiktoken = _getTiktoken();
     final messages = promptValue.toChatMessages();

// Ref: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
@@ -355,9 +363,9 @@ class ChatOpenAI extends BaseChatModel<ChatOpenAIOptions> {
       tokensPerMessage = 3;
       tokensPerName = 1;
     } else {
-      throw UnimplementedError(
-        'countTokens not supported for model $model',
-      );
+      // For other models we assume gpt-3.5-turbo-0613
+      tokensPerMessage = 3;
+      tokensPerName = 1;
     }
   }

@@ -386,8 +394,10 @@ class ChatOpenAI extends BaseChatModel<ChatOpenAIOptions> {
   }
 
   /// Returns the tiktoken model to use for the given model.
-  Tiktoken _getTiktoken(final String model) {
-    return encoding != null ? getEncoding(encoding!) : encodingForModel(model);
+  Tiktoken _getTiktoken() {
+    return encoding != null
+        ? getEncoding(encoding!)
+        : getEncoding('cl100k_base');
   }
 
   /// Closes the client and cleans up any resources associated with it.
@@ -112,7 +112,7 @@ extension _ChatMessageContentMultiModalMapper on ChatMessageContentMultiModal {
 /// Mapper for [CreateChatCompletionResponse].
 extension CreateChatCompletionResponseMapper on CreateChatCompletionResponse {
   /// Converts a [CreateChatCompletionResponse] to a [ChatResult].
-  ChatResult toChatResult() {
+  ChatResult toChatResult(final String id) {
     return ChatResult(
       id: id,
       generations: choices
@@ -136,8 +136,8 @@ extension _ChatCompletionResponseChoiceMapper on ChatCompletionResponseChoice {
         functionCall: message.functionCall?.toAIChatMessageFunctionCall(),
       ),
       generationInfo: {
-        'index': index,
-        'finish_reason': finishReason,
+        'index': index ?? 0,
+        'finish_reason': finishReason ?? ChatCompletionFinishReason.stop,
       },
     );
   }
}
@@ -214,7 +214,7 @@ extension ChatFunctionCallMapper on ChatFunctionCall {
 extension CreateChatCompletionStreamResponseMapper
     on CreateChatCompletionStreamResponse {
   /// Converts a [CreateChatCompletionStreamResponse] to a [ChatResult].
-  ChatResult toChatResult() {
+  ChatResult toChatResult(final String id) {
     return ChatResult(
       generations: choices
           .map((final choice) => choice.toChatGeneration())
1 change: 1 addition & 0 deletions packages/langchain_openai/pubspec.yaml
@@ -23,6 +23,7 @@ dependencies:
   langchain_tiktoken: ^1.0.1
   meta: ^1.9.1
   openai_dart: ^0.1.3
+  uuid: ^4.0.0

dev_dependencies:
test: ^1.24.5