feat(chat-models): Migrate ChatOllama to Ollama chat API and add multi-modal (#279)
davidmigloz authored Dec 26, 2023
1 parent 76e1a29 commit c5de7e1
Showing 25 changed files with 451 additions and 231 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/test.yaml
@@ -29,7 +29,7 @@ jobs:
with:
channel: 'stable'
flutter-version: '3.16.0'
cache: true
cache: false # Disable Flutter caching temporarily
cache-key: 'flutter-:os:-:channel:-:version:-:arch:-:hash:'

- name: Set-up Flutter
98 changes: 72 additions & 26 deletions docs/modules/model_io/models/chat_models/integrations/ollama.md
@@ -12,7 +12,7 @@ For a complete list of supported models and model variants, see the [Ollama mode

## Setup

Rollow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:
Follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:

1. Download and install [Ollama](https://ollama.ai)
2. Fetch a model via `ollama pull <model family>`
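For example, to fetch the `llama2` model used throughout the examples on this page (assuming Ollama is installed and on your `PATH`):

```shell
# One-time download of the model weights; requires a local Ollama install.
ollama pull llama2
```

Once pulled, the Ollama server can serve the model to `ChatOllama` locally.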
@@ -21,25 +21,22 @@ Rollow [these instructions](https://github.com/jmorganca/ollama) to set up and r
## Usage

```dart
final promptTemplate = ChatPromptTemplate.fromTemplates([
(
ChatMessageType.system,
'You are a helpful assistant that translates {input_language} to {output_language}.',
),
(ChatMessageType.human, '{text}'),
]);
final chatModel = ChatOllama(
defaultOptions: const ChatOllamaOptions(
defaultOptions: ChatOllamaOptions(
model: 'llama2',
temperature: 0,
),
);
const template =
'You are a helpful assistant that translates {input_language} to {output_language}.';
final systemMessagePrompt =
SystemChatMessagePromptTemplate.fromTemplate(template);
const humanTemplate = '{text}';
final humanMessagePrompt =
HumanChatMessagePromptTemplate.fromTemplate(humanTemplate);
final chatPrompt = ChatPromptTemplate.fromPromptMessages(
[systemMessagePrompt, humanMessagePrompt],
);
final chain = chatPrompt | chatModel | const StringOutputParser();
final chain = promptTemplate | chatModel | StringOutputParser();
final res = await chain.invoke({
'input_language': 'English',
@@ -53,24 +50,21 @@ print(res);
## Streaming

```dart
final promptTemplate = ChatPromptTemplate.fromPromptMessages([
SystemChatMessagePromptTemplate.fromTemplate(
'You are a helpful assistant that replies only with numbers '
'in order without any spaces or commas',
),
HumanChatMessagePromptTemplate.fromTemplate(
'List the numbers from 1 to {max_num}',
final promptTemplate = ChatPromptTemplate.fromTemplates([
(
ChatMessageType.system,
'You are a helpful assistant that replies only with numbers '
'in order without any spaces or commas',
),
(ChatMessageType.human, 'List the numbers from 1 to {max_num}'),
]);
final chat = ChatOllama(
defaultOptions: const ChatOllamaOptions(
model: 'llama2:latest',
defaultOptions: ChatOllamaOptions(
model: 'llama2',
temperature: 0,
),
);
const stringOutputParser = StringOutputParser<AIChatMessage>();
final chain = promptTemplate.pipe(chat).pipe(stringOutputParser);
final chain = promptTemplate.pipe(chat).pipe(StringOutputParser());
final stream = chain.stream({'max_num': '9'});
await stream.forEach(print);
@@ -80,3 +74,55 @@ await stream.forEach(print);
// ..
// 9
```

## JSON mode

You can force the model to produce JSON output, which is useful for extracting structured data.

```dart
final promptTemplate = ChatPromptTemplate.fromTemplates(const [
(ChatMessageType.system, 'Respond using JSON'),
(ChatMessageType.human, '{question}'),
]);
final chat = ChatOllama(
defaultOptions: const ChatOllamaOptions(
model: 'llama2',
temperature: 0,
format: OllamaResponseFormat.json,
),
);
final chain = promptTemplate.pipe(chat);
final res = await chain.invoke(
{'question': 'What color is the sky at different times of the day?'},
);
print(res.firstOutputAsString);
// {"morning": {"sky": "pink", "sun": "rise"}, "daytime": {"sky": "blue", "sun": "high"}, "afternoon": ...}
```
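Because the chain returns the JSON as a plain string, you can decode it with `dart:convert` to access the structured data programmatically. A minimal sketch (the `output` string below is a hypothetical model response, not a real API result):

```dart
import 'dart:convert';

void main() {
  // Hypothetical JSON-mode output from the model.
  const output = '{"morning": {"sky": "pink"}, "daytime": {"sky": "blue"}}';

  // Decode the string into a Map for programmatic access.
  final data = jsonDecode(output) as Map<String, dynamic>;
  print(data['daytime']['sky']); // blue
}
```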

## Multimodal support

Ollama supports multimodal LLMs, such as [bakllava](https://ollama.ai/library/bakllava) and [llava](https://ollama.ai/library/llava).

```dart
final chatModel = ChatOllama(
defaultOptions: const ChatOllamaOptions(
model: 'llava',
temperature: 0,
),
);
final prompt = ChatMessage.human(
ChatMessageContent.multiModal([
ChatMessageContent.text('What fruit is this?'),
ChatMessageContent.image(
data: base64.encode(
await File('./bin/assets/apple.jpeg').readAsBytes(),
),
),
]),
);
final res = await chatModel.invoke(PromptValue.chat([prompt]));
print(res.firstOutputAsString);
// -> 'An Apple'
```
35 changes: 18 additions & 17 deletions docs/modules/model_io/models/chat_models/integrations/openai.md
@@ -9,20 +9,22 @@ OpenAI [models](https://platform.openai.com/docs/models) using the Chat API.
```dart
final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
final promptTemplate = ChatPromptTemplate.fromTemplates([
(
ChatMessageType.system,
'You are a helpful assistant that translates {input_language} to {output_language}.',
),
(ChatMessageType.human, '{text}'),
]);
final chatModel = ChatOpenAI(
apiKey: openaiApiKey,
defaultOptions: const ChatOpenAIOptions(
defaultOptions: ChatOpenAIOptions(
temperature: 0,
),
);
const template = 'You are a helpful assistant that translates {input_language} to {output_language}.';
final systemMessagePrompt = SystemChatMessagePromptTemplate.fromTemplate(template);
const humanTemplate = '{text}';
final humanMessagePrompt = HumanChatMessagePromptTemplate.fromTemplate(humanTemplate);
final chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);
final chain = chatPrompt | chatModel | const StringOutputParser();
final chain = promptTemplate | chatModel | StringOutputParser();
final res = await chain.invoke({
'input_language': 'English',
@@ -38,19 +40,18 @@ print(res);
```dart
final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
final promptTemplate = ChatPromptTemplate.fromPromptMessages([
SystemChatMessagePromptTemplate.fromTemplate(
'You are a helpful assistant that replies only with numbers '
'in order without any spaces or commas',
),
HumanChatMessagePromptTemplate.fromTemplate(
'List the numbers from 1 to {max_num}',
final promptTemplate = ChatPromptTemplate.fromTemplates([
(
ChatMessageType.system,
'You are a helpful assistant that replies only with numbers '
'in order without any spaces or commas',
),
(ChatMessageType.human, 'List the numbers from 1 to {max_num}'),
]);
final chat = ChatOpenAI(apiKey: openaiApiKey);
const stringOutputParser = StringOutputParser<AIChatMessage>();
final chain = promptTemplate.pipe(chat).pipe(stringOutputParser);
final chain = promptTemplate.pipe(chat).pipe(StringOutputParser());
final stream = chain.stream({'max_num': '9'});
await stream.forEach(print);
@@ -1,32 +1,34 @@
// ignore_for_file: avoid_print, avoid_redundant_argument_values
import 'dart:convert';
import 'dart:io';

import 'package:langchain/langchain.dart';
import 'package:langchain_ollama/langchain_ollama.dart';

void main(final List<String> arguments) async {
await _chatOllama();
await _chatOllamaStreaming();
await _chatOllamaJsonMode();
await _chatOllamaMultimodal();
}

Future<void> _chatOllama() async {
final promptTemplate = ChatPromptTemplate.fromTemplates(const [
(
ChatMessageType.system,
'You are a helpful assistant that translates {input_language} to {output_language}.',
),
(ChatMessageType.human, '{text}'),
]);

final chatModel = ChatOllama(
defaultOptions: const ChatOllamaOptions(
model: 'llama2',
temperature: 0,
),
);

const template =
'You are a helpful assistant that translates {input_language} to {output_language}.';
final systemMessagePrompt =
SystemChatMessagePromptTemplate.fromTemplate(template);
const humanTemplate = '{text}';
final humanMessagePrompt =
HumanChatMessagePromptTemplate.fromTemplate(humanTemplate);
final chatPrompt = ChatPromptTemplate.fromPromptMessages(
[systemMessagePrompt, humanMessagePrompt],
);

final chain = chatPrompt | chatModel | const StringOutputParser();
final chain = promptTemplate | chatModel | const StringOutputParser();

final res = await chain.invoke({
'input_language': 'English',
@@ -38,24 +40,21 @@ Future<void> _chatOllama() async {
}

Future<void> _chatOllamaStreaming() async {
final promptTemplate = ChatPromptTemplate.fromPromptMessages([
SystemChatMessagePromptTemplate.fromTemplate(
final promptTemplate = ChatPromptTemplate.fromTemplates(const [
(
ChatMessageType.system,
'You are a helpful assistant that replies only with numbers '
'in order without any spaces or commas',
),
HumanChatMessagePromptTemplate.fromTemplate(
'List the numbers from 1 to {max_num}',
'in order without any spaces or commas',
),
(ChatMessageType.human, 'List the numbers from 1 to {max_num}'),
]);
final chat = ChatOllama(
defaultOptions: const ChatOllamaOptions(
model: 'llama2',
temperature: 0,
),
);
const stringOutputParser = StringOutputParser<AIChatMessage>();

final chain = promptTemplate.pipe(chat).pipe(stringOutputParser);
final chain = promptTemplate.pipe(chat).pipe(const StringOutputParser());

final stream = chain.stream({'max_num': '9'});
await stream.forEach(print);
@@ -65,3 +64,47 @@ Future<void> _chatOllamaStreaming() async {
// ..
// 9
}

Future<void> _chatOllamaJsonMode() async {
final promptTemplate = ChatPromptTemplate.fromTemplates(const [
(ChatMessageType.system, 'Respond using JSON'),
(ChatMessageType.human, '{question}'),
]);
final chat = ChatOllama(
defaultOptions: const ChatOllamaOptions(
model: 'llama2',
temperature: 0,
format: OllamaResponseFormat.json,
),
);

final chain = promptTemplate.pipe(chat);

final res = await chain.invoke(
{'question': 'What color is the sky at different times of the day?'},
);
print(res.firstOutputAsString);
// {"morning": {"sky": "pink", "sun": "rise"}, "daytime": {"sky": "blue", "sun": "high"}, "afternoon": ...}
}

Future<void> _chatOllamaMultimodal() async {
final chatModel = ChatOllama(
defaultOptions: const ChatOllamaOptions(
model: 'llava',
temperature: 0,
),
);
final prompt = ChatMessage.human(
ChatMessageContent.multiModal([
ChatMessageContent.text('What fruit is this?'),
ChatMessageContent.image(
data: base64.encode(
await File('./bin/assets/apple.jpeg').readAsBytes(),
),
),
]),
);
final res = await chatModel.invoke(PromptValue.chat([prompt]));
print(res.firstOutputAsString);
// -> 'An Apple'
}
@@ -14,25 +14,22 @@ void main(final List<String> arguments) async {
Future<void> _chatOpenAI() async {
final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

final promptTemplate = ChatPromptTemplate.fromTemplates(const [
(
ChatMessageType.system,
'You are a helpful assistant that translates {input_language} to {output_language}.',
),
(ChatMessageType.human, '{text}'),
]);

final chatModel = ChatOpenAI(
apiKey: openaiApiKey,
defaultOptions: const ChatOpenAIOptions(
temperature: 0,
),
);

const template =
'You are a helpful assistant that translates {input_language} to {output_language}.';
final systemMessagePrompt =
SystemChatMessagePromptTemplate.fromTemplate(template);
const humanTemplate = '{text}';
final humanMessagePrompt =
HumanChatMessagePromptTemplate.fromTemplate(humanTemplate);
final chatPrompt = ChatPromptTemplate.fromPromptMessages(
[systemMessagePrompt, humanMessagePrompt],
);

final chain = chatPrompt | chatModel | const StringOutputParser();
final chain = promptTemplate | chatModel | const StringOutputParser();

final res = await chain.invoke({
'input_language': 'English',
@@ -46,19 +43,18 @@ Future<void> _chatOpenAI() async {
Future<void> _chatOpenAIStreaming() async {
final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

final promptTemplate = ChatPromptTemplate.fromPromptMessages([
SystemChatMessagePromptTemplate.fromTemplate(
final promptTemplate = ChatPromptTemplate.fromTemplates(const [
(
ChatMessageType.system,
'You are a helpful assistant that replies only with numbers '
'in order without any spaces or commas',
),
HumanChatMessagePromptTemplate.fromTemplate(
'List the numbers from 1 to {max_num}',
'in order without any spaces or commas',
),
(ChatMessageType.human, 'List the numbers from 1 to {max_num}'),
]);

final chat = ChatOpenAI(apiKey: openaiApiKey);
const stringOutputParser = StringOutputParser<AIChatMessage>();

final chain = promptTemplate.pipe(chat).pipe(stringOutputParser);
final chain = promptTemplate.pipe(chat).pipe(const StringOutputParser());

final stream = chain.stream({'max_num': '9'});
await stream.forEach(print);
2 changes: 1 addition & 1 deletion examples/docs_examples/pubspec_overrides.yaml
@@ -1,4 +1,4 @@
# melos_managed_dependency_overrides: chromadb,langchain,langchain_chroma,langchain_openai,openai_dart,langchain_ollama,ollama_dart,langchain_mistralai,mistralai_dart,googleai_dart,langchain_google,vertex_ai
# melos_managed_dependency_overrides: chromadb,googleai_dart,langchain,langchain_chroma,langchain_google,langchain_mistralai,langchain_ollama,langchain_openai,mistralai_dart,ollama_dart,openai_dart,vertex_ai
dependency_overrides:
chromadb:
path: ../../packages/chromadb