Commit

feat: Add support for ChatFirebaseVertexAI (#422)

davidmigloz authored May 14, 2024
1 parent b60492d commit 8d0786b
Showing 121 changed files with 5,730 additions and 15 deletions.
1 change: 0 additions & 1 deletion .gitignore
@@ -4,4 +4,3 @@
.dart_tool/
/pubspec.lock
.vscode/

1 change: 1 addition & 0 deletions docs/_sidebar.md
@@ -57,6 +57,7 @@
- [LLMChain](/modules/model_io/models/chat_models/how_to/llm_chain.md)
- Integrations
- [OpenAI](/modules/model_io/models/chat_models/integrations/openai.md)
- [Firebase Vertex AI](/modules/model_io/models/chat_models/integrations/firebase_vertex_ai.md)
- [GCP Vertex AI](/modules/model_io/models/chat_models/integrations/gcp_vertex_ai.md)
- [Google AI](/modules/model_io/models/chat_models/integrations/googleai.md)
- [Ollama](/modules/model_io/models/chat_models/integrations/ollama.md)
5 changes: 5 additions & 0 deletions docs/modules/model_io/models/chat_models/how_to/tools.md
@@ -2,6 +2,11 @@

> We use the term "tool calling" interchangeably with "function calling". Although function calling is sometimes meant to refer to invocations of a single function, we treat all models as though they can return multiple tool or function calls in each message.
> Tool calling is currently supported by:
> - [`ChatOpenAI`](/modules/model_io/models/chat_models/integrations/openai.md)
> - [`ChatFirebaseVertexAI`](/modules/model_io/models/chat_models/integrations/firebase_vertex_ai.md)
> - [`ChatGoogleGenerativeAI`](/modules/model_io/models/chat_models/integrations/googleai.md)

Tool calling allows a model to respond to a given prompt by generating output that matches a user-defined schema. While the name implies that the model is performing some action, this is actually not the case! The model comes up with the arguments to a tool, and actually running the tool (or not) is up to the user. For example, if you want to extract output matching some schema from unstructured text, you could give the model an "extraction" tool that takes parameters matching the desired schema, then treat the generated output as your final result.

A tool call includes an `id` of the call, the `name` of the tool to call, and a map with the `arguments` to pass to the tool. The `arguments` map is structured like `{argument_name: argument_value}`.
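The shape of a tool call described above can be sketched as a plain Dart value. This is illustrative only: LangChain.dart defines its own tool-call type, and the class below is a hypothetical stand-in.

```dart
// Hypothetical stand-in for the tool-call structure described above.
class ToolCall {
  const ToolCall({
    required this.id,
    required this.name,
    required this.arguments,
  });

  final String id; // id of the call
  final String name; // name of the tool to call
  final Map<String, dynamic> arguments; // {argument_name: argument_value}
}

void main() {
  const call = ToolCall(
    id: 'call_1',
    name: 'get_current_weather',
    arguments: {'location': 'Boston, MA'},
  );
  print('${call.name}(${call.arguments})');
}
```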
@@ -0,0 +1,190 @@
# Vertex AI for Firebase

The [Vertex AI Gemini API](https://firebase.google.com/docs/vertex-ai) gives you access to the latest generative AI models from Google: the Gemini models. If you need to call the Vertex AI Gemini API directly from your mobile or web app, use the `ChatFirebaseVertexAI` class instead of [`ChatVertexAI`](/modules/model_io/models/chat_models/integrations/gcp_vertex_ai.md), which is designed for server-side use.

`ChatFirebaseVertexAI` is built specifically for use with mobile and web apps, offering security options against unauthorized clients as well as integrations with other Firebase services.

## Key capabilities

- **Multimodal input**: The Gemini models are multimodal, so prompts sent to the Gemini API can include text, images (even PDFs), video, and audio.
- **Growing suite of capabilities**: You can call the Gemini API directly from your mobile or web app, build an AI chat experience, use function calling, and more.
- **Security for production apps**: Use Firebase App Check to protect the Vertex AI Gemini API from abuse by unauthorized clients.
- **Robust infrastructure**: Take advantage of scalable infrastructure that's built for use with mobile and web apps, like managing structured data with Firebase database offerings (like Cloud Firestore) and dynamically setting run-time configurations with Firebase Remote Config.

## Setup

### 1. Set up a Firebase project

Check the [Firebase documentation](https://firebase.google.com/docs/vertex-ai/get-started?platform=flutter) for the latest information on how to set up Vertex AI for Firebase in your Firebase project.

In summary, you need to:
1. Upgrade your billing plan to the Blaze pay-as-you-go pricing plan.
2. Enable the required APIs (`aiplatform.googleapis.com` and `firebaseml.googleapis.com`).
3. Integrate the Firebase SDK into your app (if you haven't already).
4. Recommended: Enable Firebase App Check to protect the Vertex AI Gemini API from abuse by unauthorized clients.

### 2. Add the LangChain.dart Google package

Add the `langchain_google` package to your `pubspec.yaml` file.

```yaml
dependencies:
  langchain: {version}
  langchain_google: {version}
```
Internally, `langchain_google` uses the [`firebase_vertexai`](https://pub.dev/packages/firebase_vertexai) SDK to interact with the Vertex AI for Firebase API.

### 3. Initialize your Firebase app

```dart
await Firebase.initializeApp();
```
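In a Flutter app, initialization typically happens in `main` before `runApp`. The sketch below assumes a FlutterFire-generated `firebase_options.dart`; the file and option names are illustrative:

```dart
import 'package:firebase_core/firebase_core.dart';
import 'package:flutter/widgets.dart';

import 'firebase_options.dart'; // generated by `flutterfire configure`

Future<void> main() async {
  // Required before calling Firebase.initializeApp from main().
  WidgetsFlutterBinding.ensureInitialized();
  await Firebase.initializeApp(
    options: DefaultFirebaseOptions.currentPlatform,
  );
  // runApp(const MyApp());
}
```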

### 4. Call the Vertex AI Gemini API

```dart
final chatModel = ChatFirebaseVertexAI();
final chatPrompt = ChatPromptTemplate.fromTemplates([
  (ChatMessageType.system, 'You are a helpful assistant that translates {input_language} to {output_language}.'),
  (ChatMessageType.human, 'Text to translate:\n{text}'),
]);
final chain = chatPrompt | chatModel | StringOutputParser();
final res = await chain.invoke({
  'input_language': 'English',
  'output_language': 'French',
  'text': 'I love programming.',
});
print(res);
// -> 'J'adore programmer.'
```

> Check out the [sample project](https://github.com/davidmigloz/langchain_dart/tree/main/packages/langchain_firebase/example) to see a complete project using Vertex AI for Firebase.

## Available models

The following models are available:
- `gemini-1.0-pro`:
  * text -> text model
  * Max input tokens: 30720
  * Max output tokens: 2048
- `gemini-1.0-pro-vision`:
  * text / image -> text model
  * Max input tokens: 12288
  * Max output tokens: 4096
- `gemini-1.5-pro-preview-0514`:
  * text / image / audio -> text model
  * Max input tokens: 1048576
  * Max output tokens: 8192
- `gemini-1.5-flash-preview-0514`:
  * text / image / audio -> text model
  * Max input tokens: 1048576
  * Max output tokens: 8192

Note that this list may not be up to date. Refer to the [documentation](https://firebase.google.com/docs/vertex-ai/gemini-models) for the latest list of supported models.

## Multimodal support

```dart
final chatModel = ChatFirebaseVertexAI(
  defaultOptions: ChatFirebaseVertexAIOptions(
    model: 'gemini-1.5-pro-preview-0514',
  ),
);
final res = await chatModel.invoke(
  PromptValue.chat([
    ChatMessage.human(
      ChatMessageContent.multiModal([
        ChatMessageContent.text('What fruit is this?'),
        ChatMessageContent.image(
          mimeType: 'image/jpeg',
          data: base64.encode(
            await File('./bin/assets/apple.jpeg').readAsBytes(),
          ),
        ),
      ]),
    ),
  ]),
);
print(res.output.content);
// -> 'That is an apple.'
```

## Streaming

```dart
final promptTemplate = ChatPromptTemplate.fromTemplates(const [
  (ChatMessageType.system, 'You are a helpful assistant that replies only with numbers in order without any spaces or commas.'),
  (ChatMessageType.human, 'List the numbers from 1 to {max_num}'),
]);
final chatModel = ChatFirebaseVertexAI(
  defaultOptions: ChatFirebaseVertexAIOptions(
    model: 'gemini-1.5-pro-preview-0514',
  ),
);
final chain = promptTemplate.pipe(chatModel).pipe(StringOutputParser());
final stream = chain.stream({'max_num': '30'});
await stream.forEach(print);
// 1
// 2345678910111213
// 1415161718192021
// 222324252627282930
```

## Tool calling

`ChatFirebaseVertexAI` supports tool calling.

Check the [docs](https://langchaindart.com/#/modules/model_io/models/chat_models/how_to/tools) for more information on how to use tools.

Example:
```dart
const tool = ToolSpec(
  name: 'get_current_weather',
  description: 'Get the current weather in a given location',
  inputJsonSchema: {
    'type': 'object',
    'properties': {
      'location': {
        'type': 'string',
        'description': 'The city and state, e.g. San Francisco, CA',
      },
    },
    'required': ['location'],
  },
);
final chatModel = ChatFirebaseVertexAI(
  defaultOptions: ChatFirebaseVertexAIOptions(
    model: 'gemini-1.5-pro-preview-0514',
    temperature: 0,
    tools: [tool],
  ),
);
final res = await chatModel.invoke(
  PromptValue.string('What’s the weather like in Boston and Madrid right now in celsius?'),
);
```
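The model only generates the call; executing the tool is up to you. A minimal, self-contained dispatcher for the `get_current_weather` tool above might look like this (the weather result is stubbed, since the tool's actual implementation is user-defined):

```dart
// Hypothetical local implementation of the `get_current_weather` tool.
// In a real app, `args` would come from the model's generated tool call.
String getCurrentWeather(Map<String, dynamic> args) {
  final location = args['location'] as String;
  return '22°C and sunny in $location'; // stubbed result
}

void main() {
  final result = getCurrentWeather({'location': 'Boston, MA'});
  print(result); // 22°C and sunny in Boston, MA
}
```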

## Prevent abuse with Firebase App Check

You can use Firebase App Check to protect the Vertex AI Gemini API from abuse by unauthorized clients. Check the [Firebase documentation](https://firebase.google.com/docs/vertex-ai/app-check) for more information.
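App Check is activated once at startup, after `Firebase.initializeApp`. A sketch using the `firebase_app_check` package follows; the debug providers shown are assumptions suitable for development only:

```dart
import 'package:firebase_app_check/firebase_app_check.dart';
import 'package:firebase_core/firebase_core.dart';

Future<void> main() async {
  await Firebase.initializeApp();
  await FirebaseAppCheck.instance.activate(
    // Use Play Integrity / App Attest / reCAPTCHA providers in production.
    androidProvider: AndroidProvider.debug,
    appleProvider: AppleProvider.debug,
  );
}
```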

## Locations

When initializing the Vertex AI service, you can optionally specify a location in which to run the service and access a model. If you don't specify a location, the default is `us-central1`. See the list of [available locations](https://firebase.google.com/docs/vertex-ai/locations?platform=flutter#available-locations).

```dart
final chatModel = ChatFirebaseVertexAI(
location: 'us-central1',
);
```

## Alternatives

- [`ChatVertexAI`](/modules/model_io/models/chat_models/integrations/gcp_vertex_ai.md): Use this class to call the Vertex AI Gemini API from the server-side.
- [`ChatGoogleGenerativeAI`](/modules/model_io/models/chat_models/integrations/googleai.md): Use this class to call the "Google AI" version of the Gemini API, which provides free-of-charge access (within limits and where available). This API is not intended for production use, but for experimentation and prototyping. After you're familiar with how the Gemini API works, migrate to Vertex AI for Firebase, which has many additional features important for mobile and web apps, like protecting the API from abuse using Firebase App Check.
@@ -108,6 +108,38 @@ await stream.forEach(print);
// 2345678910111213
// 1415161718192021
// 222324252627282930
```

## Tool calling

`ChatGoogleGenerativeAI` supports tool calling.

Check the [docs](https://langchaindart.com/#/modules/model_io/models/chat_models/how_to/tools) for more information on how to use tools.

Example:
```dart
const tool = ToolSpec(
  name: 'get_current_weather',
  description: 'Get the current weather in a given location',
  inputJsonSchema: {
    'type': 'object',
    'properties': {
      'location': {
        'type': 'string',
        'description': 'The city and state, e.g. San Francisco, CA',
      },
    },
    'required': ['location'],
  },
);
final chatModel = ChatGoogleGenerativeAI(
  defaultOptions: ChatGoogleGenerativeAIOptions(
    model: 'gemini-1.5-pro-latest',
    temperature: 0,
    tools: [tool],
  ),
);
final res = await chatModel.invoke(
  PromptValue.string('What’s the weather like in Boston and Madrid right now in celsius?'),
);
```
4 changes: 4 additions & 0 deletions melos.yaml
@@ -4,6 +4,7 @@ repository: https://github.com/davidmigloz/langchain_dart
packages:
- examples/*
- packages/*
- packages/**/example

command:
version:
@@ -31,6 +32,9 @@ command:
csv: ^6.0.0
equatable: ^2.0.5
fetch_client: ^1.0.2
firebase_app_check: ^0.2.2+5
firebase_core: ^2.31.0
firebase_vertexai: ^0.1.0
flutter_bloc: ^8.1.5
flutter_markdown: ^0.6.22
freezed_annotation: ^2.4.1
2 changes: 2 additions & 0 deletions packages/langchain/README.md
@@ -58,6 +58,7 @@ LangChain.dart has a modular design that allows developers to import only the co
| [langchain_community](https://pub.dev/packages/langchain_community) | [![langchain_community](https://img.shields.io/pub/v/langchain_community.svg)](https://pub.dev/packages/langchain_community) | Third-party integrations (without specific packages) and community-contributed components |
| [langchain_openai](https://pub.dev/packages/langchain_openai) | [![langchain_openai](https://img.shields.io/pub/v/langchain_openai.svg)](https://pub.dev/packages/langchain_openai) | OpenAI integration (GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, Embeddings, Tools, Vision, DALL·E 3, etc.) and OpenAI Compatible services (TogetherAI, Anyscale, OpenRouter, One API, Groq, Llamafile, GPT4All, etc.) |
| [langchain_google](https://pub.dev/packages/langchain_google) | [![langchain_google](https://img.shields.io/pub/v/langchain_google.svg)](https://pub.dev/packages/langchain_google) | Google integration (GoogleAI, VertexAI, Gemini, PaLM 2, Embeddings, Vector Search, etc.) |
| [langchain_firebase](https://pub.dev/packages/langchain_firebase) | [![langchain_firebase](https://img.shields.io/pub/v/langchain_firebase.svg)](https://pub.dev/packages/langchain_firebase) | Firebase integration (VertexAI for Firebase (Gemini API), etc.) |
| [langchain_ollama](https://pub.dev/packages/langchain_ollama) | [![langchain_ollama](https://img.shields.io/pub/v/langchain_ollama.svg)](https://pub.dev/packages/langchain_ollama) | Ollama integration (Llama 3, Phi-3, WizardLM-2, Mistral 7B, Gemma, CodeGemma, Command R, LLaVA, DBRX, Qwen 1.5, Dolphin, DeepSeek Coder, Vicuna, Orca, etc.) |
| [langchain_mistralai](https://pub.dev/packages/langchain_mistralai) | [![langchain_mistralai](https://img.shields.io/pub/v/langchain_mistralai.svg)](https://pub.dev/packages/langchain_mistralai) | Mistral AI integration (Mistral-7B, Mixtral 8x7B, Mixtral 8x22B, Mistral Small, Mistral Large, embeddings, etc.). |
| [langchain_pinecone](https://pub.dev/packages/langchain_pinecone) | [![langchain_pinecone](https://img.shields.io/pub/v/langchain_pinecone.svg)](https://pub.dev/packages/langchain_pinecone) | Pinecone vector database integration |
@@ -71,6 +72,7 @@ Functionality provided by each integration package:
| [langchain_community](https://pub.dev/packages/langchain_community) | | | | | | | |
| [langchain_openai](https://pub.dev/packages/langchain_openai) |||| ||||
| [langchain_google](https://pub.dev/packages/langchain_google) ||||| | | |
| [langchain_firebase](https://pub.dev/packages/langchain_firebase) | || | | | | |
| [langchain_ollama](https://pub.dev/packages/langchain_ollama) |||| | | | |
| [langchain_mistralai](https://pub.dev/packages/langchain_mistralai) | ||| | | | |
| [langchain_pinecone](https://pub.dev/packages/langchain_pinecone) | | | || | | |
9 changes: 9 additions & 0 deletions packages/langchain_core/lib/src/chat_models/types.dart
@@ -740,6 +740,7 @@ final class ChatToolChoiceAuto extends ChatToolChoice {
/// {@template chat_tool_choice_forced}
/// The model is forced to call the specified tool.
/// {@endtemplate}
@immutable
final class ChatToolChoiceForced extends ChatToolChoice {
  /// {@macro chat_tool_choice_forced}
  const ChatToolChoiceForced({
@@ -748,6 +749,14 @@ final class ChatToolChoiceForced extends ChatToolChoice {

  /// The name of the tool to call.
  final String name;

  @override
  bool operator ==(covariant final ChatToolChoiceForced other) =>
      identical(this, other) ||
          runtimeType == other.runtimeType && name == other.name;

  @override
  int get hashCode => name.hashCode;
}

/// {@template chat_example}
6 changes: 6 additions & 0 deletions packages/langchain_firebase/.gitattributes
@@ -0,0 +1,6 @@
example/*/.metadata linguist-generated=true
example/**/Flutter/GeneratedPluginRegistrant.swift linguist-generated=true
example/**/Runner.xcodeproj/ linguist-generated=true
example/**/Runner.xcworkspace/ linguist-generated=true
example/**/flutter/CMakeLists.txt linguist-generated=true
example/**/flutter/generated_* linguist-generated=true
48 changes: 43 additions & 5 deletions packages/langchain_firebase/.gitignore
@@ -1,7 +1,45 @@
# https://dart.dev/guides/libraries/private-files
# Created by `dart pub`
# Miscellaneous
*.class
*.log
*.pyc
*.swp
.DS_Store
.atom/
.buildlog/
.history
.svn/
migrate_working_dir/

# IntelliJ related
*.iml
*.ipr
*.iws
.idea/

# The .vscode folder contains launch configuration and tasks you configure in
# VS Code which you may wish to be included in version control, so this line
# is commented out by default.
#.vscode/

# Flutter/Dart/Pub related
**/doc/api/
**/ios/Flutter/.last_build_id
.dart_tool/
.flutter-plugins
.flutter-plugins-dependencies
.packages
.pub-cache/
.pub/
/build/
/example/build/

# Symbolication related
app.*.symbols

# Obfuscation related
app.*.map.json

# Avoid committing pubspec.lock for library packages; see
# https://dart.dev/guides/libraries/private-files#pubspeclock.
pubspec.lock
# Android Studio will place build artifacts here
/android/app/debug
/android/app/profile
/android/app/release
18 changes: 18 additions & 0 deletions packages/langchain_firebase/example/README.md
@@ -0,0 +1,18 @@
# firebase_vertexai_example

Example project to show how to use the Firebase integration module for LangChain.dart.

This example project is a port of the original [firebase_vertexai_example](https://github.com/firebase/flutterfire/tree/master/packages/firebase_vertexai/firebase_vertexai/example) using LangChain.dart.

## Getting Started

This project is a starting point for a Flutter application.

A few resources to get you started if this is your first Flutter project:

- [Lab: Write your first Flutter app](https://docs.flutter.dev/get-started/codelab)
- [Cookbook: Useful Flutter samples](https://docs.flutter.dev/cookbook)

For help getting started with Flutter development, view the
[online documentation](https://docs.flutter.dev/), which offers tutorials,
samples, guidance on mobile development, and a full API reference.
1 change: 1 addition & 0 deletions packages/langchain_firebase/example/analysis_options.yaml
@@ -0,0 +1 @@
include: ../../../analysis_options.yaml