docs: integrations/providers/ update (#14315)
- added missing provider files (from `integrations/Callbacks`)
- updated notebooks: added links; updated them to a consistent format
leo-gan authored Dec 5, 2023
1 parent 6607cc6 commit 0f02e94
Showing 13 changed files with 163 additions and 58 deletions.
4 changes: 1 addition & 3 deletions docs/docs/integrations/callbacks/argilla.ipynb
@@ -7,8 +7,6 @@
"source": [
"# Argilla\n",
"\n",
"![Argilla - Open-source data platform for LLMs](https://argilla.io/og.png)\n",
"\n",
">[Argilla](https://argilla.io/) is an open-source data curation platform for LLMs.\n",
"> Using Argilla, everyone can build robust language models through faster data curation \n",
"> using both human and machine feedback. We provide support for each step in the MLOps cycle, \n",
@@ -410,7 +408,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.12"
},
"vscode": {
"interpreter": {
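For quick reference, a minimal usage sketch of the `ArgillaCallbackHandler` that the argilla notebook documents (not taken from the diff above; the dataset name, API URL, and API key are placeholder assumptions):

```python
# Sketch: log prompts and completions from an OpenAI LLM into an Argilla dataset.
# Assumes a FeedbackDataset named "langchain-dataset" already exists in Argilla.
import os

from langchain.callbacks import ArgillaCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI

argilla_callback = ArgillaCallbackHandler(
    dataset_name="langchain-dataset",  # placeholder dataset name
    api_url=os.environ.get("ARGILLA_API_URL", "http://localhost:6900"),
    api_key=os.environ.get("ARGILLA_API_KEY", "owner.apikey"),
)

llm = OpenAI(temperature=0.9, callbacks=[argilla_callback, StdOutCallbackHandler()])
llm.generate(["Tell me a joke", "Tell me a poem"])
```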
21 changes: 7 additions & 14 deletions docs/docs/integrations/callbacks/context.ipynb
@@ -7,12 +7,9 @@
"source": [
"# Context\n",
"\n",
"![Context - User Analytics for LLM Powered Products](https://with.context.ai/langchain.png)\n",
">[Context](https://context.ai/) provides user analytics for LLM-powered products and features.\n",
"\n",
"[Context](https://context.ai/) provides user analytics for LLM powered products and features.\n",
"\n",
"With Context, you can start understanding your users and improving their experiences in less than 30 minutes.\n",
"\n"
"With `Context`, you can start understanding your users and improving their experiences in less than 30 minutes.\n"
]
},
{
@@ -89,11 +86,9 @@
"metadata": {},
"source": [
"## Usage\n",
"### Using the Context callback within a chat model\n",
"\n",
"The Context callback handler can be used to directly record transcripts between users and AI assistants.\n",
"### Context callback within a chat model\n",
"\n",
"#### Example"
"The Context callback handler can be used to directly record transcripts between users and AI assistants."
]
},
{
@@ -132,7 +127,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using the Context callback within Chains\n",
"### Context callback within Chains\n",
"\n",
"The Context callback handler can also be used to record the inputs and outputs of chains. Note that intermediate steps of the chain are not recorded - only the starting inputs and final outputs.\n",
"\n",
@@ -149,9 +144,7 @@
">callback = ContextCallbackHandler(token)\n",
">chat = ChatOpenAI(temperature=0.9, callbacks=[callback])\n",
">chain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[callback])\n",
">```\n",
"\n",
"#### Example"
">```\n"
]
},
{
@@ -203,7 +196,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
},
"vscode": {
"interpreter": {
12 changes: 7 additions & 5 deletions docs/docs/integrations/callbacks/infino.ipynb
@@ -7,12 +7,14 @@
"source": [
"# Infino\n",
"\n",
">[Infino](https://github.com/infinohq/infino) is a scalable telemetry store designed for logs, metrics, and traces. Infino can function as a standalone observability solution or as the storage layer in your observability stack.\n",
"\n",
"This example shows how one can track the following while calling OpenAI and ChatOpenAI models via `LangChain` and [Infino](https://github.com/infinohq/infino):\n",
"\n",
"* prompt input,\n",
"* response from `ChatGPT` or any other `LangChain` model,\n",
"* latency,\n",
"* errors,\n",
"* prompt input\n",
"* response from `ChatGPT` or any other `LangChain` model\n",
"* latency\n",
"* errors\n",
"* number of tokens consumed"
]
},
Expand Down Expand Up @@ -454,7 +456,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,
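A minimal sketch of the tracking setup the Infino notebook describes, attaching the handler to an OpenAI LLM; the `model_id`/`model_version` arguments are assumed from the handler's documented usage, and an Infino server is assumed to be running locally:

```python
# Sketch: record prompt, response, latency, errors, and token counts in Infino.
from langchain.callbacks import InfinoCallbackHandler
from langchain.llms import OpenAI

handler = InfinoCallbackHandler(
    model_id="test_openai",   # assumed identifier for grouping metrics
    model_version="0.1",      # assumed version label
    verbose=False,
)

llm = OpenAI(temperature=0.1, callbacks=[handler])
llm.generate(["Tell me a joke about observability"])
```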
22 changes: 11 additions & 11 deletions docs/docs/integrations/callbacks/labelstudio.ipynb
@@ -4,24 +4,24 @@
"cell_type": "markdown",
"metadata": {
"collapsed": true,
"jupyter": {
"outputs_hidden": true
},
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"# Label Studio\n",
"\n",
"<div>\n",
"<img src=\"https://labelstudio-pub.s3.amazonaws.com/lc/open-source-data-labeling-platform.png\" width=\"400\"/>\n",
"</div>\n",
"\n",
"Label Studio is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.\n",
">[Label Studio](https://labelstud.io/guide/get_started) is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.\n",
"\n",
"In this guide, you will learn how to connect a LangChain pipeline to Label Studio to:\n",
"In this guide, you will learn how to connect a LangChain pipeline to `Label Studio` to:\n",
"\n",
"- Aggregate all input prompts, conversations, and responses in a single LabelStudio project. This consolidates all the data in one place for easier labeling and analysis.\n",
"- Aggregate all input prompts, conversations, and responses in a single `Label Studio` project. This consolidates all the data in one place for easier labeling and analysis.\n",
"- Refine prompts and responses to create a dataset for supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) scenarios. The labeled data can be used to further train the LLM to improve its performance.\n",
"- Evaluate model responses through human feedback. LabelStudio provides an interface for humans to review and provide feedback on model responses, allowing evaluation and iteration."
"- Evaluate model responses through human feedback. `Label Studio` provides an interface for humans to review and provide feedback on model responses, allowing evaluation and iteration."
]
},
{
@@ -362,9 +362,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "labelops",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "labelops"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -376,9 +376,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 1
"nbformat_minor": 4
}
2 changes: 1 addition & 1 deletion docs/docs/integrations/callbacks/llmonitor.md
@@ -1,6 +1,6 @@
# LLMonitor

[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
>[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
<video controls width='100%' >
<source src='https://llmonitor.com/videos/demo-annotated.mp4'/>
19 changes: 9 additions & 10 deletions docs/docs/integrations/callbacks/promptlayer.ipynb
@@ -7,11 +7,10 @@
"source": [
"# PromptLayer\n",
"\n",
"![PromptLayer](https://promptlayer.com/text_logo.png)\n",
"\n",
"[PromptLayer](https://promptlayer.com) is a an LLM observability platform that lets you visualize requests, version prompts, and track usage. In this guide we will go over how to setup the `PromptLayerCallbackHandler`. \n",
">[PromptLayer](https://promptlayer.com) is an LLM observability platform that lets you visualize requests, version prompts, and track usage. In this guide, we will go over how to set up the `PromptLayerCallbackHandler`. \n",
"\n",
"While PromptLayer does have LLMs that integrate directly with LangChain (e.g. [`PromptLayerOpenAI`](https://python.langchain.com/docs/integrations/llms/promptlayer_openai)), this callback is the recommended way to integrate PromptLayer with LangChain.\n",
"While `PromptLayer` does have LLMs that integrate directly with LangChain (e.g. [`PromptLayerOpenAI`](https://python.langchain.com/docs/integrations/llms/promptlayer_openai)), this callback is the recommended way to integrate PromptLayer with LangChain.\n",
"\n",
"See [our docs](https://docs.promptlayer.com/languages/langchain) for more information."
]
@@ -51,7 +50,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Usage\n",
"## Usage\n",
"\n",
"Getting started with `PromptLayerCallbackHandler` is fairly simple, it takes two optional arguments:\n",
"1. `pl_tags` - an optional list of strings that will be tracked as tags on PromptLayer.\n",
@@ -63,7 +62,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simple OpenAI Example\n",
"## Simple OpenAI Example\n",
"\n",
"In this simple example we use `PromptLayerCallbackHandler` with `ChatOpenAI`. We add a PromptLayer tag named `chatopenai`"
]
@@ -99,7 +98,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### GPT4All Example"
"## GPT4All Example"
]
},
{
@@ -125,9 +124,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Full Featured Example\n",
"## Full Featured Example\n",
"\n",
"In this example we unlock more of the power of PromptLayer.\n",
"In this example, we unlock more of the power of `PromptLayer`.\n",
"\n",
"PromptLayer allows you to visually create, version, and track prompt templates. Using the [Prompt Registry](https://docs.promptlayer.com/features/prompt-registry), we can programmatically fetch the prompt template called `example`.\n",
"\n",
@@ -182,7 +181,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "base",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
Expand All @@ -196,7 +195,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8 (default, Apr 13 2021, 12:59:45) \n[Clang 10.0.0 ]"
"version": "3.10.12"
},
"vscode": {
"interpreter": {
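The setup described in the PromptLayer notebook amounts to attaching the handler as a callback; a short sketch using the `chatopenai` tag mentioned above (assumes the `promptlayer` package is installed and `PROMPTLAYER_API_KEY`/`OPENAI_API_KEY` are set in the environment):

```python
# Sketch: track a ChatOpenAI request in PromptLayer, tagged "chatopenai".
from langchain.callbacks import PromptLayerCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(
    temperature=0,
    callbacks=[PromptLayerCallbackHandler(pl_tags=["chatopenai"])],
)
chat([HumanMessage(content="What comes after 1, 2, 3?")])
```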
15 changes: 8 additions & 7 deletions docs/docs/integrations/callbacks/sagemaker_tracking.ipynb
@@ -7,14 +7,15 @@
"source": [
"# SageMaker Tracking\n",
"\n",
"This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into SageMaker Experiments. Here, we use different scenarios to showcase the capability:\n",
">[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly and easily build, train and deploy machine learning (ML) models. \n",
"\n",
">[Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability of `Amazon SageMaker` that lets you organize, track, compare and evaluate ML experiments and model versions.\n",
"\n",
"This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into `SageMaker Experiments`. Here, we use different scenarios to showcase the capability:\n",
"* **Scenario 1**: *Single LLM* - A case where a single LLM model is used to generate output based on a given prompt.\n",
"* **Scenario 2**: *Sequential Chain* - A case where a sequential chain of two LLM models is used.\n",
"* **Scenario 3**: *Agent with Tools (Chain of Thought)* - A case where multiple tools (search and math) are used in addition to an LLM.\n",
"\n",
"[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly and easily build, train and deploy machine learning (ML) models. \n",
"\n",
"[Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability of Amazon SageMaker that lets you organize, track, compare and evaluate ML experiments and model versions.\n",
"\n",
"In this notebook, we will create a single experiment to log the prompts from each scenario."
]
@@ -899,9 +900,9 @@
],
"instance_type": "ml.t3.large",
"kernelspec": {
"display_name": "conda_pytorch_p310",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "conda_pytorch_p310"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -913,7 +914,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.10"
"version": "3.10.12"
}
},
"nbformat": 4,
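A hedged sketch of Scenario 1 (single LLM) from the SageMaker Tracking notebook; the SageMaker SDK import paths, the experiment and run names, and the `SageMakerCallbackHandler(run)` construction are assumptions to be checked against the notebook itself:

```python
# Sketch: log prompts from a single LLM call into a SageMaker Experiments run.
from langchain.callbacks import SageMakerCallbackHandler
from langchain.llms import OpenAI
from sagemaker.experiments.run import Run  # assumed import path
from sagemaker.session import Session

session = Session()

# Placeholder experiment/run names; all scenario runs can share one experiment.
with Run(
    experiment_name="langchain-sagemaker-tracker",
    run_name="scenario-1-single-llm",
    sagemaker_session=session,
) as run:
    handler = SageMakerCallbackHandler(run)  # assumed: handler logs into this run
    llm = OpenAI(temperature=0.9, callbacks=[handler])
    llm.generate(["What would be a good name for a company that makes colorful socks?"])
```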
15 changes: 8 additions & 7 deletions docs/docs/integrations/callbacks/trubrics.ipynb
@@ -9,12 +9,13 @@
"source": [
"# Trubrics\n",
"\n",
"![Trubrics](https://miro.medium.com/v2/resize:fit:720/format:webp/1*AhYbKO-v8F4u3hx2aDIqKg.png)\n",
"\n",
"[Trubrics](https://trubrics.com) is an LLM user analytics platform that lets you collect, analyse and manage user\n",
"prompts & feedback on AI models. In this guide we will go over how to setup the `TrubricsCallbackHandler`. \n",
">[Trubrics](https://trubrics.com) is an LLM user analytics platform that lets you collect, analyse and manage user\n",
"prompts & feedback on AI models.\n",
">\n",
">Check out [Trubrics repo](https://github.com/trubrics/trubrics-sdk) for more information on `Trubrics`.\n",
"\n",
"Check out [our repo](https://github.com/trubrics/trubrics-sdk) for more information on Trubrics."
"In this guide, we will go over how to set up the `TrubricsCallbackHandler`. \n"
]
},
{
@@ -347,9 +348,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "langchain",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "langchain"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -361,7 +362,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.12"
}
},
"nbformat": 4,
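A minimal sketch of the `TrubricsCallbackHandler` setup the Trubrics notebook covers, under the assumption that the handler reads Trubrics credentials (`TRUBRICS_EMAIL`/`TRUBRICS_PASSWORD`) from the environment when constructed without arguments:

```python
# Sketch: forward prompts and generations to Trubrics for user analytics.
from langchain.callbacks import TrubricsCallbackHandler
from langchain.llms import OpenAI

llm = OpenAI(callbacks=[TrubricsCallbackHandler()])  # credentials from environment
llm.generate(["Tell me a joke"])
```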
20 changes: 20 additions & 0 deletions docs/docs/integrations/providers/context.mdx
@@ -0,0 +1,20 @@
# Context

>[Context](https://context.ai/) provides user analytics for LLM-powered products and features.
## Installation and Setup

We need to install the `context-python` Python package:

```bash
pip install context-python
```


## Callbacks

See a [usage example](/docs/integrations/callbacks/context).

```python
from langchain.callbacks import ContextCallbackHandler
```
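Building on the snippet quoted in the context.ipynb diff above, a minimal sketch of recording a chat transcript with the Context callback (keeping the token in a `CONTEXT_API_TOKEN` environment variable is an assumption):

```python
# Sketch: record a user/assistant transcript in Context via the callback handler.
import os

from langchain.callbacks import ContextCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

token = os.environ["CONTEXT_API_TOKEN"]
chat = ChatOpenAI(temperature=0, callbacks=[ContextCallbackHandler(token)])

chat([
    SystemMessage(content="You translate English to French."),
    HumanMessage(content="I love programming."),
])
```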
23 changes: 23 additions & 0 deletions docs/docs/integrations/providers/labelstudio.mdx
@@ -0,0 +1,23 @@
# Label Studio


>[Label Studio](https://labelstud.io/guide/get_started) is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.
## Installation and Setup

See the [Label Studio installation guide](https://labelstud.io/guide/install) for installation options.

We need to install the `label-studio` and `label-studio-sdk` Python packages:

```bash
pip install label-studio label-studio-sdk
```


## Callbacks

See a [usage example](/docs/integrations/callbacks/labelstudio).

```python
from langchain.callbacks import LabelStudioCallbackHandler
```
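A hedged usage sketch for the Label Studio callback; the `api_key`, `url`, `project_name`, and `mode` parameters are assumptions about the handler's signature and should be checked against the labelstudio notebook:

```python
# Sketch: collect prompts and LLM responses in a Label Studio project for labeling.
import os

from langchain.callbacks import LabelStudioCallbackHandler
from langchain.llms import OpenAI

llm = OpenAI(
    temperature=0,
    callbacks=[
        LabelStudioCallbackHandler(
            api_key=os.environ["LABEL_STUDIO_API_KEY"],
            url=os.environ.get("LABEL_STUDIO_URL", "http://localhost:8080"),
            project_name="LangChain-%Y-%m-%d",  # placeholder project name
            mode="prompt",                      # assumed: label plain prompts/completions
        )
    ],
)
llm("Tell me a joke")
```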
22 changes: 22 additions & 0 deletions docs/docs/integrations/providers/llmonitor.mdx
@@ -0,0 +1,22 @@
# LLMonitor

>[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
## Installation and Setup

Create an account on [llmonitor.com](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs), then copy your new app's `tracking id`.

Once you have it, set it as an environment variable by running:

```bash
export LLMONITOR_APP_ID="..."
```


## Callbacks

See a [usage example](/docs/integrations/callbacks/llmonitor).

```python
from langchain.callbacks import LLMonitorCallbackHandler
```
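A minimal usage sketch: with `LLMONITOR_APP_ID` exported as shown above, the handler is assumed to pick up the app id from the environment and can be attached to any LLM or chat model:

```python
# Sketch: trace a chat model call in LLMonitor via the callback handler.
from langchain.callbacks import LLMonitorCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

handler = LLMonitorCallbackHandler()  # app id read from LLMONITOR_APP_ID (assumed)

chat = ChatOpenAI(callbacks=[handler])
chat([HumanMessage(content="Hello, how are you?")])
```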