diff --git a/docs/docs/expression_language/how_to/functions.ipynb b/docs/docs/expression_language/how_to/functions.ipynb index 67e36114c0968..d1849ff0a7b3e 100644 --- a/docs/docs/expression_language/how_to/functions.ipynb +++ b/docs/docs/expression_language/how_to/functions.ipynb @@ -1,5 +1,16 @@ { "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "sidebar_position: 2\n", + "title: \"RunnableLambda: Run Custom Functions\"\n", + "keywords: [RunnableLambda, LCEL]\n", + "---" + ] + }, { "cell_type": "markdown", "id": "fbc4bf6e", @@ -7,7 +18,7 @@ "source": [ "# Run custom functions\n", "\n", - "You can use arbitrary functions in the pipeline\n", + "You can use arbitrary functions in the pipeline.\n", "\n", "Note that all inputs to these functions need to be a SINGLE argument. If you have a function that accepts multiple arguments, you should write a wrapper that accepts a single input and unpacks it into multiple arguments." ] diff --git a/docs/docs/expression_language/how_to/map.ipynb b/docs/docs/expression_language/how_to/map.ipynb index 72df445bc1218..71c22a0b0e2d0 100644 --- a/docs/docs/expression_language/how_to/map.ipynb +++ b/docs/docs/expression_language/how_to/map.ipynb @@ -1,77 +1,136 @@ { "cells": [ + { + "cell_type": "markdown", + "id": "e2596041-9b76-4e74-836f-e6235086bbf0", + "metadata": {}, + "source": [ + "---\n", + "sidebar_position: 0\n", + "title: \"RunnableParallel: Manipulating data\"\n", + "keywords: [RunnableParallel, RunnableMap, LCEL]\n", + "---" + ] + }, { "cell_type": "markdown", "id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2", "metadata": {}, "source": [ - "# Parallelize steps\n", + "# Manipulating inputs & output\n", "\n", - "RunnableParallel (aka. RunnableMap) makes it easy to execute multiple Runnables in parallel, and to return the output of these Runnables as a map."
+ "RunnableParallel can be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence.\n", + "\n", + "Here the input to the prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and pass the user input through under the \"question\" key.\n", + "\n", + "\n" ] }, { "cell_type": "code", - "execution_count": 2, - "id": "7e1873d6-d4b6-43ac-96a1-edcf178201e0", + "execution_count": 3, + "id": "267d1460-53c1-4fdb-b2c3-b6a1eb7fccff", "metadata": {}, "outputs": [ { "data": { "text/plain": [ - "{'joke': AIMessage(content=\"Why don't bears wear shoes? \\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False),\n", - " 'poem': AIMessage(content=\"In woodland depths, bear prowls with might,\\nSilent strength, nature's sovereign, day and night.\", additional_kwargs={}, example=False)}" + "'Harrison worked at Kensho.'" ] }, - "execution_count": 2, + "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain.chat_models import ChatOpenAI\n", + "from langchain.embeddings import OpenAIEmbeddings\n", "from langchain.prompts import ChatPromptTemplate\n", - "from langchain.schema.runnable import RunnableParallel\n", + "from langchain.schema.output_parser import StrOutputParser\n", + "from langchain.schema.runnable import RunnablePassthrough\n", + "from langchain.vectorstores import FAISS\n", "\n", + "vectorstore = FAISS.from_texts(\n", + " [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n", + ")\n", + "retriever = vectorstore.as_retriever()\n", + "template = \"\"\"Answer the question based only on the following context:\n", + "{context}\n", + "\n", + "Question: {question}\n", + "\"\"\"\n", + "prompt = ChatPromptTemplate.from_template(template)\n", "model = ChatOpenAI()\n", - "joke_chain = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | 
model\n", - "poem_chain = (\n", - " ChatPromptTemplate.from_template(\"write a 2-line poem about {topic}\") | model\n", + "\n", + "retrieval_chain = (\n", + " {\"context\": retriever, \"question\": RunnablePassthrough()}\n", + " | prompt\n", + " | model\n", + " | StrOutputParser()\n", ")\n", "\n", - "map_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)\n", + "retrieval_chain.invoke(\"where did harrison work?\")" + ] + }, + { + "cell_type": "markdown", + "id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1", + "metadata": {}, + "source": [ + "::: {.callout-tip}\n", + "Note that when composing a RunnableParallel with another Runnable we don't even need to wrap our dictionary in the RunnableParallel class — the type conversion is handled for us. In the context of a chain, these are equivalent:\n", + ":::\n", "\n", - "map_chain.invoke({\"topic\": \"bear\"})" + "```\n", + "{\"context\": retriever, \"question\": RunnablePassthrough()}\n", + "```\n", + "\n", + "```\n", + "RunnableParallel({\"context\": retriever, \"question\": RunnablePassthrough()})\n", + "```\n", + "\n", + "```\n", + "RunnableParallel(context=retriever, question=RunnablePassthrough())\n", + "```\n", + "\n" ] }, { "cell_type": "markdown", - "id": "df867ae9-1cec-4c9e-9fef-21969b206af5", + "id": "7c1b8baa-3a80-44f0-bb79-d22f79815d3d", "metadata": {}, "source": [ - "## Manipulating outputs/inputs\n", - "Maps can be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence." + "## Using itemgetter as shorthand\n", + "\n", + "Note that you can use Python's `itemgetter` as shorthand to extract data from the map when combining with `RunnableParallel`. You can find more information about itemgetter in the [Python Documentation](https://docs.python.org/3/library/operator.html#operator.itemgetter). 
\n", + "\n", + "In the example below, we use itemgetter to extract specific keys from the map:" ] }, { "cell_type": "code", - "execution_count": 3, - "id": "267d1460-53c1-4fdb-b2c3-b6a1eb7fccff", + "execution_count": 6, + "id": "84fc49e1-2daf-4700-ae33-a0a6ed47d5f6", "metadata": {}, "outputs": [ { "data": { "text/plain": [ - "'Harrison worked at Kensho.'" + "'Harrison ha lavorato a Kensho.'" ] }, - "execution_count": 3, + "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ + "from operator import itemgetter\n", + "\n", + "from langchain.chat_models import ChatOpenAI\n", "from langchain.embeddings import OpenAIEmbeddings\n", + "from langchain.prompts import ChatPromptTemplate\n", "from langchain.schema.output_parser import StrOutputParser\n", "from langchain.schema.runnable import RunnablePassthrough\n", "from langchain.vectorstores import FAISS\n", @@ -80,31 +139,72 @@ " [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n", ")\n", "retriever = vectorstore.as_retriever()\n", + "\n", "template = \"\"\"Answer the question based only on the following context:\n", "{context}\n", "\n", "Question: {question}\n", + "\n", + "Answer in the following language: {language}\n", "\"\"\"\n", "prompt = ChatPromptTemplate.from_template(template)\n", "\n", - "retrieval_chain = (\n", - " {\"context\": retriever, \"question\": RunnablePassthrough()}\n", + "chain = (\n", + " {\n", + " \"context\": itemgetter(\"question\") | retriever,\n", + " \"question\": itemgetter(\"question\"),\n", + " \"language\": itemgetter(\"language\"),\n", + " }\n", " | prompt\n", " | model\n", " | StrOutputParser()\n", ")\n", "\n", - "retrieval_chain.invoke(\"where did harrison work?\")" + "chain.invoke({\"question\": \"where did harrison work\", \"language\": \"italian\"})" ] }, { "cell_type": "markdown", - "id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1", + "id": "bc2f9847-39aa-4fe4-9049-3a8969bc4bce", "metadata": {}, "source": [ - "Here the input to prompt is 
expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the \"question\" key.\n", + "## Parallelize steps\n", + "\n", + "RunnableParallel (aka. RunnableMap) makes it easy to execute multiple Runnables in parallel, and to return the output of these Runnables as a map." + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "31f18442-f837-463f-bef4-8729368f5f8b", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "{'joke': AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\"),\n", + " 'poem': AIMessage(content=\"In the wild's embrace, bear roams free,\\nStrength and grace, a majestic decree.\")}" + ] + }, + "execution_count": 1, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from langchain.chat_models import ChatOpenAI\n", + "from langchain.prompts import ChatPromptTemplate\n", + "from langchain.schema.runnable import RunnableParallel\n", + "\n", + "model = ChatOpenAI()\n", + "joke_chain = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | model\n", + "poem_chain = (\n", + " ChatPromptTemplate.from_template(\"write a 2-line poem about {topic}\") | model\n", + ")\n", "\n", - "Note that when composing a RunnableParallel with another Runnable we don't even need to wrap our dictionary in the RunnableParallel class — the type conversion is handled for us." 
+ "map_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)\n", + "\n", + "map_chain.invoke({\"topic\": \"bear\"})" ] }, { @@ -194,7 +294,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.1" + "version": "3.11.6" } }, "nbformat": 4, diff --git a/docs/docs/expression_language/how_to/passthrough.ipynb b/docs/docs/expression_language/how_to/passthrough.ipynb new file mode 100644 index 0000000000000..4dc42d2e66c5b --- /dev/null +++ b/docs/docs/expression_language/how_to/passthrough.ipynb @@ -0,0 +1,159 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "d35de667-0352-4bfb-a890-cebe7f676fe7", + "metadata": {}, + "source": [ + "---\n", + "sidebar_position: 1\n", + "title: \"RunnablePassthrough: Passing data through\"\n", + "keywords: [RunnablePassthrough, RunnableParallel, LCEL]\n", + "---" + ] + }, + { + "cell_type": "markdown", + "id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2", + "metadata": {}, + "source": [ + "# Passing data through\n", + "\n", + "RunnablePassthrough allows you to pass inputs unchanged, or with the addition of extra keys. It is typically used in conjunction with RunnableParallel to assign data to a new key in the map.\n", + "\n", + "RunnablePassthrough(), called on its own, will simply take the input and pass it through.\n", + "\n", + "RunnablePassthrough called with assign (`RunnablePassthrough.assign(...)`) will take the input and add the extra arguments passed to the assign function. 
\n", + "\n", + "See the example below:" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "id": "03988b8d-d54c-4492-8707-1594372cf093", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "{'passed': {'num': 1}, 'extra': {'num': 1, 'mult': 3}, 'modified': 2}" + ] + }, + "execution_count": 11, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from langchain.schema.runnable import RunnableParallel, RunnablePassthrough\n", + "\n", + "runnable = RunnableParallel(\n", + "    passed=RunnablePassthrough(),\n", + "    extra=RunnablePassthrough.assign(mult=lambda x: x[\"num\"] * 3),\n", + "    modified=lambda x: x[\"num\"] + 1,\n", + ")\n", + "\n", + "runnable.invoke({\"num\": 1})" + ] + }, + { + "cell_type": "markdown", + "id": "702c7acc-cd31-4037-9489-647df192fd7c", + "metadata": {}, + "source": [ + "As seen above, the `passed` key was called with `RunnablePassthrough()`, so it simply passed on `{'num': 1}`.\n", + "\n", + "In the second line, we used `RunnablePassthrough.assign` with a lambda that multiplies the numerical value by 3. In this case, `extra` was set to `{'num': 1, 'mult': 3}`, which is the original value with the `mult` key added.\n", + "\n", + "Finally, we also set a third key in the map, `modified`, which uses a lambda that adds 1 to `num`, resulting in a `modified` key with the value `2`." + ] + }, + { + "cell_type": "markdown", + "id": "15187a3b-d666-4b9b-a258-672fc51fe0e2", + "metadata": {}, + "source": [ + "## Retrieval Example\n", + "\n", + "In the example below, we see a use case where we use RunnablePassthrough along with RunnableParallel. 
" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "id": "267d1460-53c1-4fdb-b2c3-b6a1eb7fccff", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'Harrison worked at Kensho.'" + ] + }, + "execution_count": 17, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from langchain.chat_models import ChatOpenAI\n", + "from langchain.embeddings import OpenAIEmbeddings\n", + "from langchain.prompts import ChatPromptTemplate\n", + "from langchain.schema.output_parser import StrOutputParser\n", + "from langchain.schema.runnable import RunnablePassthrough\n", + "from langchain.vectorstores import FAISS\n", + "\n", + "vectorstore = FAISS.from_texts(\n", + "    [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n", + ")\n", + "retriever = vectorstore.as_retriever()\n", + "template = \"\"\"Answer the question based only on the following context:\n", + "{context}\n", + "\n", + "Question: {question}\n", + "\"\"\"\n", + "prompt = ChatPromptTemplate.from_template(template)\n", + "model = ChatOpenAI()\n", + "\n", + "retrieval_chain = (\n", + "    {\"context\": retriever, \"question\": RunnablePassthrough()}\n", + "    | prompt\n", + "    | model\n", + "    | StrOutputParser()\n", + ")\n", + "\n", + "retrieval_chain.invoke(\"where did harrison work?\")" + ] + }, + { + "cell_type": "markdown", + "id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1", + "metadata": {}, + "source": [ + "Here the input to the prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and pass the user input through under the \"question\" key. In this case, the RunnablePassthrough allows us to pass the user's question on to the prompt and model. 
\n" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.6" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/docs/docs/expression_language/how_to/routing.ipynb b/docs/docs/expression_language/how_to/routing.ipynb index 61f8598359b6b..e8242635d5f3b 100644 --- a/docs/docs/expression_language/how_to/routing.ipynb +++ b/docs/docs/expression_language/how_to/routing.ipynb @@ -1,5 +1,16 @@ { "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "sidebar_position: 3\n", + "title: \"RunnableBranch: Dynamically route logic based on input\"\n", + "keywords: [RunnableBranch, LCEL]\n", + "---" + ] + }, { "cell_type": "markdown", "id": "4b47436a", @@ -63,7 +74,7 @@ "chain = (\n", " PromptTemplate.from_template(\n", " \"\"\"Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.\n", - " \n", + "\n", "Do not respond with more than one word.\n", "\n", "\n", @@ -293,7 +304,7 @@ } ], "source": [ - "full_chain.invoke({\"question\": \"how do I use Anthroipc?\"})" + "full_chain.invoke({\"question\": \"how do I use Anthropic?\"})" ] }, {