
Migrating from MultiPromptChain

MultiPromptChain routes an input query to one of multiple LLMChains: given an input query, it uses an LLM to select from a list of prompts, formats the query into that prompt, and generates a response.

MultiPromptChain does not support common chat model features, such as message roles and tool calling.

A LangGraph implementation confers a number of advantages for this problem:

  • Supports chat prompt templates, including messages with system and other roles;
  • Supports tool calling in the routing step;
  • Supports streaming of both individual steps and output tokens (a streaming sketch appears at the end of the LangGraph section below).

Now let's look at them side-by-side. Note that for this guide we will use langchain-openai >= 0.1.20:

%pip install -qU langchain-core langchain-openai
import os
from getpass import getpass

if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass()

Legacy

from langchain.chains.router.multi_prompt import MultiPromptChain
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

prompt_1_template = """
You are an expert on animals. Please answer the below query:

{input}
"""

prompt_2_template = """
You are an expert on vegetables. Please answer the below query:

{input}
"""

prompt_infos = [
    {
        "name": "animals",
        "description": "prompt for an animal expert",
        "prompt_template": prompt_1_template,
    },
    {
        "name": "vegetables",
        "description": "prompt for a vegetable expert",
        "prompt_template": prompt_2_template,
    },
]

chain = MultiPromptChain.from_prompts(llm, prompt_infos)
chain.invoke({"input": "What color are carrots?"})
{'input': 'What color are carrots?',
 'text': 'Carrots are most commonly orange, but they can also be found in a variety of other colors including purple, yellow, white, and red. The orange variety is the most popular and widely recognized.'}

In the LangSmith trace we can see the two steps of this process, including the prompt that routes the query and the final selected prompt.

LangGraph

%pip install -qU langgraph
from typing import Literal

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph
from typing_extensions import TypedDict

llm = ChatOpenAI(model="gpt-4o-mini")

# Define the prompts we will route to
prompt_1 = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an expert on animals."),
        ("human", "{input}"),
    ]
)
prompt_2 = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an expert on vegetables."),
        ("human", "{input}"),
    ]
)

# Construct the chains we will route to. These format the input query
# into the respective prompt, run it through a chat model, and cast
# the result to a string.
chain_1 = prompt_1 | llm | StrOutputParser()
chain_2 = prompt_2 | llm | StrOutputParser()
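
# Illustrative aside (not from the original guide): each branch is an ordinary
# runnable and can be invoked on its own, e.g.
#     chain_2.invoke({"input": "What color are carrots?"})
# which returns the model's answer as a string.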


# Next: define the chain that selects which branch to route to.
# Here we will take advantage of tool-calling features to force
# the output to select one of two desired branches.
route_system = "Route the user's query to either the animal or vegetable expert."
route_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", route_system),
        ("human", "{input}"),
    ]
)


# Define schema for output:
class RouteQuery(TypedDict):
    """Route query to destination expert."""

    destination: Literal["animal", "vegetable"]


route_chain = route_prompt | llm.with_structured_output(RouteQuery)
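
# Illustrative aside (not from the original guide): because RouteQuery is a
# TypedDict, the structured-output model returns a plain dict, e.g.
#     route_chain.invoke("What color are carrots?")
#     # -> {"destination": "vegetable"}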


# For LangGraph, we will define the state of the graph to hold the query,
# destination, and final answer.
class State(TypedDict):
    query: str
    destination: RouteQuery
    answer: str


# We define functions for each node, including routing the query:
async def route_query(state: State, config: RunnableConfig):
    destination = await route_chain.ainvoke(state["query"], config)
    return {"destination": destination}


# And one node for each prompt
async def prompt_1(state: State, config: RunnableConfig):
    return {"answer": await chain_1.ainvoke(state["query"], config)}


async def prompt_2(state: State, config: RunnableConfig):
    return {"answer": await chain_2.ainvoke(state["query"], config)}


# We then define logic that selects the prompt based on the classification
def select_node(state: State) -> Literal["prompt_1", "prompt_2"]:
    # state["destination"] holds the RouteQuery dict produced by the router,
    # so we compare its "destination" field rather than the dict itself.
    if state["destination"]["destination"] == "animal":
        return "prompt_1"
    else:
        return "prompt_2"


# Finally, assemble the multi-prompt chain. This is a sequence of two steps:
# 1) Select "animal" or "vegetable" via the route_chain, and collect the
# destination alongside the input query.
# 2) Route the input query to chain_1 or chain_2, based on the selection.
graph = StateGraph(State)
graph.add_node("route_query", route_query)
graph.add_node("prompt_1", prompt_1)
graph.add_node("prompt_2", prompt_2)

graph.add_edge(START, "route_query")
graph.add_conditional_edges("route_query", select_node)
graph.add_edge("prompt_1", END)
graph.add_edge("prompt_2", END)
app = graph.compile()
from IPython.display import Image

Image(app.get_graph().draw_mermaid_png())

We can invoke the chain as follows:

state = await app.ainvoke({"query": "what color are carrots"})
print(state["destination"])
print(state["answer"])
{'destination': 'vegetable'}
Carrots are most commonly orange, but they can also come in a variety of other colors, including purple, red, yellow, and white. The different colors often indicate varying flavors and nutritional profiles. For example, purple carrots contain anthocyanins, while orange carrots are rich in beta-carotene, which is converted to vitamin A in the body.
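
The LangGraph app also supports streaming of individual steps, as promised above. Below is a minimal sketch (an illustration, not part of the original guide) using the compiled app's astream method with stream_mode="updates", which emits each node's state update as it completes:

async for chunk in app.astream({"query": "what color are carrots"}, stream_mode="updates"):
    # Each chunk is a dict keyed by the node that just ran, e.g.
    # {"route_query": {"destination": ...}}, then {"prompt_2": {"answer": "..."}}
    print(chunk)

Token-level streaming of the model output is also available via other stream modes.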

In the LangSmith trace we can see the tool call that routed the query and the prompt that was selected to generate the answer.

Overview:

  • Under the hood, MultiPromptChain routes the query by instructing the LLM to generate JSON-formatted text and parsing out the intended destination. It takes a registry of string prompt templates as input.
  • The LangGraph implementation, built with lower-level primitives, uses tool calling to route to arbitrary chains. In this example, the chains are composed of chat prompt templates and chat models.

Next steps

See this tutorial for more detail on building with prompt templates, LLMs, and output parsers.

See the LangGraph documentation for detail on building with LangGraph.

