
Migrating from MapRerankDocumentsChain

MapRerankDocumentsChain implements a strategy for analyzing long texts. The strategy is as follows:

  • Split a text into smaller documents;
  • Map a process to the set of documents, where the process includes generating a score;
  • Rank the results by score and return the maximum.

A common process in this context is question-answering using pieces of context from a document. Forcing the model to generate a score along with its answer helps to select for answers generated only by relevant context; a rough sketch of the strategy is shown below.
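
To make the strategy concrete, here is a minimal sketch of the map-and-rerank step in plain Python. It is purely illustrative: answer_and_score is a hypothetical stand-in for whatever process maps one document to an (answer, score) pair.

def map_rerank(docs, answer_and_score):
    # "Map" step: generate an answer and a score for every document independently.
    scored = [answer_and_score(doc) for doc in docs]
    # "Rerank" step: return the answer with the highest score.
    best_answer, _ = max(scored, key=lambda pair: pair[1])
    return best_answer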

A LangGraph implementation allows tool calling and other features to be incorporated into this problem. Below we will go through both MapRerankDocumentsChain and a corresponding LangGraph implementation via a simple example.

Example

Let's go through an example where we analyze a set of documents. We will use the following three documents:

from langchain_core.documents import Document

documents = [
    Document(page_content="Alice has blue eyes", metadata={"title": "book_chapter_2"}),
    Document(page_content="Bob has brown eyes", metadata={"title": "book_chapter_1"}),
    Document(
        page_content="Charlie has green eyes", metadata={"title": "book_chapter_3"}
    ),
]

Legacy


Below we show an implementation with MapRerankDocumentsChain. We define the prompt template for a question-answering task and instantiate a LLMChain object for this purpose. We define how documents are formatted into the prompt and ensure consistency among the keys in the various prompts.

from langchain.chains import LLMChain, MapRerankDocumentsChain
from langchain.output_parsers.regex import RegexParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

document_variable_name = "context"
llm = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`
# The actual prompt will need to be a lot more complex, this is just
# an example.
prompt_template = (
    "What color are Bob's eyes? "
    "Output both your answer and a score (1-10) of how confident "
    "you are in the format: <Answer>\nScore: <Score>.\n\n"
    "Provide no other commentary.\n\n"
    "Context: {context}"
)
output_parser = RegexParser(
    regex=r"(.*?)\nScore: (.*)",
    output_keys=["answer", "score"],
)
prompt = PromptTemplate(
    template=prompt_template,
    input_variables=["context"],
    output_parser=output_parser,
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
chain = MapRerankDocumentsChain(
    llm_chain=llm_chain,
    document_variable_name=document_variable_name,
    rank_key="score",
    answer_key="answer",
)
response = chain.invoke(documents)
response["output_text"]
/langchain/libs/langchain/langchain/chains/llm.py:369: UserWarning: The apply_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
  warnings.warn(
'Brown'
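
For reference, the RegexParser above does nothing more than split the raw completion into the two output keys (note that the score comes back as a string). A minimal standalone sketch:

from langchain.output_parsers.regex import RegexParser

parser = RegexParser(regex=r"(.*?)\nScore: (.*)", output_keys=["answer", "score"])
parser.parse("Brown\nScore: 10")
# -> {'answer': 'Brown', 'score': '10'}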

Inspecting the LangSmith trace for the above run, we can see three LLM calls (one per document), and that the scoring mechanism mitigated hallucinations.

LangGraph


Below we show a LangGraph implementation of this process. Note that our template is simplified, since we delegate the formatting instructions to the chat model's tool-calling features via the .with_structured_output method.

Here we follow a basic map-reduce workflow to execute the LLM calls in parallel.

We first need to install langgraph:

pip install -qU langgraph
import operator
from typing import Annotated, List, TypedDict

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langgraph.constants import Send
from langgraph.graph import END, START, StateGraph


class AnswerWithScore(TypedDict):
    answer: str
    score: Annotated[int, ..., "Score from 1-10."]


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt_template = "What color are Bob's eyes?\n\nContext: {context}"
prompt = ChatPromptTemplate.from_template(prompt_template)

# The below chain formats context from a document into a prompt, then
# generates a response structured according to the AnswerWithScore schema.
map_chain = prompt | llm.with_structured_output(AnswerWithScore)
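
# For illustration (hypothetical, not run here): invoking this chain on one
# snippet, e.g. map_chain.invoke("Bob has brown eyes"), returns a dict that
# matches the AnswerWithScore schema, e.g. {"answer": "...", "score": 10}.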

# Below we define the components that will make up the graph


# This will be the overall state of the graph.
# It will contain the input document contents, corresponding
# answers with scores, and a final answer.
class State(TypedDict):
    contents: List[str]
    answers_with_scores: Annotated[list, operator.add]
    answer: str


# This will be the state of the node that we will "map" all
# documents to in order to generate answers with scores
class MapState(TypedDict):
    content: str


# Here we define the logic to map out over the documents
# We will use this as an edge in the graph
def map_analyses(state: State):
    # We will return a list of `Send` objects
    # Each `Send` object consists of the name of a node in the graph
    # as well as the state to send to that node
    return [
        Send("generate_analysis", {"content": content}) for content in state["contents"]
    ]


# Here we generate an answer with score, given a document
async def generate_analysis(state: MapState):
    response = await map_chain.ainvoke(state["content"])
    return {"answers_with_scores": [response]}


# Here we will select the top answer. Note that we return the full
# answer-with-score dict, which is what appears under result["answer"] below.
def pick_top_ranked(state: State):
    ranked_answers = sorted(
        state["answers_with_scores"], key=lambda x: -int(x["score"])
    )
    return {"answer": ranked_answers[0]}


# Construct the graph: here we put everything together to construct our graph
graph = StateGraph(State)
graph.add_node("generate_analysis", generate_analysis)
graph.add_node("pick_top_ranked", pick_top_ranked)
graph.add_conditional_edges(START, map_analyses, ["generate_analysis"])
graph.add_edge("generate_analysis", "pick_top_ranked")
graph.add_edge("pick_top_ranked", END)
app = graph.compile()
from IPython.display import Image

Image(app.get_graph().draw_mermaid_png())

result = await app.ainvoke({"contents": [doc.page_content for doc in documents]})
result["answer"]
{'answer': 'Bob has brown eyes.', 'score': 10}

Inspecting the LangSmith trace for the above run, we again see three LLM calls, as before. Using the model's tool-calling features also allowed us to dispense with the parsing step.

Next steps

See these how-to guides for more on question-answering tasks with RAG.

Check out the LangGraph documentation for detail on building with LangGraph, including this guide on the details of map-reduce in LangGraph.

