Structured Output Parser

While the Pydantic/JSON parser is more powerful, here we start with a data structure that only has text fields.

from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
 

Here we define the response schemas we want to receive.

response_schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to answer the user's question, should be a website.")
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
 

We now get a string that contains instructions for how the response should be formatted, which we then insert into our prompt.

format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
    template="answer the users question as best as possible.\n{format_instructions}\n{question}",
    input_variables=["question"],
    partial_variables={"format_instructions": format_instructions}
)
 
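
It can be useful to inspect this string before it goes into the prompt. Below is a minimal sketch of what it contains, assuming the usual behaviour of get_format_instructions() (the exact wording varies between LangChain versions).

print(format_instructions)
 
# Roughly (wording depends on the version), the instructions ask the model to reply
# with a markdown ```json code block containing the two keys defined above:
#
# {
#     "answer": string  // answer to the user's question
#     "source": string  // source used to answer the user's question, should be a website.
# }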

We can now use this to format a prompt to send to the language model, and then parse the returned result.

model = OpenAI(temperature=0)
 
_input = prompt.format_prompt(question="what's the capital of france")
output = model(_input.to_string())
 
output_parser.parse(output)
 
{'answer': 'Paris', 'source': 'https://en.wikipedia.org/wiki/Paris'}
 
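
If the model deviates from the requested format, parse() raises an exception instead of returning a partial dict. Here is a minimal defensive sketch, assuming OutputParserException is importable from langchain.schema as in this version of LangChain.

from langchain.schema import OutputParserException
 
try:
    parsed = output_parser.parse(output)
except OutputParserException as err:
    # The model did not follow the format instructions; retry, fall back, or log here.
    print(f"Failed to parse model output: {err}")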

And here's an example of using this with a chat model:

chat_model = ChatOpenAI(temperature=0)
 
prompt = ChatPromptTemplate(
    messages=[
        HumanMessagePromptTemplate.from_template("answer the users question as best as possible.\n{format_instructions}\n{question}")  
    ],
    input_variables=["question"],
    partial_variables={"format_instructions": format_instructions}
)
 
_input = prompt.format_prompt(question="what's the capital of france")
output = chat_model(_input.to_messages())
 
output_parser.parse(output.content)
 
{'answer': 'Paris', 'source': 'https://en.wikipedia.org/wiki/Paris'}
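
The same pieces can also be bundled into a chain so that prompt formatting and the model call happen in one step; below is a minimal sketch, assuming the classic LLMChain interface from langchain.chains (parsing is still done explicitly afterwards).

from langchain.chains import LLMChain
 
chain = LLMChain(llm=chat_model, prompt=prompt)
 
# run() fills in {question}, sends the formatted messages to the chat model,
# and returns the raw text, which we still parse ourselves.
raw_output = chain.run(question="what's the capital of france")
output_parser.parse(raw_output)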