
Agent VectorDB Question Answering Benchmarking #

Here we go over how to benchmark performance on a question answering task, using an agent to route between multiple vector databases.

It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.

# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
 

Loading the data #

First, let's load the data.

from langchain.evaluation.loading import load_dataset
dataset = load_dataset("agent-vectordb-qa-sota-pg")
 
Found cached dataset json (/Users/qt/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--agent-vectordb-qa-sota-pg-d3ae24016b514f92/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e)
100%|██████████| 1/1 [00:00<00:00, 414.42it/s]
 
dataset[0]
 
{'question': 'What is the purpose of the NATO Alliance?',
 'answer': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.',
 'steps': [{'tool': 'State of Union QA System', 'tool_input': None},
  {'tool': None, 'tool_input': 'What is the purpose of the NATO Alliance?'}]}
 
dataset[-1]
 
{'question': 'What is the purpose of YC?',
 'answer': 'The purpose of YC is to cause startups to be founded that would not otherwise have existed.',
 'steps': [{'tool': 'Paul Graham QA System', 'tool_input': None},
  {'tool': None, 'tool_input': 'What is the purpose of YC?'}]}
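
Each example holds a question, a reference answer, and the expected agent steps (first the tool to route to, then the input to pass it). As a small sketch that is not part of the original walkthrough, you can check the size and fields of the dataset before running anything expensive:

print(len(dataset))       # number of question/answer examples in the benchmark
print(dataset[0].keys())  # dict_keys(['question', 'answer', 'steps'])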
 

Setting up a chain #

Now we need to create some pipelines for doing question answering. The first step is to create an index over the data in question.

from langchain.document_loaders import TextLoader
loader = TextLoader("../../modules/state_of_the_union.txt")
 
from langchain.indexes import VectorstoreIndexCreator
 
vectorstore_sota = VectorstoreIndexCreator(vectorstore_kwargs={"collection_name":"sota"}).from_loaders([loader]).vectorstore
 
Using embedded DuckDB without persistence: data will be transient
 

Now we can create a question answering chain.

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
 
chain_sota = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), chain_type="stuff", retriever=vectorstore_sota.as_retriever(), input_key="question")
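
If you want to sanity-check this chain on its own before involving an agent (an extra step, not in the original walkthrough), you can run the first dataset question directly against it, since that question targets the State of the Union text:

# Hypothetical sanity check (not in the original walkthrough): query the
# State of the Union chain directly before wrapping it in an agent
chain_sota.run(dataset[0]["question"])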
 

Now we do the same for the Paul Graham data.

loader = TextLoader("../../modules/paul_graham_essay.txt")
 
vectorstore_pg = VectorstoreIndexCreator(vectorstore_kwargs={"collection_name":"paul_graham"}).from_loaders([loader]).vectorstore
 
Using embedded DuckDB without persistence: data will be transient
 
chain_pg = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), chain_type="stuff", retriever=vectorstore_pg.as_retriever(), input_key="question")
 

We can now set up an agent to route between them.

from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
tools = [
    Tool(
        # tool names match the 'tool' values in the dataset's expected steps
        name="State of Union QA System",
        func=chain_sota.run,
        description="useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question."
    ),
    Tool(
        name="Paul Graham QA System",
        func=chain_pg.run,
        description="useful for when you need to answer questions about Paul Graham. Input should be a fully formed question."
    ),
]
 
agent = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, max_iterations=4)
 

Make a prediction #

First, we can make predictions one datapoint at a time. Working at this level of granularity allows us to explore the outputs in detail, and it is also a lot cheaper than running over multiple datapoints.

agent.run(dataset[0]['question'])
 
'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.'
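
You can compare this against the reference answer for the same datapoint, which in this case is the exact same string:

# Compare against the reference answer for the same datapoint; for this
# example it is identical to the agent's output above
dataset[0]["answer"]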
 

Make many predictions #

Now we can make predictions.

predictions = []
predicted_dataset = []
error_dataset = []
for data in dataset:
    new_data = {"input": data["question"], "answer": data["answer"]}
    try:
        predictions.append(agent(new_data))
        predicted_dataset.append(new_data)
    except Exception:
        # keep track of inputs the agent errored on so they are excluded from grading
        error_dataset.append(new_data)
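
The loop collects any example that raises an exception into error_dataset, so a single failure does not abort the whole run. A quick sketch (an addition here) to see how many predictions succeeded before grading them:

# Quick summary of the run: how many examples produced a prediction vs. errored
print(f"predicted: {len(predicted_dataset)}, errored: {len(error_dataset)}")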
 

Evaluate performance #

Now we can evaluate the predictions. The first thing we can do is look at them by eye.

predictions[0]
 
{'input': 'What is the purpose of the NATO Alliance?',
 'answer': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.',
 'output': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.'}
 

Next, we can use a language model to score them programmatically.

from langchain.evaluation.qa import QAEvalChain
 
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(predicted_dataset, predictions, question_key="input", prediction_key="output")
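
Each graded output is a dict whose 'text' field carries the verdict (that is the field pulled out in the next step); a quick peek, added here for illustration:

# Peek at one graded output; its 'text' field holds the verdict used below
graded_outputs[0]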
 

We can add in the graded output to the predictions dict and then get a count of the grades.

for i, prediction in enumerate(predictions):
    prediction['grade'] = graded_outputs[i]['text']
 
from collections import Counter
Counter([pred['grade'] for pred in predictions])
 
Counter({' CORRECT': 28, ' INCORRECT': 5})
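
Turning those counts into an accuracy figure is a one-liner; this sketch (an addition, reusing the same Counter logic) works out to 28/33, roughly 85%, for the run shown above:

# Hypothetical accuracy calculation from the same grade counts
grade_counts = Counter(pred['grade'] for pred in predictions)
accuracy = grade_counts[' CORRECT'] / sum(grade_counts.values())
print(f"Accuracy: {accuracy:.1%}")  # 28 / (28 + 5) ≈ 84.8% for this run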
 

We can also filter the datapoints to the incorrect examples and look at them.

incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
 
incorrect[0]
 
{'input': 'What are the four common sense steps that the author suggests to move forward safely?',
 'answer': 'The four common sense steps suggested by the author to move forward safely are: stay protected with vaccines and treatments, prepare for new variants, end the shutdown of schools and businesses, and stay vigilant.',
 'output': 'The four common sense steps suggested in the most recent State of the Union address are: cutting the cost of prescription drugs, providing a pathway to citizenship for Dreamers, revising laws so businesses have the workers they need and families don’t wait decades to reunite, and protecting access to health care and preserving a woman’s right to choose.',
 'grade': ' INCORRECT'}
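
To see whether the failures cluster around a particular document or question style, it can help to list just the questions of all the incorrect predictions (a convenience sketch, not in the original):

# Hypothetical convenience: list the questions for every incorrect prediction
for pred in incorrect:
    print(pred['input'])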