
How to customize conversational memory

This tutorial walks through a few ways to customize conversational memory.

from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
 
llm = OpenAI(temperature=0)
 

AI prefix

The first way is to change the AI prefix in the conversation summary. By default this prefix is set to "AI", but you can set it to anything you want. Note that if you change the prefix, you should also change the prompt used in the chain to reflect the new name. Let's walk through an example of this below.

# Here it is by default set to "AI"
conversation = ConversationChain(
    llm=llm, 
    verbose=True, 
    memory=ConversationBufferMemory()
)
 
conversation.predict(input="Hi there!")
 
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
 
Current conversation:
 
Human: Hi there!
AI:
 
> Finished ConversationChain chain.
 
" Hi there! It's nice to meet you. How can I help you today?"
 
conversation.predict(input="What's the weather?")
 
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
 
Current conversation:
 
Human: Hi there!
AI: Hi there! It's nice to meet you. How can I help you today?
Human: What's the weather?
AI:
 
> Finished ConversationChain chain.
 
' The current weather is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the next few days is sunny with temperatures in the mid-70s.'
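Before overriding anything, it can help to see where these prefixes actually live. A quick sanity check (not part of the original walkthrough) is to print the memory's buffer: ConversationBufferMemory renders each stored turn with the "Human"/"AI" labels that end up in the {history} slot of the prompt.

# Inspect the stored conversation; by default each turn is prefixed
# with "Human:" or "AI:", matching what the prompt template expects.
print(conversation.memory.buffer)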
 
# Now we can override it and set it to "AI Assistant"
from langchain.prompts.prompt import PromptTemplate
 
template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
 
Current conversation:
{history}
Human: {input}
AI Assistant:"""
PROMPT = PromptTemplate(
    input_variables=["history", "input"], template=template
)
conversation = ConversationChain(
    prompt=PROMPT,
    llm=llm, 
    verbose=True, 
    memory=ConversationBufferMemory(ai_prefix="AI Assistant")
)
 
conversation.predict(input="Hi there!")
 
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
 
Current conversation:
 
Human: Hi there!
AI Assistant:
 
> Finished ConversationChain chain.
 
" Hi there! It's nice to meet you. How can I help you today?"
 
conversation.predict(input="What's the weather?")
 
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
 
Current conversation:
 
Human: Hi there!
AI Assistant: Hi there! It's nice to meet you. How can I help you today?
Human: What's the weather?
AI Assistant:
 
> Finished ConversationChain chain.
 
' The current weather is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the rest of the day is sunny with a high of 78 degrees and a low of 65 degrees.'
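If you want to confirm that the memory itself, and not just the prompt, is using the new prefix, one quick way (a sketch, not part of the original example) is to load the memory variables and look at the rendered history:

# The history string returned by the memory now labels the model's turns
# with "AI Assistant" instead of the default "AI".
print(conversation.memory.load_memory_variables({})["history"])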
 

Human prefix

The next way is to change the Human prefix in the conversation summary. By default this prefix is set to "Human", but you can set it to anything you want. Note that if you change the prefix, you should also change the prompt used in the chain to reflect the new name. Let's walk through an example of this below.

# Now we can override it and set it to "Friend"
from langchain.prompts.prompt import PromptTemplate
 
template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
 
Current conversation:
{history}
Friend: {input}
AI:"""
PROMPT = PromptTemplate(
    input_variables=["history", "input"], template=template
)
conversation = ConversationChain(
    prompt=PROMPT,
    llm=llm, 
    verbose=True, 
    memory=ConversationBufferMemory(human_prefix="Friend")
)
 
conversation.predict(input="Hi there!")
 
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
 
Current conversation:
 
Friend: Hi there!
AI:
 
> Finished ConversationChain chain.
 
" Hi there! It's nice to meet you. How can I help you today?"
 
conversation.predict(input="What's the weather?")
 
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
 
Current conversation:
 
Friend: Hi there!
AI: Hi there! It's nice to meet you. How can I help you today?
Friend: What's the weather?
AI:
 
> Finished ConversationChain chain.
 
' The weather right now is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the rest of the day is mostly sunny with a high of 82 degrees.'
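The two overrides are independent, so nothing stops you from combining them. Below is a minimal sketch (not part of the original walkthrough, and assuming the same llm defined above) that renames both sides at once; the prompt and the memory just need to agree on the labels.

# Combine both overrides: the memory writes "Friend"/"AI Assistant"
# into its history, and the prompt template uses the same labels.
template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
{history}
Friend: {input}
AI Assistant:"""
PROMPT = PromptTemplate(
    input_variables=["history", "input"], template=template
)
conversation = ConversationChain(
    prompt=PROMPT,
    llm=llm,
    verbose=True,
    memory=ConversationBufferMemory(ai_prefix="AI Assistant", human_prefix="Friend")
)
conversation.predict(input="Hi there!")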