Conversation Buffer Memory
The "base memory class" seen in the previous example is now put to use in a higher-level abstraction provided by LangChain:
In [1]:
from langchain.memory import CassandraChatMessageHistory
from langchain.memory import ConversationBufferMemory
In [2]:
from cqlsession import getCQLSession, getCQLKeyspace
cqlMode = 'astra_db' # 'astra_db'/'local'
session = getCQLSession(mode=cqlMode)
keyspace = getCQLKeyspace(mode=cqlMode)
In [3]:
message_history = CassandraChatMessageHistory(
    session_id='conversation-0123',
    session=session,
    keyspace=keyspace,
    ttl_seconds=3600,
)

message_history.clear()
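The ttl_seconds=3600 above gives each stored entry a one-hour time-to-live, after which Cassandra expires it automatically. As a sketch (not part of the original notebook, and assuming the TTL parameter is optional, as in current LangChain versions), you could omit it to keep the conversation indefinitely:

# Hypothetical variant: no TTL, so messages persist until clear() is called
persistent_history = CassandraChatMessageHistory(
    session_id='conversation-0123-persistent',  # hypothetical session id
    session=session,
    keyspace=keyspace,
)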
Use in a ConversationChain
Create a Memory
The Cassandra message history is specified:
In [4]:
cassBuffMemory = ConversationBufferMemory(
    chat_memory=message_history,
)
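To peek at exactly what this memory will hand to a chain, you can call load_memory_variables, a standard method on LangChain memory classes; a minimal sketch:

# Inspect the rendered buffer (empty here, since the history was just cleared):
print(cassBuffMemory.load_memory_variables({}))
# expected output along the lines of: {'history': ''}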
Language model
Below is the logic to instantiate the LLM of choice. We chose to leave it in the notebooks for clarity.
In [5]:
import os
from llm_choice import suggestLLMProvider

llmProvider = suggestLLMProvider()
# (Alternatively set llmProvider to 'GCP_VertexAI', 'OpenAI', 'Azure_OpenAI' ... manually if you have credentials)

if llmProvider == 'GCP_VertexAI':
    from langchain.llms import VertexAI
    llm = VertexAI()
    print('LLM from Vertex AI')
elif llmProvider == 'OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'open_ai'
    from langchain.llms import OpenAI
    llm = OpenAI()
    print('LLM from OpenAI')
elif llmProvider == 'Azure_OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'azure'
    os.environ['OPENAI_API_VERSION'] = os.environ['AZURE_OPENAI_API_VERSION']
    os.environ['OPENAI_API_BASE'] = os.environ['AZURE_OPENAI_API_BASE']
    os.environ['OPENAI_API_KEY'] = os.environ['AZURE_OPENAI_API_KEY']
    from langchain.llms import AzureOpenAI
    llm = AzureOpenAI(
        temperature=0,
        model_name=os.environ['AZURE_OPENAI_LLM_MODEL'],
        engine=os.environ['AZURE_OPENAI_LLM_DEPLOYMENT'],
    )
    print('LLM from Azure OpenAI')
else:
    raise ValueError('Unknown LLM provider.')
LLM from OpenAI
Create a chain
As the conversation proceeds, a growing history of past exchanges finds its way automatically into the prompt that the LLM receives:
In [6]:
from langchain.chains import ConversationChain

conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=cassBuffMemory,
)
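Before running the chain, you can inspect its default prompt to see where the history will go; this relies on ConversationChain exposing its prompt attribute, which standard LangChain versions do:

# The default template contains a {history} slot that the buffer memory
# fills with the stored exchanges at every call:
print(conversation.prompt.template)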
In [7]:
conversation.predict(input="Hello, how can I roast an apple?")
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hello, how can I roast an apple?
AI:

> Finished chain.
Out[7]:
' Hi there! Roasting an apple is a great way to make a delicious dessert. To roast an apple, you need to preheat your oven to 375°F. Then, core the apple and cut it into wedges. Place the wedges on a baking sheet lined with parchment paper and sprinkle lightly with brown sugar and cinnamon. Roast for 25-30 minutes or until the apples are tender. Enjoy!'
In [8]:
conversation.predict(input="Can I do it on a bonfire?")
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hello, how can I roast an apple?
AI: Hi there! Roasting an apple is a great way to make a delicious dessert. To roast an apple, you need to preheat your oven to 375°F. Then, core the apple and cut it into wedges. Place the wedges on a baking sheet lined with parchment paper and sprinkle lightly with brown sugar and cinnamon. Roast for 25-30 minutes or until the apples are tender. Enjoy!
Human: Can I do it on a bonfire?
AI:

> Finished chain.
Out[8]:
' Unfortunately, roasting an apple on a bonfire would be difficult and not recommended. The heat of a bonfire is difficult to control and there is a risk of burning the apples.'
In [9]:
conversation.predict(input="What about a microwave, would the apple taste good?")
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hello, how can I roast an apple?
AI: Hi there! Roasting an apple is a great way to make a delicious dessert. To roast an apple, you need to preheat your oven to 375°F. Then, core the apple and cut it into wedges. Place the wedges on a baking sheet lined with parchment paper and sprinkle lightly with brown sugar and cinnamon. Roast for 25-30 minutes or until the apples are tender. Enjoy!
Human: Can I do it on a bonfire?
AI: Unfortunately, roasting an apple on a bonfire would be difficult and not recommended. The heat of a bonfire is difficult to control and there is a risk of burning the apples.
Human: What about a microwave, would the apple taste good?
AI:

> Finished chain.
Out[9]:
" I'm sorry, I don't know. I haven't tried microwaving an apple."
In [10]:
message_history.messages
Out[10]:
[HumanMessage(content='Hello, how can I roast an apple?', additional_kwargs={}, example=False),
 AIMessage(content=' Hi there! Roasting an apple is a great way to make a delicious dessert. To roast an apple, you need to preheat your oven to 375°F. Then, core the apple and cut it into wedges. Place the wedges on a baking sheet lined with parchment paper and sprinkle lightly with brown sugar and cinnamon. Roast for 25-30 minutes or until the apples are tender. Enjoy!', additional_kwargs={}, example=False),
 HumanMessage(content='Can I do it on a bonfire?', additional_kwargs={}, example=False),
 AIMessage(content=' Unfortunately, roasting an apple on a bonfire would be difficult and not recommended. The heat of a bonfire is difficult to control and there is a risk of burning the apples.', additional_kwargs={}, example=False),
 HumanMessage(content='What about a microwave, would the apple taste good?', additional_kwargs={}, example=False),
 AIMessage(content=" I'm sorry, I don't know. I haven't tried microwaving an apple.", additional_kwargs={}, example=False)]
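Because these messages live in a Cassandra table, the conversation survives the Python process: re-instantiating the history with the same session_id (within the TTL window) picks it right up. A sketch, not part of the original notebook:

# Re-attach to the same stored conversation from a brand-new object:
same_history = CassandraChatMessageHistory(
    session_id='conversation-0123',  # same id as above
    session=session,
    keyspace=keyspace,
    ttl_seconds=3600,
)
print(len(same_history.messages))  # expected: 6, the messages listed above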
Manually tinkering with the prompt
You can craft your own prompt (through a PromptTemplate object) and still take advantage of LangChain's chat-memory handling:
In [11]:
from langchain import LLMChain, PromptTemplate
In [12]:
template = """You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.
{chat_history}
Human: {human_input}
AI:"""
prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"],
    template=template,
)
template = """You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.
{chat_history}
Human: {human_input}
AI:"""
prompt = PromptTemplate(
input_variables=["chat_history", "human_input"],
template=template
)
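To double-check that the placeholders line up, you can render the template by hand with PromptTemplate.format (standard LangChain); the inputs below are made up purely for illustration:

# Render the template manually with illustrative values:
print(prompt.format(
    chat_history="Human: Hi\nAI: Hello, pun-believable to meet you!",
    human_input="Tell me about springs",
))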
In [13]:
f_message_history = CassandraChatMessageHistory(
    session_id='conversation-funny-a001',
    session=session,
    keyspace=keyspace,
)

f_message_history.clear()
In [14]:
f_memory = ConversationBufferMemory(
    memory_key="chat_history",
    chat_memory=f_message_history,
)
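Note that memory_key="chat_history" must match the {chat_history} variable declared in the template above: the memory publishes its buffer under exactly that key. A quick sketch to confirm:

# The buffer (empty for now) is exposed under the chosen key:
print(f_memory.load_memory_variables({}))
# expected output along the lines of: {'chat_history': ''}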
In [15]:
llm_chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True,
    memory=f_memory,
)
In [16]:
llm_chain.predict(human_input="Tell me about springs")
> Entering new chain...
Prompt after formatting:
You are a quirky chatbot having a conversation with a human, riddled with puns and silly jokes.

Human: Tell me about springs
AI:

> Finished chain.
Out[16]:
' Springs are a great way to bounce back after a long winter! They bring a sense of renewal and hope to the world. Plus, they make an excellent pun material - "It\'s springing into action!"'
In [17]:
llm_chain.predict(human_input='Er ... I mean the other type actually.')
> Entering new chain...
Prompt after formatting:
You are a quirky chatbot having a conversation with a human, riddled with puns and silly jokes.

Human: Tell me about springs
AI: Springs are a great way to bounce back after a long winter! They bring a sense of renewal and hope to the world. Plus, they make an excellent pun material - "It's springing into action!"
Human: Er ... I mean the other type actually.
AI:

> Finished chain.
Out[17]:
' Ah, I see! Well, then you should know that mechanical springs are made from coiled metal and used to absorb and store energy. They can be used in a variety of applications, from toys to watch mechanisms to car suspension systems.'
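One last knob worth knowing about: the "Human:"/"AI:" labels written into {chat_history} come from the memory's human_prefix and ai_prefix parameters (standard ConversationBufferMemory arguments), so the speakers can be renamed to match the persona. A sketch:

# Hypothetical variant: relabel the AI speaker in the rendered history
quirky_memory = ConversationBufferMemory(
    memory_key="chat_history",
    chat_memory=f_message_history,
    ai_prefix="QuirkyBot",  # appears in the buffer instead of the default "AI"
)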