Semantic LLM caching¶
NOTE: this uses Cassandra's "Vector Search" capability. Make sure you are connecting to a vector-enabled database for this demo.
The Cassandra-backed "semantic cache" for prompt responses is imported like this:
from langchain.cache import CassandraSemanticCache
As usual, a database connection is needed to access Cassandra. The following assumes that a vector-search-capable Astra DB instance is available. Adjust as needed.
from cqlsession import getCQLSession, getCQLKeyspace
cqlMode = 'astra_db' # 'astra_db'/'local'
session = getCQLSession(mode=cqlMode)
keyspace = getCQLKeyspace(mode=cqlMode)
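In case you are running against a locally-deployed Cassandra rather than Astra DB, the helper essentially boils down to a plain driver connection. Here is a minimal, hypothetical sketch assuming an unauthenticated node on localhost and a pre-created keyspace named 'demo_keyspace' (the actual cqlsession module shipped alongside this notebook handles both modes and may differ in the details):
# Hypothetical 'local'-mode equivalent of the helper above:
from cassandra.cluster import Cluster

local_cluster = Cluster(['127.0.0.1'])   # contact point of the local Cassandra node
session = local_cluster.connect()        # plain, unauthenticated connection
keyspace = 'demo_keyspace'               # assumption: a keyspace created beforehand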
An embedding function and an LLM are needed.
Below is the logic to instantiate the LLM and embeddings of choice. We chose to leave it in the notebooks for clarity.
import os
from llm_choice import suggestLLMProvider
llmProvider = suggestLLMProvider()
# (Alternatively set llmProvider to 'GCP_VertexAI', 'OpenAI', 'Azure_OpenAI' ... manually if you have credentials)
if llmProvider == 'GCP_VertexAI':
    from langchain.llms import VertexAI
    from langchain.embeddings import VertexAIEmbeddings
    llm = VertexAI()
    myEmbedding = VertexAIEmbeddings()
    print('LLM+embeddings from Vertex AI')
elif llmProvider == 'OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'open_ai'
    from langchain.llms import OpenAI
    from langchain.embeddings import OpenAIEmbeddings
    llm = OpenAI(temperature=0)
    myEmbedding = OpenAIEmbeddings()
    print('LLM+embeddings from OpenAI')
elif llmProvider == 'Azure_OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'azure'
    os.environ['OPENAI_API_VERSION'] = os.environ['AZURE_OPENAI_API_VERSION']
    os.environ['OPENAI_API_BASE'] = os.environ['AZURE_OPENAI_API_BASE']
    os.environ['OPENAI_API_KEY'] = os.environ['AZURE_OPENAI_API_KEY']
    from langchain.llms import AzureOpenAI
    from langchain.embeddings import OpenAIEmbeddings
    llm = AzureOpenAI(temperature=0, model_name=os.environ['AZURE_OPENAI_LLM_MODEL'],
                      engine=os.environ['AZURE_OPENAI_LLM_DEPLOYMENT'])
    myEmbedding = OpenAIEmbeddings(model=os.environ['AZURE_OPENAI_EMBEDDINGS_MODEL'],
                                   deployment=os.environ['AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT'])
    print('LLM+embeddings from Azure OpenAI')
else:
    raise ValueError('Unknown LLM provider.')
LLM+embeddings from OpenAI
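For reference, the suggestLLMProvider helper imported above is expected to pick a provider based on which credentials are present in the environment; a hypothetical minimal version might look like the following (the actual llm_choice module in this repository may differ):
# Hypothetical sketch of a provider-suggestion helper (the env var names are assumptions):
import os

def suggest_llm_provider_sketch():
    if 'AZURE_OPENAI_API_KEY' in os.environ:
        return 'Azure_OpenAI'
    elif 'OPENAI_API_KEY' in os.environ:
        return 'OpenAI'
    elif 'GOOGLE_APPLICATION_CREDENTIALS' in os.environ:
        return 'GCP_VertexAI'
    else:
        raise ValueError('No LLM credentials detected in the environment.')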
Create the cache¶
At this point you can instantiate the semantic cache.
Note: in the following, the table_name_prefix parameter is used so that different embeddings end up in separate tables. This is done here to avoid mismatches when running this demo over and over with different embedding functions: in most applications, where a single embedding suffices, there's no need to be this finicky.
cassSemanticCache = CassandraSemanticCache(
    session=session,
    keyspace=keyspace,
    embedding=myEmbedding,
    table_name_prefix=f'semantic_cache_{llmProvider}_',
)
Make sure the cache starts empty with:
cassSemanticCache.clear_through_llm(llm=llm)
Configure the cache at a LangChain global level:
import langchain
langchain.llm_cache = cassSemanticCache
Use the cache¶
Now try submitting a few prompts to the LLM and pay attention to the response times.
If the LLM is actually run, these should be on the order of a few seconds; in the case of a cache hit, the response takes well under a second.
Notice that you get a cache hit even after rephrasing the question.
%%time
SPIDER_QUESTION_FORM_1 = "How many eyes do spiders have?"
# A new question should take long
llm(SPIDER_QUESTION_FORM_1)
CPU times: user 39.4 ms, sys: 1.25 ms, total: 40.7 ms Wall time: 8.98 s
'\n\nMost spiders have eight eyes, although some have fewer or more.'
%%time
# Second time, very same question, this should be quick
llm(SPIDER_QUESTION_FORM_1)
CPU times: user 11.1 ms, sys: 0 ns, total: 11.1 ms Wall time: 260 ms
'\n\nMost spiders have eight eyes, although some have fewer or more.'
%%time
SPIDER_QUESTION_FORM_2 = "How many eyes does a spider generally have?"
# Just a rephrasing: but it's the same question,
# so it will just take the time to evaluate embeddings
llm(SPIDER_QUESTION_FORM_2)
CPU times: user 25.8 ms, sys: 2.69 ms, total: 28.5 ms Wall time: 4.13 s
'\n\nMost spiders have eight eyes, although some have fewer or more.'
Time for a really new question:
%%time
LOGIC_QUESTION_FORM_1 = "Is absence of proof the same as proof of absence?"
# A totally new question
llm(LOGIC_QUESTION_FORM_1)
CPU times: user 31.4 ms, sys: 3.55 ms, total: 34.9 ms Wall time: 11.5 s
'\n\nNo, absence of proof is not the same as proof of absence. Absence of proof means that there is no evidence to support a claim, while proof of absence means that there is evidence to support the claim that something does not exist.'
%%time
SPIDER_QUESTION_FORM_3 = "How many eyes are on the head of a typical spider?"
# Trying to catch the cache off-guard :)
llm(SPIDER_QUESTION_FORM_3)
CPU times: user 18.4 ms, sys: 0 ns, total: 18.4 ms Wall time: 470 ms
'\n\nMost spiders have eight eyes, although some have fewer or more.'
%%time
LOGIC_QUESTION_FORM_2 = "Is it true that the absence of a proof equates the proof of an absence?"
# Switching to the other question again
llm(LOGIC_QUESTION_FORM_2)
CPU times: user 30.7 ms, sys: 0 ns, total: 30.7 ms Wall time: 5.74 s
'\n\nNo, absence of proof is not the same as proof of absence. Absence of proof means that there is no evidence to support a claim, while proof of absence means that there is evidence to support the claim that something does not exist.'
Additional options¶
When creating the semantic cache, you can specify a few other options, such as the metric used to calculate the similarity and the number of entries to retrieve in the ANN step (i.e. the entries on which the exact requested metric is then computed for the final filtering). Here is an example that uses the L2 metric:
anotherCassSemanticCache = CassandraSemanticCache(
    session=session,
    keyspace=keyspace,
    embedding=myEmbedding,
    distance_metric='l2',
    score_threshold=0.4,
    num_rows_to_fetch=12,
    table_name_prefix=f'semantic_cache_{llmProvider}_',
)
This cache builds on the same database table as the previous one, as can be seen, for example, with:
lookup = anotherCassSemanticCache.lookup_with_id_through_llm(
    LOGIC_QUESTION_FORM_2,
    llm,
)
if lookup:
    docId, response = lookup
    print(docId)
    print(response)
else:
    print('No match.')
77add13036bcaa23c74ebf2ab2c56441 [Generation(text='\n\nNo, absence of proof is not the same as proof of absence. Absence of proof means that there is no evidence to support a claim, while proof of absence means that there is evidence to support the claim that something does not exist.', generation_info={'finish_reason': 'stop', 'logprobs': None})]
Stale entry control¶
Time-To-Live (TTL)¶
You can configure a time-to-live (TTL) for the cache, so that cached entries are automatically evicted after a certain time.
Setting langchain.llm_cache to the following cache instance will have the effect that entries vanish one hour after being written:
cacheWithTTL = CassandraSemanticCache(
    session=session,
    keyspace=keyspace,
    embedding=myEmbedding,
    ttl_seconds=3600,
    table_name_prefix=f'semantic_cache_{llmProvider}_',
)
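For the TTL to take effect on subsequent LLM calls, point the global setting to this cache instance, just as before:
langchain.llm_cache = cacheWithTTL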
Manual cache eviction¶
Alternatively, you can invalidate individual entries one at a time, just as you saw for the exact-match CassandraCache.
Since this is an index based on sentence similarity, however, the procedure takes two steps this time: first, a lookup to find the ID of the matching document:
lookup = cassSemanticCache.lookup_with_id_through_llm(SPIDER_QUESTION_FORM_1, llm)
if lookup:
    docId, response = lookup
    print(docId)
else:
    print('No match.')
0a1339bc659790da078a4352c05bf422
You can see that querying with another form of the "same" question results in the same ID:
lookup2 = cassSemanticCache.lookup_with_id_through_llm(SPIDER_QUESTION_FORM_2, llm)
if lookup2:
    docId2, response2 = lookup2
    print(docId2)
else:
    print('No match.')
0a1339bc659790da078a4352c05bf422
and, second, the document ID is used in the actual cache eviction (again, you have to additionally provide the LLM):
cassSemanticCache.delete_by_document_id_through_llm(docId, llm)
As a check, try asking that question again:
%%time
llm(SPIDER_QUESTION_FORM_1)
CPU times: user 29.2 ms, sys: 2.77 ms, total: 32 ms Wall time: 3.62 s
'\n\nMost spiders have eight eyes, although some have fewer or more.'
Whole-cache deletion¶
Lastly, as you have seen earlier, you can empty the cache entirely, for a given LLM, with:
cassSemanticCache.clear_through_llm(llm=llm)
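Once cleared, a lookup for one of the previously cached questions should report no match, which can be checked with the same lookup pattern used above:
lookup = cassSemanticCache.lookup_with_id_through_llm(SPIDER_QUESTION_FORM_1, llm)
print(lookup if lookup is not None else 'No match.')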