Caching LLM responses¶
This notebook demonstrates how to use Cassandra for a basic prompt/response cache.
Such a cache prevents running an LLM invocation more than once for the very same prompt, thus saving on latency and token usage. The cache retrieval logic is based on an exact match, as will be shown.
from langchain.cache import CassandraCache
from cqlsession import getCQLSession, getCQLKeyspace
cqlMode = 'astra_db' # 'astra_db'/'local'
session = getCQLSession(mode=cqlMode)
keyspace = getCQLKeyspace(mode=cqlMode)
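The cqlsession module used above is a small helper that opens the database connection for the chosen mode. As a rough idea of what it does, here is a minimal sketch for the 'local' mode, assuming a Cassandra node reachable on localhost; the actual helper in this repo also covers the 'astra_db' mode (through a secure-connect bundle) and may differ in detail:
from cassandra.cluster import Cluster

def getLocalCQLSession():
    # Hypothetical, simplified equivalent of getCQLSession(mode='local'):
    # connect to a Cassandra node listening on localhost with the Python driver.
    cluster = Cluster(['127.0.0.1'])
    return cluster.connect()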
Create a CassandraCache and configure it globally for LangChain:
import langchain
langchain.llm_cache = CassandraCache(
    session=session,
    keyspace=keyspace,
)
langchain.llm_cache.clear()
Below is the logic to instantiate the LLM of choice. We chose to leave it in the notebooks for clarity.
import os
from llm_choice import suggestLLMProvider
llmProvider = suggestLLMProvider()
# (Alternatively set llmProvider to 'GCP_VertexAI', 'OpenAI', 'Azure_OpenAI' ... manually if you have credentials)
if llmProvider == 'GCP_VertexAI':
    from langchain.llms import VertexAI
    llm = VertexAI()
    print('LLM from Vertex AI')
elif llmProvider == 'OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'open_ai'
    from langchain.llms import OpenAI
    llm = OpenAI()
    print('LLM from OpenAI')
elif llmProvider == 'Azure_OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'azure'
    os.environ['OPENAI_API_VERSION'] = os.environ['AZURE_OPENAI_API_VERSION']
    os.environ['OPENAI_API_BASE'] = os.environ['AZURE_OPENAI_API_BASE']
    os.environ['OPENAI_API_KEY'] = os.environ['AZURE_OPENAI_API_KEY']
    from langchain.llms import AzureOpenAI
    llm = AzureOpenAI(temperature=0, model_name=os.environ['AZURE_OPENAI_LLM_MODEL'],
                      engine=os.environ['AZURE_OPENAI_LLM_DEPLOYMENT'])
    print('LLM from Azure OpenAI')
else:
    raise ValueError('Unknown LLM provider.')
LLM from OpenAI
%%time
SPIDER_QUESTION_FORM_1 = "How many eyes do spiders have?"
# The first time, it is not yet in cache, so it should take longer
llm(SPIDER_QUESTION_FORM_1)
CPU times: user 13.4 ms, sys: 0 ns, total: 13.4 ms
Wall time: 3.65 s
'\n\nSpiders typically have eight eyes, although there are some species that have six or fewer eyes.'
%%time
# This time we expect a much shorter response time
llm(SPIDER_QUESTION_FORM_1)
CPU times: user 2.04 ms, sys: 0 ns, total: 2.04 ms
Wall time: 125 ms
'\n\nSpiders typically have eight eyes, although there are some species that have six or fewer eyes.'
%%time
SPIDER_QUESTION_FORM_2 = "How many eyes do spiders generally have?"
# This will again take a few seconds, since this different string is not in the cache yet
llm(SPIDER_QUESTION_FORM_2)
CPU times: user 5.9 ms, sys: 0 ns, total: 5.9 ms
Wall time: 3.5 s
'\n\nSpiders generally have eight eyes, though some species may have fewer or more.'
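The two question strings above differ only slightly, yet the second one is a cache miss. To make the exact-match behavior concrete, here is a hedged sketch that exercises the cache's lower-level lookup/update methods directly; the llm_string value is just a stand-in for the parameter string LangChain derives internally from the LLM:
from langchain.schema import Generation

# Hypothetical llm_string: LangChain normally derives this from the LLM's parameters.
llm_string = 'demo-llm-parameters'
langchain.llm_cache.update('some prompt', llm_string, [Generation(text='cached answer')])
print(langchain.llm_cache.lookup('some prompt', llm_string))   # hit: the exact same string
print(langchain.llm_cache.lookup('some  prompt', llm_string))  # miss (None): even the whitespace must match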
Stale entry control¶
Time-To-Live (TTL)¶
You can configure a time-to-live (TTL) property on the cache, so that cached entries are automatically evicted after a certain time.
Setting langchain.llm_cache to the following will make entries expire after one hour:
cacheWithTTL = CassandraCache(
    session=session,
    keyspace=keyspace,
    ttl_seconds=3600,
)
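To activate it, assign this TTL-enabled cache globally, just as was done earlier:
langchain.llm_cache = cacheWithTTL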
Manual cache eviction¶
Alternatively, you can invalidate cached entries one at a time; to do so, you need to provide the very LLM the entry is associated with:
%%time
llm(SPIDER_QUESTION_FORM_2)
CPU times: user 5.3 ms, sys: 299 µs, total: 5.6 ms
Wall time: 122 ms
'\n\nSpiders generally have eight eyes, though some species may have fewer or more.'
langchain.llm_cache.delete_through_llm(SPIDER_QUESTION_FORM_2, llm)
%%time
llm(SPIDER_QUESTION_FORM_2)
CPU times: user 10.1 ms, sys: 3.1 ms, total: 13.2 ms
Wall time: 2.07 s
'\n\nSpiders typically have eight eyes, although some species have six or fewer.'
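Note that, once the entry has been evicted, the LLM is actually invoked again and the new completion, which may differ slightly from the previous one (as is the case above), is stored in the cache in place of the old one.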
Whole-cache deletion¶
As shown at the beginning of this notebook, you can also clear the cache entirely, evicting all stored entries for all models at once:
langchain.llm_cache.clear()