Vector Similarity Search QA Quickstart¶
Set up a simple Question-Answering system with LangChain and CassIO, using Cassandra as the Vector Database.
NOTE: this uses Cassandra's "Vector Similarity Search" capability. Make sure you are connecting to a vector-enabled database for this demo.
from langchain.indexes import VectorstoreIndexCreator
from langchain.text_splitter import (
    CharacterTextSplitter,
    RecursiveCharacterTextSplitter,
)
from langchain.docstore.document import Document
from langchain.document_loaders import TextLoader
The following line imports the Cassandra flavor of a LangChain vector store:
from langchain.vectorstores.cassandra import Cassandra
A database connection is needed to access Cassandra. The following assumes that a vector-search-capable Astra DB instance is available. Adjust as needed.
from cqlsession import getCQLSession, getCQLKeyspace
cqlMode = 'astra_db' # 'astra_db'/'local'
session = getCQLSession(mode=cqlMode)
keyspace = getCQLKeyspace(mode=cqlMode)
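For reference, a helper such as `cqlsession.py` can be as small as the sketch below. This is purely illustrative (the environment-variable names and the local-mode defaults are assumptions, and the actual `cqlsession.py` bundled with these notebooks may differ):
# Illustrative sketch only: the real cqlsession.py helper may be organized differently.
import os
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

def getCQLSession(mode='astra_db'):
    if mode == 'astra_db':
        # Astra DB: connect through the secure-connect bundle with a token.
        cluster = Cluster(
            cloud={'secure_connect_bundle': os.environ['ASTRA_DB_SECURE_BUNDLE_PATH']},
            auth_provider=PlainTextAuthProvider(
                'token',
                os.environ['ASTRA_DB_APPLICATION_TOKEN'],
            ),
        )
        return cluster.connect()
    elif mode == 'local':
        # Local Cassandra: plain contact point, no auth (adjust to your cluster).
        return Cluster(['127.0.0.1']).connect()
    else:
        raise ValueError('Unknown CQL session mode.')

def getCQLKeyspace(mode='astra_db'):
    if mode == 'astra_db':
        return os.environ['ASTRA_DB_KEYSPACE']
    return 'cassio_tutorials'  # arbitrary default keyspace for the local case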
Both an LLM and an embedding function are required.
Below is the logic to instantiate the LLM and embeddings of choice. We chose to leave it in the notebooks for clarity.
import os
from llm_choice import suggestLLMProvider
llmProvider = suggestLLMProvider()
# (Alternatively set llmProvider to 'GCP_VertexAI', 'OpenAI', 'Azure_OpenAI' ... manually if you have credentials)
if llmProvider == 'GCP_VertexAI':
    from langchain.llms import VertexAI
    from langchain.embeddings import VertexAIEmbeddings
    llm = VertexAI()
    myEmbedding = VertexAIEmbeddings()
    print('LLM+embeddings from Vertex AI')
elif llmProvider == 'OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'open_ai'
    from langchain.llms import OpenAI
    from langchain.embeddings import OpenAIEmbeddings
    llm = OpenAI(temperature=0)
    myEmbedding = OpenAIEmbeddings()
    print('LLM+embeddings from OpenAI')
elif llmProvider == 'Azure_OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'azure'
    os.environ['OPENAI_API_VERSION'] = os.environ['AZURE_OPENAI_API_VERSION']
    os.environ['OPENAI_API_BASE'] = os.environ['AZURE_OPENAI_API_BASE']
    os.environ['OPENAI_API_KEY'] = os.environ['AZURE_OPENAI_API_KEY']
    from langchain.llms import AzureOpenAI
    from langchain.embeddings import OpenAIEmbeddings
    llm = AzureOpenAI(temperature=0, model_name=os.environ['AZURE_OPENAI_LLM_MODEL'],
                      engine=os.environ['AZURE_OPENAI_LLM_DEPLOYMENT'])
    myEmbedding = OpenAIEmbeddings(model=os.environ['AZURE_OPENAI_EMBEDDINGS_MODEL'],
                                   deployment=os.environ['AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT'])
    print('LLM+embeddings from Azure OpenAI')
else:
    raise ValueError('Unknown LLM provider.')
LLM+embeddings from Vertex AI
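(For the curious: a provider-selection helper like suggestLLMProvider can be as simple as checking which credentials are present in the environment. The sketch below is purely illustrative and is not necessarily how the bundled llm_choice module is written:)
# Illustrative sketch only: pick the first provider whose credentials are found.
import os

def suggestLLMProvider():
    if os.environ.get('GOOGLE_APPLICATION_CREDENTIALS'):
        return 'GCP_VertexAI'
    elif os.environ.get('AZURE_OPENAI_API_KEY'):
        return 'Azure_OpenAI'
    elif os.environ.get('OPENAI_API_KEY'):
        return 'OpenAI'
    else:
        raise ValueError('No LLM credentials detected in the environment.')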
A minimal example¶
The following is a minimal usage of the Cassandra vector store: the store is created and populated in one go, and is then queried to retrieve the parts of the indexed text most relevant to a question; these parts are stuffed into a prompt, which is finally passed to the LLM to produce the answer.
The following creates an "index creator", which knows about the type of vector store, the embedding to use and how to preprocess the input text:
(Note: stores built with different embedding functions will need different tables. This is why we append the llmProvider name to the table name in the next cell.)
table_name = 'vs_test1_' + llmProvider
index_creator = VectorstoreIndexCreator(
    vectorstore_cls=Cassandra,
    embedding=myEmbedding,
    text_splitter=CharacterTextSplitter(
        chunk_size=400,
        chunk_overlap=0,
    ),
    vectorstore_kwargs={
        'session': session,
        'keyspace': keyspace,
        'table_name': table_name,
    },
)
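(If you prefer to skip the index-creator abstraction, the same vector store can be instantiated directly. The following sketch is equivalent in effect, assuming the same session, keyspace and table name as above:)
# Sketch: direct instantiation of the Cassandra vector store, bypassing the index creator.
my_vector_store = Cassandra(
    embedding=myEmbedding,
    session=session,
    keyspace=keyspace,
    table_name=table_name,
)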
Load a local text file (a short story by E. A. Poe will do):
loader = TextLoader('texts/amontillado.txt', encoding='utf8')
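(Optionally, you can preview how the splitter will chunk the story before any embeddings are computed; the following sketch reuses the loader and the splitter configured above:)
# Optional sketch: inspect the chunks the splitter will produce (no embeddings computed yet).
preview_docs = loader.load()
preview_chunks = index_creator.text_splitter.split_documents(preview_docs)
print(f'{len(preview_chunks)} chunks; the first one starts with:')
print(preview_chunks[0].page_content[:80], '...')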
This takes a few seconds to run, as it must calculate embedding vectors for a number of chunks of the input text:
# Note: Certain LLM providers need a workaround to evaluate batch embeddings
# (as done in the next cell).
# As of 2023-06-29, Azure OpenAI would error with:
#     "InvalidRequestError: Too many inputs. The max number of inputs is 1"
if llmProvider == 'Azure_OpenAI':
    from langchain.indexes.vectorstore import VectorStoreIndexWrapper
    docs = loader.load()
    subdocs = index_creator.text_splitter.split_documents(docs)
    #
    print('subdocument 0 ...', end=' ')
    vs = index_creator.vectorstore_cls.from_documents(
        subdocs[:1],
        index_creator.embedding,
        **index_creator.vectorstore_kwargs,
    )
    print('done.')
    for sdi, sd in enumerate(subdocs[1:]):
        print(f'subdocument {sdi+1} ...', end=' ')
        vs.add_texts(texts=[sd.page_content], metadatas=[sd.metadata])
        print('done.')
    #
    index = VectorStoreIndexWrapper(vectorstore=vs)
if llmProvider != 'Azure_OpenAI':
    index = index_creator.from_loaders([loader])
Check what's on DB¶
By way of demonstration, if you were to directly read the rows stored in your database table, this is what you would now find there (not that you'll ever have to, for LangChain and CassIO provide an abstraction on top of that):
cqlSelect = f'SELECT * FROM {keyspace}.{table_name} LIMIT 3;'  # (Not a production-optimized query ...)
rows = session.execute(cqlSelect)
for row_i, row in enumerate(rows):
    print(f'\nRow {row_i}:')
    print(f'    document_id:      {row.document_id}')
    print(f'    embedding_vector: {str(row.embedding_vector)[:64]} ...')
    print(f'    document:         {row.document[:64]} ...')
    print(f'    metadata_blob:    {row.metadata_blob}')
print('\n...')
Row 0:
    document_id:      21fbd9985564f7f12ac51f4c20232d75
    embedding_vector: [-0.011485965922474861, -0.01858605071902275, 0.0115145826712250 ...
    document:         "Pass your hand," I said, "over the wall; you cannot help feelin ...
    metadata_blob:    {"source": "texts/amontillado.txt"}

Row 1:
    document_id:      f5020721820969b3fbf6b12691818508
    embedding_vector: [0.011451096273958683, -0.006945343688130379, -0.007215586956590 ...
    document:         No answer still. I thrust a torch through the remaining apertur ...
    metadata_blob:    {"source": "texts/amontillado.txt"}

Row 2:
    document_id:      d2ff9ab96b181d455481c67f84558091
    embedding_vector: [-0.0056611113250255585, -0.0022278032265603542, 0.0493778288364 ...
    document:         I said to him--"My dear Fortunato, you are luckily met. How rem ...
    metadata_blob:    {"source": "texts/amontillado.txt"}

...
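(You can also see the stored vectors at work without involving the LLM at all, by running a similarity search directly on the store. This sketch relies on the standard similarity_search_with_score vector-store method; the query text is just an example:)
# Sketch: direct similarity search on the store, no LLM in the loop.
matches = index.vectorstore.similarity_search_with_score(
    'the thousand injuries of Fortunato', k=2,
)
for doc, score in matches:
    print(f'score={score:.4f}  {doc.page_content[:60]} ...')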
Ask a question, get an answer¶
query = "Who is Luchesi?"
index.query(query, llm=llm)
'Luchesi is a wine critic.'
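(If you also want to know where the answer came from, the index wrapper offers a source-returning variant. A sketch, assuming the standard query_with_sources method; the shape shown in the comment is indicative only:)
# Sketch: same question, but also returning the source(s) of the retrieved chunks.
index.query_with_sources(query, llm=llm)
# Indicative shape of the result (actual values will vary):
# {'question': 'Who is Luchesi?', 'answer': '...', 'sources': 'texts/amontillado.txt'}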
Spawning a "retriever" from the index¶
You just saw how easily you can plug a Cassandra-backed vector index into a full question-answering LangChain pipeline.
But you can just as easily work at a slightly lower level: the following code spawns a VectorStoreRetriever from the index, for manual retrieval of documents relevant to a given query text. The results are instances of LangChain's Document class.
retriever = index.vectorstore.as_retriever(search_kwargs={
    'k': 2,
})
retriever.get_relevant_documents(
    "Check the motto of the Montresors"
)
[Document(page_content='He raised it to his lips with a leer. He paused and nodded to me\nfamiliarly, while his bells jingled.\n\n"I drink," he said, "to the buried that repose around us."\n\n"And I to your long life."\n\nHe again took my arm, and we proceeded.\n\n"These vaults," he said, "are extensive."\n\n"The Montresors," I replied, "were a great and numerous family."\n\n"I forget your arms."', metadata={'source': 'texts/amontillado.txt'}), Document(page_content='"A huge human foot d\'or, in a field azure; the foot crushes a serpent\nrampant whose fangs are imbedded in the heel."\n\n"And the motto?"\n\n"_Nemo me impune lacessit_."\n\n"Good!" he said.', metadata={'source': 'texts/amontillado.txt'})]
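(The retriever can in turn be plugged into any retrieval-aware LangChain construct. For instance, the sketch below wires it into a RetrievalQA chain, which is functionally close to what index.query did above; the question text is just an example:)
# Sketch: using the Cassandra-backed retriever inside a RetrievalQA chain.
from langchain.chains import RetrievalQA

qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type='stuff',
    retriever=retriever,
)
qa_chain.run("What is the motto of the Montresors?")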