AI Conversational Retrieval
A workflow based on LangChain's conversational retrieval agents that I used in the AI bible research project to upload documents and ask questions about them.
![Cover image for AI Conversational Retrieval](/_next/image?url=https%3A%2F%2Fcdn.sanity.io%2Fimages%2Ftlr0v2qe%2Fproduction%2F1822266f0e3a105c4602cbae6a351916a1470f4b-4608x3072.jpg%3Fh%3D900%26max-w%3D1600%26q%3D100%26fit%3Dmax%26auto%3Dformat&w=3840&q=85)
Python Notebook Setup
In this guide, you'll see how to process text files into a vector database using embeddings.
Then you can ask questions and the chat agent will respond with relevant pieces of your docs as context.
This notebook serves as a prompt template testing kit.
Once you find a prompt you like, you can turn this notebook into a looping script that runs in a terminal, like this example.
Prerequisites
- LangChain and its prerequisites installed. See the LangChain installation docs.
- A folder named `data` containing the `.txt` files you want to query, or a single file named `data.txt`.
- An API key from OpenAI.
- A file named `constants.py` to store the API key.
Then import the required libraries and the API key.
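A minimal sketch of that setup, assuming the pre-0.1 LangChain module layout and that `constants.py` defines a variable named `APIKEY` (both names are assumptions; adjust the import paths for newer LangChain releases):

```python
import os

# constants.py is assumed to contain a single line: APIKEY = "sk-..."
try:
    import constants
    os.environ["OPENAI_API_KEY"] = constants.APIKEY
except ImportError:
    # fall back to a key already set in the environment, if any
    os.environ.setdefault("OPENAI_API_KEY", "sk-your-key-here")

# LangChain imports (pre-0.1 module layout; guarded so the cell still
# runs while you sort out installation)
try:
    from langchain.chat_models import ChatOpenAI
    from langchain.chains import ConversationalRetrievalChain
    from langchain.document_loaders import DirectoryLoader
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.indexes import VectorstoreIndexCreator
    from langchain.vectorstores import Chroma
except ImportError:
    pass  # install langchain and chromadb first (see prerequisites)
```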
Next, create the vector store.
Setting `PERSIST = True` creates the vector store if it doesn't exist and reuses it on subsequent runs.
`PERSIST = False` builds a fresh vector store each time.
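One way to sketch that step, assuming the pre-0.1 LangChain API with a Chroma backend and a `persist` directory name (the directory name and helper name are assumptions, not from the original):

```python
import os

PERSIST = True  # flip to False to rebuild the index on every run

def build_index(persist: bool = PERSIST):
    """Load ./data into a vector store, reusing ./persist when allowed."""
    from langchain.document_loaders import DirectoryLoader
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.indexes import VectorstoreIndexCreator
    from langchain.indexes.vectorstore import VectorStoreIndexWrapper
    from langchain.vectorstores import Chroma

    if persist and os.path.exists("persist"):
        # reuse the vector store saved by a previous run
        vectorstore = Chroma(persist_directory="persist",
                             embedding_function=OpenAIEmbeddings())
        return VectorStoreIndexWrapper(vectorstore=vectorstore)

    # embed every .txt file under ./data into a new vector store
    loader = DirectoryLoader("data/", glob="*.txt")
    kwargs = {"vectorstore_kwargs": {"persist_directory": "persist"}} if persist else {}
    return VectorstoreIndexCreator(**kwargs).from_loaders([loader])

# index = build_index()  # requires langchain, chromadb, and a valid API key
```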
Now, we'll tell it which model to use.
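A hedged sketch of wiring the model to the retriever; the model name `gpt-3.5-turbo`, the `k=1` retrieval setting, and the helper name are illustrative assumptions:

```python
def make_chain(index, model_name: str = "gpt-3.5-turbo"):
    """Connect a chat model to the vector store as a retrieval chain."""
    from langchain.chat_models import ChatOpenAI
    from langchain.chains import ConversationalRetrievalChain

    return ConversationalRetrievalChain.from_llm(
        llm=ChatOpenAI(model=model_name),
        # fetch the single most relevant chunk per question
        retriever=index.vectorstore.as_retriever(search_kwargs={"k": 1}),
    )

# chain = make_chain(index)  # requires the index from the previous step
```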
And run the chain.
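Running the chain might look like this: each call passes the question plus the accumulated chat history, so follow-up questions stay in context (the `ask` helper is a hypothetical wrapper, not from the original):

```python
def ask(chain, query: str, chat_history=None):
    """Run one question through the chain, threading prior turns as context."""
    chat_history = chat_history or []
    result = chain({"question": query, "chat_history": chat_history})
    chat_history.append((query, result["answer"]))
    return result["answer"], chat_history

# answer, history = ask(chain, "What does the document say about creation?")
# answer, history = ask(chain, "Who is speaking in that passage?", history)
```

Keeping the history list and feeding it back in is what makes the retrieval conversational rather than a series of one-off lookups.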
The result...