LangChain: Supercharging Large Language Models with LangChain and Vector DBs

Building Document Question Answering using LLMs, LangChain, Pinecone, and Chroma 🔗

Yashu Gupta
5 min read · Apr 26, 2023
Image Credit: LangChain


Large Language Models (LLMs) such as OpenAI’s GPT are taking the world by storm. In particular, the release of ChatGPT has catapulted both AI and (transformer-based) LLMs, which have been discussed for years, into the mainstream. Not long ago, LSTMs and LSTMs with an attention mechanism were the SOTA models; with the release of Transformers there has been a huge shift toward language models. In my recent post on ChatGPT’s detailed architecture, we covered the intuition and methodology behind Transformers and GPT-3-based models. In this post we’ll focus on the intuition and methodology of integrating these LLMs with powerful frameworks like LangChain and vector stores (Pinecone, Chroma, FAISS).

Introducing LangChain

LangChain is a powerful framework for working with many of the Large Language Models (LLMs). Through LangChain we can build powerful applications: it provides deep integration of LLMs with other sources of data, such as the internet or your personal files. But wait, why does this matter if the LLM already has all the knowledge? Let’s find out…

Why is LangChain important?

Large Language Models such as GPT-3 or ChatGPT are very useful on their own. For example, they are great at generating content! These are generalized models, which means they can perform many tasks effectively with good accuracy. But sometimes they cannot provide specific answers to questions or tasks that require deep domain knowledge or expertise. For example, imagine you want to use an LLM to answer questions about a specific field, like healthcare or insurance. While the LLM may be able to answer general questions about these fields, it may not be able to provide detailed answers that require specialized knowledge. They also cannot provide information the model simply doesn’t have (for example, ChatGPT knows nothing about events after its training cutoff). The other problem with these models is long documents: they fail to process them because of the token limit, i.e. the sequence length limit. These problems can be addressed using LangChain.

LangChain — A Rescuer

With the help of LangChain we can address the problems above. LangChain provides deep integration of our data files with these large language models, so we can build powerful applications and inject domain information into the LLM as context. LangChain also helps handle the token limit issue by splitting text into chunks. From these chunks we can create embeddings that can be stored in a vector database (Pinecone, Chroma).

There are many use cases that can be built with LangChain, such as question answering, summarization, and chatbots. You can read more about the general use cases of LangChain in their documentation or their GitHub repo.

Document Question Answering using External Information

In this post we will create a powerful application using LangChain and an LLM: a document question answering system built by feeding external files to the LLM as context. The model will be able to answer complex questions. We can also build an application over multiple documents or files and run question answering across them.

Architecture of the document-based QA system

From the architecture above it is clear that a few steps are involved in developing this application. Let’s drill down.

  1. Install the needed dependencies for our project. We start with the installation of LangChain, openai, chromadb, and tiktoken.
pip install langchain
pip install openai
pip install chromadb
pip install tiktoken

2. A connection to OpenAI is required, which can be done by creating an account on the OpenAI website. We will generate a secret key at https://platform.openai.com/account/api-keys
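
LangChain’s OpenAI integrations read the key from the environment. A minimal sketch (the key value below is a placeholder; replace it with your own secret key):

import os

# make the key available to the OpenAI client and LangChain's OpenAI wrappers
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder: use your own secret key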

3. LangChain offers a useful approach where the corpus of text is preprocessed by breaking it down into chunks. LangChain provides loaders through which we can read files and create chunks from them.

from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter

documents = TextLoader("my_document.txt").load()  # any LangChain loader works; path is illustrative
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

4. Once the chunks are created, we get embeddings for these text chunks. LangChain provides direct integrations with Hugging Face embeddings, OpenAI embeddings, etc. These can be used directly.

from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
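
As a quick sanity check, you can embed a sample string; with OpenAI’s default text-embedding-ada-002 model, each embedding is a 1536-dimensional vector:

# embed a sample query and inspect the vector's dimensionality
vector = embeddings.embed_query("What is LangChain?")
print(len(vector))  # 1536 for text-embedding-ada-002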

5. Once we have the embeddings for these text chunks, they can be saved to a vector DB. There are many vector DBs available, such as Pinecone (a managed service) and Chroma (open source). Here we will use Chroma for storing and indexing these dense vectors.

from langchain.vectorstores import Chroma
db = Chroma.from_documents(texts, embeddings)
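
By default this store lives in memory. As a minimal sketch, Chroma can also persist the index to disk so it can be reloaded later (the directory path here is just an example):

# optional: write the index to a local directory and flush it to disk
db = Chroma.from_documents(texts, embeddings, persist_directory="./chroma_db")
db.persist()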

6. Once the knowledge base is created, we repeat the same process for inference queries. We get the embedding for the inference query as shown in the architecture, then compute semantic similarity against the knowledge base. We can retrieve the top-k semantically similar chunks from the vector space, and these can be passed to the LLM as context.

from langchain.indexes import VectorstoreIndexCreator

# bundles loading, splitting, embedding, and vector storage in one helper
index_creator = VectorstoreIndexCreator(
    vectorstore_cls=Chroma,
    embedding=OpenAIEmbeddings(),
    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0),
)
index = index_creator.from_loaders([TextLoader("my_document.txt")])
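
To see the retrieval step on its own, you can also query the Chroma store directly (the query string here is just an example):

# fetch the top-4 semantically similar chunks for an inference query
similar_chunks = db.similarity_search("What methodology does the paper propose?", k=4)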

7. After retrieving the top-k semantically similar chunks using cosine similarity, these chunks are passed to the LLM. The language model takes them as context and can generate a valid answer for the input query.
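
Tying it all together, here is a minimal sketch using LangChain’s RetrievalQA chain over the Chroma store built above (the question is illustrative):

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# "stuff" simply stuffs the retrieved chunks into the prompt as context
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=db.as_retriever(search_kwargs={"k": 4}),
)
print(qa.run("What is the main contribution of the paper?"))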

A few output samples are shown below. In the snapshot, the inputs are research papers: the steps above were performed on these PDFs, and question answering is done with the LLM.

LangChain Building Blocks

  1. Models: LangChain offers support for various model types and model integrations. It enables you to easily integrate and work with different language model providers, such as Hugging Face and OpenAI, which enhances applications’ capabilities.
  2. Prompts: LangChain allows you to manage and serialize prompts efficiently. This helps in generating more accurate and contextually relevant responses from the language models.
  3. Memory: LangChain provides a standard interface for memory and a collection of memory implementations. It facilitates the persistence of state between calls in a chain or agent, enhancing the model’s knowledge and recall abilities.
  4. Indexes: To boost the power of language models, LangChain helps you effectively combine them with your own text data. It provides best practices for indexing and searching through your data sources.
  5. Chains: Chains are sequences of calls, either to language models or other utilities. LangChain offers a standard interface for chains, along with numerous integrations and end-to-end chains for common applications (see the sketch after this list).
  6. Agents: Agents enable language models to make decisions, take actions, observe outcomes, and repeat the process until the objective is met. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
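
Here is a minimal sketch of the prompts and chains building blocks working together (the template text and topic are illustrative):

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# a serializable prompt template wired into a simple chain
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Explain {topic} in two sentences.",
)
chain = LLMChain(llm=OpenAI(), prompt=prompt)
print(chain.run(topic="vector databases"))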

Conclusion

Hopefully, by the end of this article, you know what LangChain is and how to build a question-answering PDF engine using LangChain, OpenAI, and Chroma.

Links, references and credits

  1. LangChain Docs : https://langchain.readthedocs.io/en/latest/index.html
  2. LangChain Prompt Template : https://langchain.readthedocs.io/en/latest/modules/prompts/getting_started.html#what-is-a-prompt-template
  3. LangChain SequentialChain Doc : https://langchain.readthedocs.io/en/latest/modules/chains/getting_started.html#combine-chains-with-the-sequentialchain
