Emmanuel Onwuegbusi

Build a Talk-to-your-data chatbot using an OpenAI LLM, LangChain, and Streamlit

LLMs can help enterprises build tools that allow users to query their internal data in natural language, in a Q&A fashion.

This article will show you how to build a chatbot over your own data using an LLM, LangChain, and Streamlit, one that can respond to users' questions about that data.

Table of Contents:

  • Project Structure
  • Setup dev environment
  • vector_index directory
  • mydocument directory
  • chat_workflow.py
  • app.py
  • .gitignore
  • requirements.txt
  • Run the app
  • Conclusion

Project Structure

The file structure of the project:

talk-to-your-data/
├── .venv/
├── mydocument/
│   └── animalsinresearch.pdf
├── vector_index/
├── app.py
├── chat_workflow.py
├── .gitignore
└── requirements.txt

Setup dev environment

  • Create a folder and open it with your code editor

  • Run the following command in your terminal to create a new virtual environment called .venv



python -m venv .venv


  • Activate the virtual environment (the command below is for Windows; on macOS/Linux, run source .venv/bin/activate instead)


.venv\Scripts\activate


  • Install streamlit, langchain, openai, tiktoken, chromadb, and pypdf


pip install streamlit langchain openai tiktoken chromadb pypdf



vector_index directory

Create a directory and name it vector_index. This directory will store the vector embeddings of your document.

mydocument directory

Create a mydocument directory and place the PDF document you want the LLM to learn from inside it. In my case, I placed an animalsinresearch.pdf document there. The PDF discusses why animals are used in scientific research and the tests a chemical compound must pass before it can be used on animals. The idea is for the chatbot to learn from this document and answer questions about it. You can place any PDF document you like in this directory, and your chatbot will then be able to answer questions about that document.
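
Before wiring everything up, you can quickly confirm that your PDF loads. A minimal sketch (the file name matches the one used in this article; swap in your own):

from langchain.document_loaders import PyPDFLoader

# quick sanity check: load the PDF and count its pages
pages = PyPDFLoader("mydocument/animalsinresearch.pdf").load()
print(f"Loaded {len(pages)} pages")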

chat_workflow.py

Create a chat_workflow.py file and add the following code:



import streamlit as st
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferWindowMemory
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import PyPDFLoader
import os


@st.cache_resource
def chain_workflow(openai_api_key):

    # LLM model name
    llm_name = "gpt-3.5-turbo"

    # directory where the vector index is persisted
    persist_directory = 'vector_index/'


    # Load OpenAI embedding model
    embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)


    # Check if the vector index already exists
    if not os.path.exists("vector_index/chroma.sqlite3"):
        # If it doesn't exist, create it

        # load document
        file = "mydocument/animalsinresearch.pdf"
        loader = PyPDFLoader(file)
        documents = loader.load()

        # split documents
        text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)
        splits = text_splitter.split_documents(documents)

        vectordb = Chroma.from_documents(
            documents=splits,
            embedding=embeddings,
            persist_directory=persist_directory
        )


        vectordb.persist()
        print("Vectorstore created and saved successfully, The 'chroma.sqlite3' file has been created.")
    else:
        # if the vectorstore already exists, just load it
        vectordb = Chroma(persist_directory=persist_directory, embedding_function=embeddings)


    # Load the OpenAI chat model used by the contextual compressor
    llm = ChatOpenAI(model_name=llm_name, temperature=0, openai_api_key=openai_api_key)

    # specify a retriever that fetches relevant splits, compressing each
    # retrieved chunk down to the parts relevant to the query
    compressor = LLMChainExtractor.from_llm(llm)
    compression_retriever = ContextualCompressionRetriever(
        base_compressor=compressor,
        base_retriever=vectordb.as_retriever(search_type="mmr", search_kwargs={"k": 3})
    )


    # Create memory that keeps the last 3 exchanges under the 'chat_history' key
    memory = ConversationBufferWindowMemory(k=3, memory_key="chat_history")

    # create a chatbot chain
    qa = ConversationalRetrievalChain.from_llm(
        llm=ChatOpenAI(model_name=llm_name, temperature=0, openai_api_key=openai_api_key), 
        chain_type="map_reduce", 
        retriever=compression_retriever, 
        memory=memory,
        get_chat_history=lambda h : h,
        verbose=True
    )


    return qa



Above, we declared the chain_workflow() function with the @st.cache_resource decorator so that Streamlit caches its result and improves performance.
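
As a minimal sketch of the caching behavior inside a Streamlit app (build_chain and its argument are made-up names for illustration), repeated calls with the same argument return the same cached object instead of rebuilding it:

import streamlit as st

@st.cache_resource
def build_chain(api_key: str):
    print("building chain...")  # runs only on the first call per unique api_key
    return object()             # stand-in for the real chain

a = build_chain("sk-demo")
b = build_chain("sk-demo")  # cache hit: the same object is returned
assert a is b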

In the chain_workflow() function, if the vector store has not been created yet, we:

  • load the document
  • create smaller splits of the document
  • create embeddings of those splits and store them in a vector store. A vector store is a database where you can easily look up similar vectors later on, which becomes useful when we're trying to find documents relevant to the question at hand (see the lookup sketch after this list).
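
Once the index is persisted, a similarity lookup against it can be sketched like this (the query string is just an example):

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

embeddings = OpenAIEmbeddings(openai_api_key="sk-...")
vectordb = Chroma(persist_directory="vector_index/", embedding_function=embeddings)

# return the 3 chunks whose embeddings are closest to the query's embedding
docs = vectordb.similarity_search("Why are animals used in research?", k=3)
for doc in docs:
    print(doc.page_content[:100])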

When the user inputs a question to the chatbot, we:

  • take the question at hand together with the chat history
  • create an embedding of them
  • compare that embedding against all the vectors in the vector store and pick the n most similar
  • pass those n most similar chunks, along with the question, to an LLM and get back an answer (the retrieval step on its own is sketched below)
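
The retrieval step on its own can be exercised directly. A minimal, self-contained sketch that mirrors the retriever built in chat_workflow.py (the question is just an example):

from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

embeddings = OpenAIEmbeddings(openai_api_key="sk-...")
vectordb = Chroma(persist_directory="vector_index/", embedding_function=embeddings)
llm = ChatOpenAI(temperature=0, openai_api_key="sk-...")

retriever = ContextualCompressionRetriever(
    base_compressor=LLMChainExtractor.from_llm(llm),
    base_retriever=vectordb.as_retriever(search_type="mmr", search_kwargs={"k": 3}),
)

# fetch the compressed, most relevant chunks for an example question
docs = retriever.get_relevant_documents("What tests must a compound pass before animal use?")
for doc in docs:
    print(doc.page_content[:200])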

app.py

Create an app.py file and add the following code:



import time
import streamlit as st
from chat_workflow import chain_workflow

# Custom image for the app icon and the assistant's avatar
assistant_logo = 'https://cdn.pixabay.com/photo/2016/06/28/13/51/dog-1484728_1280.png'

# Configure Streamlit page
st.set_page_config(
    page_title="Animals in Research",
    page_icon=assistant_logo
)

openai_api_key = st.sidebar.text_input('Input your OpenAI API Key', value="sk-", type='password')


# Initialize chat history
if 'messages' not in st.session_state:
    # Start with first message from assistant
    st.session_state['messages'] = [{"role": "assistant", 
                                  "content": "Hi user! ask me questions about animals in research"}]

for message in st.session_state.messages:
    if message["role"] == 'assistant':
        with st.chat_message(message["role"], avatar=assistant_logo):
            st.markdown(message["content"])
    else:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

# Chat logic
if query := st.chat_input("Ask me about animals in research"):
    if len(openai_api_key) <= 3:
        st.sidebar.error("☝️ Put in your OpenAI API key")
    else:
        # Add user message to chat history
        st.session_state.messages.append({"role": "user", "content": query})
        # Display user message in chat message container
        with st.chat_message("user"):
            st.markdown(query)

        with st.chat_message("assistant", avatar=assistant_logo):
            message_placeholder = st.empty()
            # Initialize the LLM chain and send the user's question to it
            chain = chain_workflow(openai_api_key=openai_api_key)
            result = chain({"question": query})
            response = result['answer']
            full_response = ""

            # Simulate stream of response with milliseconds delay
            for chunk in response.split():
                full_response += chunk + " "
                time.sleep(0.05)
                # Add a blinking cursor to simulate typing
                message_placeholder.markdown(full_response + "▌")
            message_placeholder.markdown(full_response)

        # Add assistant message to chat history
        st.session_state.messages.append({"role": "assistant", "content": response})



Above we:

  • Import chain_workflow() from chat_workflow.py to load the Conversational Retrieval Chain we created earlier.

  • Load an image from a URL to use as your app's page icon and assistant's avatar in the chat app.

  • Specify a text_input field to accept the openai_api_key from the user

  • Initialize the chat history in the session state with a first message from the assistant welcoming the user.

  • Display all the messages of the chat history, specifying a custom avatar for the assistant and the default one for the user.

  • Create the chat logic to receive the user's query and store it in the chat history

  • Display the user's query in the chat

  • Pass the user's query to your chain using result = chain({"question": query}) (a standalone usage sketch follows this list)

  • Get the response back and display it in the chat, simulating human typing speed by slowing down the display of the response

  • Store the response in the chat history
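
If you want to try the chain outside Streamlit first, here is a minimal sketch (assuming a valid API key and that the vector index already exists):

from chat_workflow import chain_workflow

chain = chain_workflow(openai_api_key="sk-...")

# the chain takes a dict with a "question" key and returns a dict with an "answer" key
result = chain({"question": "Why are animals used in scientific research?"})
print(result["answer"])

# the memory carries the chat history, so follow-ups can reference earlier turns
follow_up = chain({"question": "What tests must a compound pass first?"})
print(follow_up["answer"])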

.gitignore

You can add the following to the .gitignore file:



__pycache__
mydocument/



requirements.txt

Add the following to requirements.txt file:



openai
langchain
streamlit
tiktoken
chromadb
pypdf



Run the app

You can start the app using the following command:



streamlit run app.py




Below you can see a conversation I had with the chatbot. Notice that the chatbot took into account the previous question I asked.

(Screenshots: two turns of the conversation with the chatbot)

Conclusion

Even though I used search_type="mmr" for the retriever and chain_type="map_reduce" for the ConversationalRetrievalChain, you should also try different values for these parameters; a sketch of one alternative follows.
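
For example, plain similarity search and the simpler "stuff" chain type are each a one-line change. A sketch of the alternative settings (vectordb, memory, llm_name, and openai_api_key are the same objects defined in chat_workflow.py):

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

# plain similarity search instead of MMR, retrieving more chunks
retriever = vectordb.as_retriever(search_type="similarity", search_kwargs={"k": 5})

# "stuff" places all retrieved chunks into a single prompt instead of map-reducing them
qa = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model_name=llm_name, temperature=0, openai_api_key=openai_api_key),
    chain_type="stuff",
    retriever=retriever,
    memory=memory,
    get_chat_history=lambda h: h,
)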

You can get the code here: https://github.com/emmakodes/talk-to-your-data

Note: You will need to input your OpenAI API key. You can get it from here: https://platform.openai.com/account/api-keys
