Chat with your PDF using Pinata, OpenAI and Streamlit

Jagroop Singh

In this tutorial, we’ll build a simple chat interface that lets users upload a PDF, ask questions about its content using OpenAI’s API, and view the responses in a chat-like interface built with Streamlit. We will also leverage @pinata to upload and store the PDF files.

Let's take a quick glance at what we are building before moving forward.

Prerequisites:

  • Basic knowledge of Python
  • Pinata API key (for uploading PDFs)
  • OpenAI API key (for generating responses)
  • Streamlit installed (for building the UI)

Step 1: Project Setup

Start by creating a new Python project directory:

mkdir chat-with-pdf
cd chat-with-pdf
python3 -m venv venv
source venv/bin/activate
pip install streamlit openai requests PyPDF2 python-dotenv

Now, create a .env file in the root of your project and add the following environment variables:

PINATA_API_KEY=<Your Pinata API Key>
PINATA_SECRET_API_KEY=<Your Pinata Secret Key>
OPENAI_API_KEY=<Your OpenAI API Key>
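The helper files below read these values with python-dotenv's `load_dotenv()`. As a minimal sketch of what that call boils down to (python-dotenv itself also handles quoting, `export` prefixes, and variable expansion), the key name here is just a demo placeholder:

```python
import os

# Minimal sketch of load_dotenv's core behavior: parse KEY=VALUE lines
# and put them into os.environ. Existing variables are not overwritten.
def load_env_lines(lines):
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

load_env_lines(["DEMO_PINATA_KEY=demo-key", "# a comment", ""])
print(os.environ["DEMO_PINATA_KEY"])  # demo-key
```

In the real project, just call `load_dotenv()` as shown in the helper files; this sketch only illustrates where the values end up.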

You'll have to manage the OPENAI_API_KEY on your own, as it's a paid service. But let's go through the process of creating API keys in Pinata.

Before proceeding further, let's look at what Pinata is and why we are using it.

Pinata

Pinata is a service that provides a platform for storing and managing files on IPFS (InterPlanetary File System), a decentralized and distributed file storage system.

  • Decentralized Storage: Pinata helps you store files on IPFS, a decentralized network.
  • Easy to Use: It provides user-friendly tools and APIs for file management.
  • File Availability: Pinata keeps your files accessible by "pinning" them on IPFS.
  • NFT Support: It's great for storing metadata for NFTs and Web3 apps.
  • Cost-Effective: Pinata can be a cheaper alternative to traditional cloud storage.
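Because Pinata pins files to the public IPFS network, anything you upload can later be fetched from an IPFS gateway by its CID. A small sketch of building such a URL (the CID below is a made-up placeholder, not a real file):

```python
# Build a public gateway URL for a pinned file. The host here is
# Pinata's public gateway; any public IPFS gateway would work too.
def gateway_url(cid, gateway="https://gateway.pinata.cloud"):
    return f"{gateway}/ipfs/{cid}"

print(gateway_url("QmExampleCid123"))
# https://gateway.pinata.cloud/ipfs/QmExampleCid123
```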

Let's create the required keys. First, sign up for a Pinata account.

Next, verify your registered email.

After verifying, sign in to generate API keys.

Then go to the API Keys section and create a new API key.

Finally, once the keys are successfully generated, copy them and save them in your code editor.

OPENAI_API_KEY=<Your OpenAI API Key>
PINATA_API_KEY=dfc05775d0c8a1743247
PINATA_SECRET_API_KEY=a54a70cd227a85e68615a5682500d73e9a12cd211dfbf5e25179830dc8278efc


Step 2: PDF Upload using Pinata

We’ll use Pinata’s API to upload PDFs and get a hash (CID) for each file. Create a file named pinata_helper.py to handle the PDF upload.

import os  # Import the os module to interact with the operating system
import requests  # Import the requests library to make HTTP requests
from dotenv import load_dotenv  # Import load_dotenv to load environment variables from a .env file

# Load environment variables from the .env file
load_dotenv()

# Define the Pinata API URL for pinning files to IPFS
PINATA_API_URL = "https://api.pinata.cloud/pinning/pinFileToIPFS"

# Retrieve Pinata API keys from environment variables
PINATA_API_KEY = os.getenv("PINATA_API_KEY")
PINATA_SECRET_API_KEY = os.getenv("PINATA_SECRET_API_KEY")

def upload_pdf_to_pinata(file_path):
    """
    Uploads a PDF file to Pinata's IPFS service.

    Args:
        file_path (str): The path to the PDF file to be uploaded.

    Returns:
        str: The IPFS hash of the uploaded file if successful, None otherwise.
    """
    # Prepare headers for the API request with the Pinata API keys
    headers = {
        "pinata_api_key": PINATA_API_KEY,
        "pinata_secret_api_key": PINATA_SECRET_API_KEY
    }

    # Open the file in binary read mode
    with open(file_path, 'rb') as file:
        # Send a POST request to Pinata API to upload the file
        response = requests.post(PINATA_API_URL, files={'file': file}, headers=headers)

        # Check if the request was successful (status code 200)
        if response.status_code == 200:
            print("File uploaded successfully")  # Print success message
            # Return the IPFS hash from the response JSON
            return response.json()['IpfsHash']
        else:
            # Print an error message if the upload failed
            print(f"Error: {response.text}")
            return None  # Return None to indicate failure

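The upload above sends only the file itself. Pinata's pinFileToIPFS endpoint also accepts an optional `pinataMetadata` form field for naming the pin in the dashboard; a small sketch of building that extra field (check Pinata's docs for the full set of options your plan supports):

```python
import json

# Build the optional "pinataMetadata" form field that names a pin in
# the Pinata dashboard. It must be sent as a JSON *string* alongside
# the file, e.g.:
#   requests.post(PINATA_API_URL, files={'file': f},
#                 data=build_pin_metadata("report.pdf"), headers=headers)
def build_pin_metadata(name):
    return {"pinataMetadata": json.dumps({"name": name})}

print(build_pin_metadata("report.pdf")["pinataMetadata"])
```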

Step 3: Setting up OpenAI
Next, we’ll create a function that uses the OpenAI API to interact with the text extracted from the PDF. We’ll leverage OpenAI’s gpt-4o or gpt-4o-mini model for chat responses.

Create a new file openai_helper.py:

import os
from openai import OpenAI
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

# Initialize OpenAI client with the API key
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=OPENAI_API_KEY)

def get_openai_response(text, pdf_text):
    try:
        # Create the chat completion request
        print("User Input:", text)
        print("PDF Content:", pdf_text)  # Optional: for debugging

        # Combine the user's input and PDF content for context
        messages = [
            {"role": "system", "content": "You are a helpful assistant for answering questions about the PDF."},
            {"role": "user", "content": pdf_text},  # Providing the PDF content
            {"role": "user", "content": text}  # Providing the user question or request
        ]

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # Or "gpt-4o", based on your access
            messages=messages,
            max_tokens=100,  # Adjust as necessary
            temperature=0.7  # Adjust to control response creativity
        )

        # Return the content of the assistant's reply
        return response.choices[0].message.content
    except Exception as e:
        return f"Error: {str(e)}"

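The helper sends the PDF text and the user's question as two separate user messages. To make that structure easy to inspect (and unit-test) without calling the API, the message construction can be pulled into its own function — a sketch, not part of the original file:

```python
# Build the chat message list the same way openai_helper.py does:
# a system prompt, then the PDF content, then the user's question.
def build_messages(pdf_text, question):
    return [
        {"role": "system", "content": "You are a helpful assistant for answering questions about the PDF."},
        {"role": "user", "content": pdf_text},
        {"role": "user", "content": question},
    ]

msgs = build_messages("Invoice total: $42", "What is the total?")
print([m["role"] for m in msgs])  # ['system', 'user', 'user']
```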

Step 4: Building the Streamlit Interface

Now that we have our helper functions ready, it’s time to build the Streamlit app that will upload PDFs, fetch responses from OpenAI, and display the chat.

Create a file named app.py:

import streamlit as st
import os
import time
from pinata_helper import upload_pdf_to_pinata
from openai_helper import get_openai_response
from PyPDF2 import PdfReader
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

st.set_page_config(page_title="Chat with PDFs", layout="centered")

st.title("Chat with PDFs using OpenAI and Pinata")

uploaded_file = st.file_uploader("Upload your PDF", type="pdf")

# Initialize session state for chat history and loading state
if "chat_history" not in st.session_state:
    st.session_state.chat_history = []
if "loading" not in st.session_state:
    st.session_state.loading = False

if uploaded_file is not None:
    # Save the uploaded file temporarily (create the temp directory first)
    os.makedirs("temp", exist_ok=True)
    file_path = os.path.join("temp", uploaded_file.name)
    with open(file_path, "wb") as f:
        f.write(uploaded_file.getbuffer())

    # Upload PDF to Pinata
    st.write("Uploading PDF to Pinata...")
    pdf_cid = upload_pdf_to_pinata(file_path)

    if pdf_cid:
        st.write(f"File uploaded to IPFS with CID: {pdf_cid}")

        # Extract PDF content
        reader = PdfReader(file_path)
        pdf_text = ""
        for page in reader.pages:
            pdf_text += page.extract_text() or ""  # extract_text() can return None

        if pdf_text:
            st.text_area("PDF Content", pdf_text, height=200)

            # Allow user to ask questions about the PDF
            user_input = st.text_input("Ask something about the PDF:", disabled=st.session_state.loading)

            if st.button("Send", disabled=st.session_state.loading):
                if user_input:
                    # Set loading state to True
                    st.session_state.loading = True

                    # Display loading indicator
                    with st.spinner("AI is thinking..."):
                        # Simulate loading with sleep (remove in production)
                        time.sleep(1)  # Simulate network delay
                        # Get AI response
                        response = get_openai_response(user_input, pdf_text)

                    # Update chat history
                    st.session_state.chat_history.append({"user": user_input, "ai": response})

                    # Reset loading state
                    st.session_state.loading = False

            # Display chat history
            if st.session_state.chat_history:
                for chat in st.session_state.chat_history:
                    st.write(f"**You:** {chat['user']}")
                    st.write(f"**AI:** {chat['ai']}")

                # Auto-scroll to the bottom of the chat
                st.markdown("<style>div.stChat {overflow-y: auto;}</style>", unsafe_allow_html=True)

                # Add three dots as a loading indicator if still waiting for response
                if st.session_state.loading:
                    st.write("**AI is typing** ...")

        else:
            st.error("Could not extract text from the PDF.")
    else:
        st.error("Failed to upload PDF to Pinata.")


Step 5: Running the App

To run the app locally, use the following command:

streamlit run app.py

After uploading, the file appears in the Pinata dashboard.

Step 6: Explaining the Code

Pinata Upload

  • The user uploads a PDF file, which is temporarily saved locally and uploaded to Pinata using the upload_pdf_to_pinata function. Pinata returns a hash (CID), which represents the file stored on IPFS.

PDF Extraction

  • Once the file is uploaded, the content of the PDF is extracted using PyPDF2. This text is then displayed in a text area.

OpenAI Interaction

  • The user can ask questions about the PDF content using the text input. The get_openai_response function sends the user’s query along with the PDF content to OpenAI, which returns a relevant response.
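One caveat: the whole PDF text is sent with every question, so a large document can exceed the model's context window. A crude guard is to cap the text before sending it; the cap and function below are illustrative only (a production app would count tokens, or chunk the document and retrieve relevant passages):

```python
# Crude context guard: cap the PDF text by character count before it
# is passed to get_openai_response. The 8000 figure is an arbitrary
# example, not a real token limit.
def truncate_context(pdf_text, max_chars=8000):
    return pdf_text if len(pdf_text) <= max_chars else pdf_text[:max_chars]

print(len(truncate_context("x" * 10000)))  # 8000
```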

The final code is available in this GitHub repo:
https://github.com/Jagroop2001/chat-with-pdf

That's all for this blog! Stay tuned for more updates and keep building amazing apps! 💻✨
Happy coding! 😊

Top comments (58)

john

@jagroop2001 ,
Wow, I didn't know we could create such an AI project so easily! Integrating Pinata, OpenAI, and Streamlit opens up so many possibilities for building interactive applications.
I would try with image to text generation. Can you suggest is it feasible ?

Jagroop Singh

@john12 ,
Creating an image-to-text generation application using OpenAI's models is definitely possible! The advanced features of models like GPT-4 and DALL-E make this a realistic and exciting project.

john

@jagroop2001 ,Thank you so much! 🙌✨ I'm really excited about this project! 😊

Jagroop Singh

@john12 , let me know if you need any help in your project.

john

@jagroop2001 , sure thanks

Anh Ta

Great tutorial @jagroop2001 !

Question: How do I restrict the PDFs to files that I have already preloaded? Basically, disallow users from uploading their own documents and only allowing chat with my PDFs.

Jagroop Singh • Edited

@anh_ta_ec592abca466578198 ,
To restrict users to only chat with preloaded PDFs, you can disable the file upload functionality and instead provide a list or dropdown menu of the available PDFs ( Yes you can manually upload files on @pinata server
directly)
. When a user selects a PDF from the list, you can load that specific document for processing. This ensures they can only access and interact with the PDFs you have preloaded, preventing them from uploading any of their own files.

Anh Ta

Thank you! I'll try it.

Web

@jagroop2001
Whoa! Fantastic Project using OpenAI and Pinata. I've tried this and it works well.
Your API keys aren't functioning, by the way. I attempted to utilize this

Jagroop Singh

@works ,
yes because I have shown this for demo purpose after that I delete the @pinata keys and regenerated new ones.

Web

@jagroop2001 , got it.
Can you guide me that how I would build a platform like that where code file uploaded and OpenAI generate code review of it and also provide optimized code correction.

Is this possible with OPEN AI

Jagroop Singh

@works ,
Yes, it's possible to build a platform that allows code file uploads, with OpenAI generating code reviews and offering optimised corrections. You can achieve this by integrating OpenAI's API or Gemini API ( which is free) or Open Source Model for code analysis and Pinata for secure file storage, all within a React-based front-end.

I'm already working on this exact problem statement and plan to publish the project within a few days, using Pinata, OpenAI, React, and other technologies.

Web

Wow , I will be waiting for this as this would really help me to learn with your code refence. @jagroop2001

sewiko • Edited

I didn't get any mail from pinata due to this I am not able to continue with this. @jagroop2001


Jagroop Singh

Please reach out to @pinata team through this email team@pinata.cloud or try with different email account. @hraifi

sewiko

@jagroop2001 thanks, worked for my different email account.


Jagroop Singh

Oh great !!

Martin Baun

I highly recommend you take a look at LangChain :)

Jagroop Singh

Sure @martinbaun , any resources ??

Jagroop Singh

@martinbaun , are you pointing to built this project using Langchain using RAG's ?

@alonso_isidoro

This example is not going to prevent hallucinations, nor are they going to be indexed. In my opinion, having the files in IPFS is a good idea to keep the files in a secondary location before loading them into something like FAISS or some other vector database with a suitable index applied. PDF files need to be processed, they may contain images and tables and these need to be indexed as well.

Jagroop Singh • Edited

@alonsoir ,
I think combining IPFS for storage and a solid indexing approach with vector databases will create a more reliable system for handling diverse content types within PDFs.

flydog259

pinata is not the best choice.

Jagroop Singh

What would you recommend @zh2332926

flydog259

Arweave

caga

Sounds interesting @jagroop2001 ,
Why Pinata when we can store that in any 3rd party bucket or even in backend public folder ?

Jagroop Singh

@paxnw ,
@pinata is great because it leverages IPFS, giving files a decentralized home that's secure, accessible.
Unlike a typical backend folder or cloud storage, IPFS ensures that files are immutable and distributed, reducing dependency on any single server.
This can boost performance, especially for apps that need reliable file access across multiple locations. Plus, Pinata’s API makes integration and file management a breeze!

caga

sound's conveniencing !! @jagroop2001

caga

@jagroop2001 , while running got the error :

OpenAI's Response: Error:

You tried to access openai.Completion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run openai migrate to automatically upgrade your codebase to use the 1.0.0 interface.

Alternatively, you can pin your installation to the old version, e.g. pip install openai==0.28

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742

Do you know how to resolve this ? I tried to find online but doesn't work .

Jagroop Singh

@paxnw ,Yes, I have already experienced it. Fortunately, I know how to fix it:

If you want to upgrade your codebase to be compatible with the new version, you can run:

openai migrate

OR
If you prefer to keep using the old version until you're ready to migrate, you can pin your installation:
pip install openai==0.28

caga

@jagroop2001 , this one worked for me :

openai migrate
Abdul Samad

good idea

Jagroop Singh

Thanks @iabdsam

Femi Akinyemi

But how do we test it outside the local Machine?

Jagroop Singh

@femi_akinyemi , one can either deploy it on streamlit or any other third party server like aws , google cloud , real cloud etc.