Debapriya Das
Create an end-to-end personalised AI chatbot🤖 using Llama-3.1🦙 and Streamlit powered by Groq API

In this tutorial, we'll build and deploy a personalised AI-powered chat application using Streamlit and the llama-3.1-8b-instant model, served through Groq for fast inference. We'll also deploy it for free!
We'll walk through the code, explaining each section and offering useful tips for customization.

Getting Started

First, sign in at https://groq.com/ and click Start Building.
Groq API

Click on Create API Key, create a new key, then copy it and keep it somewhere safe.

Groq API Key

Now install the necessary libraries.
Create a requirements.txt file and paste the following:



groq==0.9.0
streamlit==1.37.0
python-dotenv



Install these using



pip install -r requirements.txt



Let's create our main.py file and import the required libraries:



import os
from dotenv import dotenv_values
import streamlit as st
from groq import Groq



We'll use streamlit for building the chat interface, dotenv for handling environment variables, and groq for fast inference from the AI model.

Configuring the Page

Let's set up the page configuration using Streamlit:



st.set_page_config(
    page_title="The Tech Buddy",
    page_icon="",
    layout="centered",
)



This will give our chat application a professional look and feel.

Handling Environment Variables

We'll use environment variables to store sensitive information like the API key, as well as the application-specific prompts.
In your root folder, create a .env file like this:



GROQ_API_KEY='YOUR_GROQ_API_KEY'

INITIAL_RESPONSE="Enter what you want to show as the first response of your bot, for example: Hello! my friend I am a painter from the 70's. What's up?"

CHAT_CONTEXT="Enter how you want to personalize your chatbot, for example: You are a painter from the 70's and you respond with painting references. (This is for the system)"

INITIAL_MSG="Enter the first message from the assistant to initiate the chat history, for example: Hey there! I know everything about painting, ask me anything. (This is for the assistant)"


Enter fullscreen mode Exit fullscreen mode

This part is crucial for personalizing the application to your needs, so play with it and explore.
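For instance, a filled-in .env (with a placeholder key) for the 70's-painter persona from the examples above might look like:

```
GROQ_API_KEY='YOUR_GROQ_API_KEY'
INITIAL_RESPONSE="Hello! my friend I am a painter from the 70's. What's up?"
CHAT_CONTEXT="You are a painter from the 70's and you respond with painting references."
INITIAL_MSG="Hey there! I know everything about painting, ask me anything."
```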

Now load these environment variables in our Python file:



try:
    secrets = dotenv_values(".env")  # for dev env
    GROQ_API_KEY = secrets["GROQ_API_KEY"]
except KeyError:
    secrets = st.secrets  # for streamlit deployment
    GROQ_API_KEY = secrets["GROQ_API_KEY"]

# Save the API key to environment variable
os.environ["GROQ_API_KEY"] = GROQ_API_KEY

INITIAL_RESPONSE = secrets["INITIAL_RESPONSE"]
INITIAL_MSG = secrets["INITIAL_MSG"]
CHAT_CONTEXT = secrets["CHAT_CONTEXT"]



In the try block we read the environment variables from the .env file so we can run and test the app locally.
When we deploy with Streamlit, however, the app has no access to the .env file. In that case we store our secrets with Streamlit and access them through st.secrets, which returns a dict-like object, just like dotenv_values(".env"). So after deployment, the except block is executed.
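The fallback logic can be sketched with plain dicts standing in for dotenv_values(".env") and st.secrets (hypothetical values): when .env is missing, dotenv_values returns an empty dict, so the key lookup raises KeyError and we fall back to the deployed secrets.

```python
# Sketch of the local-vs-deployed fallback, using plain dicts in place of
# dotenv_values(".env") and st.secrets (hypothetical values).
def load_key(local_secrets, deployed_secrets):
    try:
        return local_secrets["GROQ_API_KEY"]     # works when .env exists
    except KeyError:
        return deployed_secrets["GROQ_API_KEY"]  # Streamlit deployment path

# Locally, .env provides the key:
print(load_key({"GROQ_API_KEY": "gsk_local"}, {}))     # gsk_local
# Deployed, the local dict is empty and we fall back to the deployed secrets:
print(load_key({}, {"GROQ_API_KEY": "gsk_deployed"}))  # gsk_deployed
```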

Initializing the Chat Application

Let's set up the chat history and initialize the AI model:

Groq supported models

I used llama-3.1-8b-instant for my project.

Initialize your model:



# Initialize the chat history if not already present in the Streamlit session
if "chat_history" not in st.session_state:
    st.session_state.chat_history = [
        {"role": "assistant",
         "content": INITIAL_RESPONSE
         },
    ]

client = Groq()



We'll store the chat history in the st.session_state object, which persists data across script reruns within a session.

Displaying the Chat Application

Let's create the chat interface using Streamlit:



# Page title
st.title("Hey Buddy!")
st.caption("Let's go back in time...")

# Display chat history
for message in st.session_state.chat_history:
    with st.chat_message(message["role"], avatar='🤖'):
        st.markdown(message["content"])


We'll use the st.chat_message function to display each message in the chat history.

User Input Field

Let's create a text input field for the user to enter their question:



user_prompt = st.chat_input("Let's chat!")



When the user submits their prompt, we'll append it to the chat history and generate a response from the AI model.
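Before wiring this into Streamlit, here is a standalone sketch (with hypothetical prompt values) of how each turn's payload is assembled: the system context and the seed assistant message come first, followed by the full running history including the new user turn.

```python
# Hypothetical stand-ins for the values loaded from .env
CHAT_CONTEXT = "You are a painter from the 70's."
INITIAL_MSG = "Hey there! I know everything about painting, ask me anything."

chat_history = [{"role": "assistant", "content": "Hello! my friend..."}]
user_prompt = "Who inspired you?"

# Append the user's turn, then build the payload sent to the model
chat_history.append({"role": "user", "content": user_prompt})
messages = [
    {"role": "system", "content": CHAT_CONTEXT},    # persona for the model
    {"role": "assistant", "content": INITIAL_MSG},  # seed assistant turn
    *chat_history,                                  # full running history
]
print([m["role"] for m in messages])  # ['system', 'assistant', 'assistant', 'user']
```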

Generating a Response from the AI Model

Let's create a response from the AI model using the Groq library:



def parse_groq_stream(stream):
    for chunk in stream:
        if chunk.choices:
            if chunk.choices[0].delta.content is not None:
                yield chunk.choices[0].delta.content

if user_prompt:
    with st.chat_message("user", avatar="🗨️"):
        st.markdown(user_prompt)
    st.session_state.chat_history.append(
        {"role": "user", "content": user_prompt})

    messages = [
        {"role": "system", "content": CHAT_CONTEXT
         },
        {"role": "assistant", "content": INITIAL_MSG},
        *st.session_state.chat_history
    ]

    # Display the assistant response in a chat message container
    with st.chat_message("assistant", avatar='🤖'):
        stream = client.chat.completions.create(
            model="llama-3.1-8b-instant",
            messages=messages,
            stream=True  # for streaming the message
        )
        response = st.write_stream(parse_groq_stream(stream))
    st.session_state.chat_history.append(
        {"role": "assistant", "content": response})



We use the client.chat.completions.create() method to generate a stream, parse it into the actual response from the AI model, and then append that response to the chat history.
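To see what parse_groq_stream does, here is a self-contained run against hypothetical chunks shaped like Groq's streaming response (SimpleNamespace stands in for the SDK's chunk objects):

```python
from types import SimpleNamespace as NS

def parse_groq_stream(stream):
    # Yield only the non-empty text deltas from each streamed chunk
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content is not None:
            yield chunk.choices[0].delta.content

# Hypothetical chunks mimicking the shape of a Groq streaming response
fake_stream = [
    NS(choices=[NS(delta=NS(content="Hel"))]),
    NS(choices=[NS(delta=NS(content="lo!"))]),
    NS(choices=[NS(delta=NS(content=None))]),  # final chunk carries no text
]
print("".join(parse_groq_stream(fake_stream)))  # Hello!
```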

Run it locally

Congratulations! You've built a personalised AI-powered chat application using Streamlit, Groq, and the llama-3.1-8b-instant model.

Here is the whole main.py file:



import os
from dotenv import dotenv_values
import streamlit as st
from groq import Groq


def parse_groq_stream(stream):
    for chunk in stream:
        if chunk.choices:
            if chunk.choices[0].delta.content is not None:
                yield chunk.choices[0].delta.content


# streamlit page configuration
st.set_page_config(
    page_title="The 70's Painter",
    page_icon="🎨",
    layout="centered",
)


try:
    secrets = dotenv_values(".env")  # for dev env
    GROQ_API_KEY = secrets["GROQ_API_KEY"]
except KeyError:
    secrets = st.secrets  # for streamlit deployment
    GROQ_API_KEY = secrets["GROQ_API_KEY"]

# save the api_key to environment variable
os.environ["GROQ_API_KEY"] = GROQ_API_KEY

INITIAL_RESPONSE = secrets["INITIAL_RESPONSE"]
INITIAL_MSG = secrets["INITIAL_MSG"]
CHAT_CONTEXT = secrets["CHAT_CONTEXT"]


client = Groq()

# initialize the chat history if not already present in the streamlit session
if "chat_history" not in st.session_state:
    st.session_state.chat_history = [
        {"role": "assistant",
         "content": INITIAL_RESPONSE
         },
    ]

# page title
st.title("Hey Buddy!")
st.caption("Let's go back in time...")
# messages in chat_history are stored as {"role": "user"/"assistant", "content": "msg"}
# display chat history
for message in st.session_state.chat_history:
    with st.chat_message(message["role"], avatar='🤖'):
        st.markdown(message["content"])


# user input field
user_prompt = st.chat_input("Ask me")

if user_prompt:
    with st.chat_message("user", avatar="🗨️"):
        st.markdown(user_prompt)
    st.session_state.chat_history.append(
        {"role": "user", "content": user_prompt})

    # get a response from the LLM
    messages = [
        {"role": "system", "content": CHAT_CONTEXT
         },
        {"role": "assistant", "content": INITIAL_MSG},
        *st.session_state.chat_history
    ]

    # Display assistant response in chat message container
    with st.chat_message("assistant", avatar='🤖'):
        stream = client.chat.completions.create(
            model="llama-3.1-8b-instant",
            messages=messages,
            stream=True  # for streaming the message
        )
        response = st.write_stream(parse_groq_stream(stream))
    st.session_state.chat_history.append(
        {"role": "assistant", "content": response})




To run it locally, enter the following command in your terminal:



streamlit run main.py





Deployment

We are now all set to deploy our app.
First, upload the codebase to a GitHub repository.
Then sign in to your Streamlit account and go to the My apps section:

  • Click on Create app at the upper right corner.
    Create streamlit app

  • Click on the first option:
    streamlit create app

  • Locate your github repository:
    streamlit create app - locate your github repo

  • Locate your main.py file:
    streamlit create app - locate your app file

  • Create a custom URL for your deployed app (optional):
    streamlit create app - create custom url

  • Click on additional settings and paste everything from your .env file (this becomes st.secrets):
    streamlit create app - configure the secrets

  • Click on deploy:
    streamlit create app - deploy

Congrats! You have successfully deployed your own personalised AI app for free.

Conclusion

This tutorial should give you a solid foundation for creating engaging chat interfaces in your future projects.

Additional Tips

  • Make sure to handle errors and exceptions properly to provide a smooth user experience.
  • Customize the chat interface to fit your application's theme and design.
  • Experiment with different AI models and fine-tune the chat application to improve its accuracy and performance.
  • I hope this tutorial has been helpful! If you have any questions or need further clarification on any section, feel free to ask in the comments below.

Resources

Check out my implementation of a personalized DSA instructor app: https://the-tech-buddy.streamlit.app/
Source code: https://github.com/Debapriya-source/llama-3.1-chatbot
