
Building Interactive Applications with Amazon Bedrock, Amazon S3 and Streamlit

👉🏻 This is a step-by-step guide to building an interactive web application that renders interactive elements and integrates Amazon S3 and Amazon Bedrock with a Streamlit application. With a custom-designed, interactive web UI, we can showcase a complete data exploration application along with generative AI capabilities.
(Note: to keep the focus on the deployment process, some prerequisite steps, such as Amazon EC2 configuration, are not covered in this guide.)

  1. Use Case
  2. AWS Architecture
  3. Step-by-step guide on deployment
    • Connect to virtual machine using EC2 instance connect
    • Deploy the Streamlit Application to Amazon EC2
  4. Application Code Tour
    • Build Streamlit Basic WebUI
    • Build an interactive file upload webpage
    • Build a generative AI image generator
  5. Conclusion

Streamlit is an open-source Python library that makes it easy to create and share custom web apps. Streamlit lets you transform Python scripts into interactive web apps in minutes, instead of weeks.

There are a number of use cases for building with Streamlit, such as:

  • building dashboards and data apps
  • generating reports from large documents
  • creating generative AI chatbots

With the current surge of interest in LLM applications, Streamlit allows developers to deliver dynamic, interactive apps with only a few lines of code.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. [2]

Amazon Bedrock takes advantage of the latest generative AI innovations with easy access to a choice of high-performing foundation models (FMs) from leading AI companies, such as Meta, Mistral AI, Stability AI, and Amazon.

There are many different foundation models available in Amazon Bedrock, including text, chat, and image models. Model Evaluation on Amazon Bedrock allows you to use automatic and human evaluations to select FMs for a specific use case. To tailor to your own needs, you can go from generic models to ones that are specialized and customized for your business and use case. [3]

In this use case, we will use Amazon Titan models to build generative AI capabilities; combined with Streamlit’s easy-to-deploy framework, we can quickly develop our own data and AI products.

1. Use Case

In this use case, we are going to build an interactive web app which allows users to explore data, upload files and create generative AI photos.

We will embed Amazon Bedrock FMs to the application, deploy a Streamlit application to an Amazon EC2 instance, and allow users to begin interacting with the application.

2. AWS Architecture

In the development process, we will:

  • Deploy a Streamlit application to Amazon EC2
  • Render Streamlit elements such as chatbot function in a web application
  • Integrate Amazon S3 and Amazon Bedrock with a Streamlit application


3. Step-by-step guide on deployment

There are a number of prerequisite steps before deploying the actual Streamlit application. Before starting the steps below, you should have:

  • created an Amazon EC2 instance
  • set up an S3 bucket to store uploaded files
  • created a GitHub repository to hold the Streamlit Python code
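
For the Streamlit pages below to work, the identity used on the EC2 instance also needs permissions for the S3 bucket and for Bedrock model invocation. The following is a minimal policy sketch, not the article's original setup — the bucket name is a placeholder, and you should tighten the resources to your own account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<your-bucket-name>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::<your-bucket-name>/*"
    },
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel"],
      "Resource": "arn:aws:bedrock:*::foundation-model/amazon.titan-image-generator-v1"
    }
  ]
}
```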

3.1 Connect to virtual machine using EC2 instance connect

In the EC2 console, right-click the pre-configured instance name and connect to it using EC2 Instance Connect to access a shell.

The following deployment steps will be completed in this shell environment.

3.2 Deploy the Streamlit Application to Amazon EC2

Note: there will be a detailed walkthrough of the application code in section 4.

In the terminal, use the following set of commands to configure the AWS account credentials:

aws configure set aws_access_key_id <Your access_key_id> &&
aws configure set aws_secret_access_key <Your secret_access_key> &&
aws configure set default.region <Your AWS region>

In the above command, provide your AWS account credentials to be configured.

The following command will display the Amazon S3 bucket name required by the Streamlit application:

echo $BUCKET_NAME

In this use case, the command prints the S3 bucket name.

Enter the following command to clone the GitHub repository:

git clone https://github.com/<Your github directory>.git 

You may set up different branches in your GitHub repository to store application code.

To deploy the application, enter the following set of commands:

cd src/ && pip install -r requirements.txt


This command installs the required Python packages for the Streamlit application.

To start the Streamlit application with its web UI (the Python code walkthrough is in the next section), run:

streamlit run Basics.py


This command starts the Streamlit application on the EC2 instance.
The Basics.py file is the main application file that you run. Streamlit uses the pages/ directory to render the additional pages in a sidebar.
Copy the external URL and open it in a new browser tab to access the Streamlit application.
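
Assuming the repository follows Streamlit's multipage convention, the src/ directory might look like this (the file names under pages/ are illustrative, not taken from the article's repository):

```
src/
├── Basics.py              # main entry point: streamlit run Basics.py
├── requirements.txt
└── pages/
    ├── File_Upload.py     # hypothetical page name, shown in the sidebar
    └── Image_Generator.py # hypothetical page name, shown in the sidebar
```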

4. Application Code Tour

You may store the Streamlit application code in a src/ directory.

To run the application, first install the necessary libraries, listed in the requirements.txt file:

streamlit
boto3
pandas
streamlit_pdf_viewer
  • The streamlit library will be referenced in each of the application files. This library contains the UI elements that will render the various widgets and fields used in the application.
  • boto3 will be used to interact with the AWS services. To use this library, you must configure AWS credentials within the environment in which the Streamlit application is served.
  • The pandas library is used to present data within the application.
  • streamlit_pdf_viewer is a third-party, custom Streamlit component that allows you to control how PDF files are displayed.

4.1 Build Streamlit Basic WebUI

Refer to the Python code below to create a simple web UI that displays text, data, a chat input, and a Markdown editor:

import streamlit as st
import pandas as pd
import random

# Streamlit page configuration
st.set_page_config(layout="wide", page_title="Streamlit Basics")
st.title("Streamlit Basics")

# Streamlit container
with st.container(border=True):
    # Tabs
    text, data, chat, markdown = st.tabs(["Text", "Data", "Chat", "Markdown Editor"])

    # Tab content

    # Displaying text
    with text:
        st.title("Titles")
        st.divider()
        st.header("Headers")
        st.subheader("Subheaders")
        st.text("Normal text")
        st.markdown("***Markdown Text***")
        st.code("for i in range(8): print(i)")


    # Displaying data as a data frame
    with data:
        st.subheader("Display data using a data frame")
        df = pd.DataFrame(
            {
                "name": ["Spirited Away", "Princess Mononoke", "My Neighbor Totoro"],
                "url": ["https://m.imdb.com/title/tt0245429/", "https://m.imdb.com/title/tt0119698/", "https://m.imdb.com/title/tt0096283/"],
                "reviews": [random.randint(0, 1000) for _ in range(3)],
                "views_history": [[random.randint(0, 5000) for _ in range(30)] for _ in range(3)],
            }
        )
        st.dataframe(
            df,
            column_config={
                "name": "Title",
                "reviews": st.column_config.NumberColumn(
                    "Reviews",
                    help="Total number of reviews",
                    format="%d ⭐",
                ),
                "url": st.column_config.LinkColumn("IMDb page"),
                "views_history": st.column_config.LineChartColumn(
                    "Views (past 30 days)", y_min=0, y_max=5000
                ),
            },
            hide_index=True,
        )

    # Chat input and variables
    with chat:
        st.subheader("Enter in a prompt to the chat")
        prompt = st.chat_input("Say something")
        if prompt:
            st.write(f"You entered the following prompt: :blue[{prompt}]")

    # Displaying Markdown and editor
    with markdown:
        st.subheader("Edit and render Markdown")
        md = st.text_area('Type in your markdown string (without outer quotes)')
        with st.container():
            st.divider()
            st.subheader("Rendered Markdown")
            st.markdown(md)

💡 Containers (st.container()) can be inserted into an app to provide structure for elements you choose to place inside.

Tabs within the container are created using the st.tabs() call. This method accepts a list of tab names as an argument and outputs separate tab objects.

The st.dataframe element accepts the df object as an argument along with a column_config. This configuration dictates how the data frame is displayed on the page.

The resulting Streamlit Basics webpage displays simple text and data information, renders the data frame, and provides a simple chat input that lets users enter a prompt.

Further use cases could extend this to data portfolios with an embedded chat feature that allows users to ask questions about the data.

4.2 Build an interactive file upload webpage

Refer to the Python code below to create an interactive webpage that lets users upload files to an S3 bucket:

import os
import boto3
import streamlit as st
from streamlit_pdf_viewer import pdf_viewer
from io import BytesIO

# Amazon S3 client
s3 = boto3.client('s3')
bucket_name = os.environ['BUCKET_NAME']

st.set_page_config(layout="wide")

# Streamlit columns
upload_s3, read_s3 = st.columns(2)

# Column 1: Upload to Amazon S3 using Boto3
with upload_s3:
    st.subheader("Upload to Amazon S3")
    obj = st.file_uploader(label=f"Uploading to: :green[{bucket_name}]")
    if obj is not None:
        s3.upload_fileobj(obj, bucket_name, obj.name)

# Column 2: Read from Amazon S3 using Boto3
with read_s3:
    st.subheader("Read from Amazon S3")
    response = s3.list_objects_v2(Bucket=bucket_name)
    object_list = []

    if 'Contents' in response:
        for obj in response['Contents']:
            if not obj['Key'].endswith('/'):
                object_list.append(obj['Key'])
    else:
        st.write("S3 bucket is empty")

    selected_obj = st.selectbox(f"Selecting from: :green[{bucket_name}]", object_list, index=None)
    st.caption(f"You selected: :blue[{selected_obj}]")

st.divider()

# Displaying the selected Amazon S3 object
if selected_obj is None:
    st.caption("Please select an object from S3 bucket")
else:
    response = s3.get_object(Bucket=bucket_name, Key=selected_obj)
    body = response['Body'].read()

    # Displaying the object based on the file type
    if selected_obj.endswith(".png") or selected_obj.endswith(".jpg"):
        st.image(BytesIO(body))
    elif selected_obj.endswith(".pdf"):
        pdf_viewer(body)
    else:
        st.write(body.decode('utf-8'))

💡 The boto3 library will create an s3 client that will interact with the S3 service. The custom streamlit_pdf_viewer component and the BytesIO module will aid in rendering selected S3 objects to the page.
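
The display logic at the bottom of the page dispatches on the file extension. As a sketch, that branch can be factored into a small, testable helper — the function name render_kind is ours, not part of the original code, and unlike the inline endswith checks it also tolerates uppercase extensions:

```python
import os

def render_kind(key: str) -> str:
    """Classify an S3 object key by extension so the page knows
    which Streamlit element should render it."""
    ext = os.path.splitext(key)[1].lower()
    if ext in (".png", ".jpg", ".jpeg"):
        return "image"   # rendered with st.image
    if ext == ".pdf":
        return "pdf"     # rendered with pdf_viewer
    return "text"        # decoded as UTF-8 and written with st.write

print(render_kind("report.PDF"))      # pdf
print(render_kind("photos/cat.jpg"))  # image
print(render_kind("notes.txt"))       # text
```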

Let’s test it with a file upload. In the left column (column 1 in the code above), we upload a PitchBook PDF document.

Then we go to the S3 console to verify that the PDF has been uploaded successfully.

4.3 Build a generative AI image generator

Refer to the Python code below to create the Amazon Bedrock Titan Image Generator UI:

import boto3
import json
import base64
import streamlit as st
from io import BytesIO

# Amazon Bedrock client
bedrock = boto3.client('bedrock-runtime')
bedrock_model_id = "amazon.titan-image-generator-v1"

# Convert image data to BytesIO object
def decode_image(image_data):
    image_bytes = base64.b64decode(image_data)
    return BytesIO(image_bytes)

# Invoke Bedrock image model to generate image
def generate_image(prompt):
    body = json.dumps(
        {
            "taskType": "TEXT_IMAGE",
            "textToImageParams": {
                "text":prompt
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "quality": "standard",
                "height": 768,
                "width": 768,
                "cfgScale": 8.0,
                "seed": 100             
            }
        }
    )

    response = bedrock.invoke_model(
                modelId=bedrock_model_id,
                accept="application/json", 
                contentType="application/json",
                body=body
            )

    response_body = json.loads(response["body"].read())
    image_data = response_body["images"][0]

    return decode_image(image_data)

# Streamlit UI

with st.container():
    st.header("Amazon Bedrock Titan Image Generator", anchor=False, divider="rainbow")

    input_column, result_column = st.columns(2)

    # Text Input
    with input_column:
        st.subheader("Describe an image", anchor=False)
        prompt_text = st.text_input("Example: Two dogs sharing a bowl of spaghetti", key="prompt")

        # Generate and Clear buttons

        # Clear field function accessing session state
        def clear_field(prompt):
            st.session_state.prompt = prompt

        generate, clear = st.columns(2, gap="small")

        with generate:
            generate_button = st.button("Generate", use_container_width=True)
        # Clear field callback 
        with clear:
            st.button('Clear', on_click=clear_field, args=[''], use_container_width=True)

    # Resulting image column
    with result_column:
        st.subheader("Generated image", anchor=False)
        st.caption('Your image will appear here.')
        if generate_button:
            # Displays spinner + message while executing the generate_image function
            with st.spinner("Generating image..."):
                image = generate_image(prompt_text)
            st.image(image, use_column_width=True)

💡 The generate_image function passes the imageGenerationConfig, taskType, and user prompt to the Bedrock invoke_model method. The method will return a JSON body that is parsed to retrieve the generated image.

The generated image is decoded using the decode_image function and returned.
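
The parsing and decoding steps can be exercised without calling Bedrock by simulating a response body. The bytes below are placeholders, not a real image; real Titan responses carry base64-encoded image data under the "images" key:

```python
import base64
import json
from io import BytesIO

# Simulate the JSON body returned by invoke_model (placeholder bytes,
# not an actual PNG image).
fake_image_bytes = b"\x89PNG\r\n\x1a\nplaceholder"
simulated_body = json.dumps(
    {"images": [base64.b64encode(fake_image_bytes).decode("utf-8")]}
)

# Same parsing path used by generate_image and decode_image above
response_body = json.loads(simulated_body)
image_data = response_body["images"][0]
image = BytesIO(base64.b64decode(image_data))

print(image.read() == fake_image_bytes)  # True
```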

The clear_field function interacts with the Streamlit session state. It accepts a prompt and updates the value of the st.session_state.prompt. Session state is used to store and persist state that can be manipulated with the use of callback functions. clear_field is a callback function that will get invoked when a user clicks the Clear button.
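
Conceptually, outside a running Streamlit app, the callback is just a function that mutates shared state. A plain-dict stand-in for st.session_state illustrates the same pattern:

```python
# A plain dict standing in for st.session_state (illustration only;
# inside Streamlit, the key "prompt" matches the text_input's key).
session_state = {"prompt": "Two dogs sharing a bowl of spaghetti"}

def clear_field(state, prompt=""):
    """Mimics the on_click callback: overwrite the stored prompt."""
    state["prompt"] = prompt

clear_field(session_state)        # what the Clear button's on_click triggers
print(session_state["prompt"])    # prints an empty line: the field was cleared
```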

The result_column checks if the generate_button value is true and calls the st.spinner element to display a temporary message as the generate_image function works in the background.

The resulting image is then passed to a st.image element to render it on the page.

Let’s try the image generator by entering a prompt to create an image.

5. Conclusion

In this article, we have:

  • Introduced the Streamlit Python library for creating data apps and interactive apps with minimal code
  • Deployed the Streamlit application to an Amazon EC2 instance
  • Interacted with each application webpage and its interactive features
  • Walked through the application code that integrates Amazon Bedrock and Amazon S3

References and further reading:

  1. What is Streamlit? https://github.com/streamlit/streamlit
  2. What is Amazon Bedrock? https://aws.amazon.com/bedrock/
  3. Amazon Bedrock Developer Experience. https://aws.amazon.com/bedrock/developer-experience/
  4. Quickly build Generative AI applications with Amazon Bedrock. https://community.aws/content/2ddby9SeCKALvSz0CWUtx4Q4fPX/amazon-bedrock-quick-start?lang=en
