Chloe Condon πŸŽ€

Using the Gemini API on Cloud Run to Build a Chat Application

(🎨 cover image created with Imagen 3 in Gemini!)

Welcome to my blog series on building with Google AI tools! In this post, we'll create a simple chat application powered by Gemini and hosted on Cloud Run. If you're experimenting with LLMs or looking to integrate AI into your web apps, you're in the right place. So, let's start learning!

πŸ‘©β€πŸ« What are we building?

We'll build a web-based chat interface that connects to the Gemini API and returns conversational responses. Our app will run on Cloud Run, and we'll use Cloud Build and Artifact Registry to containerize and deploy it.

By the end of this tutorial, you'll:

  • Set up a Python web app that talks to the Gemini API
  • Containerize your app using Docker
  • Deploy the app to Cloud Run using Google Cloud tools
  • Start thinking about how to integrate LLMs into your own projects

So, think less HAL, more SmarterChild. πŸ€–πŸ’¬ Let's dive in!

πŸ“ Prerequisites

To get started, you'll need to make sure you have:

  • A Google Cloud project
  • The gcloud CLI installed and authenticated
  • Docker installed
  • Vertex AI API enabled on your project (one command, shown below)
  • πŸ’» Optional: Use Cloud Shell for a fully configured environment
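
If you haven't enabled the Vertex AI API on your project yet, a single gcloud command takes care of it:

gcloud services enable aiplatform.googleapis.com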

βš™οΈ Step 1: Clone the Chat App Template

To get started, let's pull down a prebuilt Python Flask app:

git clone https://github.com/ChloeCodesThings/chat-app-demo
cd chat-app-demo


You'll see our app has:

  • app.py - Flask routes and Gemini API logic (sketched below)
  • index.html - A basic chat UI
  • Dockerfile - Instructions for building the container
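
To give you a feel for the wiring before we deploy, here's a minimal sketch of the Flask side of app.py. The exact route names in the template may differ; create_session and response are the Gemini helpers we'll look at in Step 4:

from flask import Flask, render_template, request, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # Serve the chat UI
    return render_template("index.html")

@app.route("/chat", methods=["POST"])
def chat():
    # Pass the user's message to Gemini and return the reply as JSON
    # (create_session and response are defined in Step 4)
    message = request.json.get("message", "")
    chat_session = create_session()
    return jsonify({"response": response(chat_session, message)})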

πŸ³πŸ“ Step 2: Build the Docker Image with Cloud Build

First, you'll need to set some environment variables:

export PROJECT_ID=$(gcloud config get-value project)
export REGION=us-central1
export AR_REPO=chat-app-repo
export SERVICE_NAME=chat-gemini-app

Create the Artifact Registry repo:

gcloud artifacts repositories create $AR_REPO \
--repository-format=docker \
--location=$REGION

Then build and push the image:

gcloud builds submit --tag $REGION-docker.pkg.dev/$PROJECT_ID/$AR_REPO/$SERVICE_NAME
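Wondering what Cloud Build is actually building? The Dockerfile in the repo is the source of truth, but a typical Dockerfile for a Flask app on Cloud Run looks something like this (assuming gunicorn is listed in requirements.txt; Cloud Run injects a PORT variable the container must listen on):

FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Shell-form CMD so $PORT (8080 by default) gets expanded
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 app:app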

πŸš€ Step 3: Deploy to Cloud Run

Now deploy the app:

gcloud run deploy $SERVICE_NAME \
  --image=$REGION-docker.pkg.dev/$PROJECT_ID/$AR_REPO/$SERVICE_NAME \
  --platform=managed \
  --region=$REGION \
  --allow-unauthenticated \
  --set-env-vars=GCP_PROJECT=$PROJECT_ID,GCP_REGION=$REGION

You’ll get a URL like https://chat-gemini-app-xxxxxx.run.app. Open it to chat with Gemini!
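
If you ever lose track of that URL, you can look it up again with gcloud:

gcloud run services describe $SERVICE_NAME \
  --region=$REGION \
  --format='value(status.url)'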

πŸ”Žβœ¨ Step 4: How the Gemini Integration Works

Let’s peek at the backend logic. In app.py, this is where the magic happens:

import os

import vertexai
from vertexai.generative_models import GenerativeModel

# Initialize the Vertex AI SDK with the project/region we set at deploy time
vertexai.init(project=os.environ["GCP_PROJECT"], location=os.environ["GCP_REGION"])

def create_session():
    # Load the Gemini model and start a fresh chat session
    chat_model = GenerativeModel("gemini-2.0-flash")
    chat = chat_model.start_chat()
    return chat

def response(chat, message):
    # Send the user's message and return the model's text reply
    result = chat.send_message(message)
    return result.text

So, each time the user submits a message, our app will:

  • Start a new chat session

  • Send the message to the Gemini model

  • Return the response as JSON

Note that because a new session is created for every message, each request is stateless; Gemini won't remember earlier turns. (We'll make sessions persistent in a future post.)

πŸ€Έβ€β™€οΈ Ok, let's try it out!

Enter a question like:

"What is Google Cloud Platform?"

…and you’ll get a contextual, LLM-generated response.
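
Prefer the terminal? Assuming the app exposes the POST /chat route from the sketch in Step 1, you can hit the service directly with curl:

curl -X POST "https://chat-gemini-app-xxxxxx.run.app/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "What is Google Cloud Platform?"}'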

🎯 What’s Next?

Yay, we did it! πŸ₯³ This is just the beginning! In future posts, we’ll cover:

  • Enhancing the chat UI

  • Keeping sessions persistent

  • Using system prompts to shape Gemini’s responses

  • Securing your endpoints with Firebase Auth or API Gateway

🎊 Give yourself a high-five!
In this post, you learned how to:

  • Build and deploy a simple Flask chat app using Gemini

  • Containerize it with Docker and push to Artifact Registry

  • Deploy to Cloud Run with minimal config

Want to learn more? Here's a tutorial you can check out.

Until next time, folks! -Chloe
