DEV Community


Building an AI-Powered iOS Chat App with Amazon Bedrock and Swift

This post will teach you how to build a native iOS app in Swift that uses Amazon Bedrock to power AI chat and image-generation features. We'll use serverless resources, including AWS Lambda and Amazon API Gateway, for the backend.

The sample application includes the following:

  1. A mobile application written in Swift
  2. An integration with Amazon Bedrock using the amazon.titan-image-generator-v1 and ai21.j2-mid-v1 models
  3. Serverless backend processing using AWS Lambda with TypeScript
  4. RESTful APIs implemented with Amazon API Gateway for communication
  5. Amazon CloudWatch Logs for monitoring the AWS Lambda functions and viewing their logs

The final result will be the following app:

(Screenshot of the finished app)

Prerequisites

Before you get started, make sure you have the following:

  • An AWS account
  • Node.js v18 or later
  • Serverless Framework, AWS SAM, or AWS CDK, depending on which Infrastructure as Code tool you prefer (I'll be using the Serverless Framework)
  • A package manager (I'll be using yarn)
  • Xcode version 15 or later

Architecture

(Architecture diagram)

  1. Users access the application from their mobile devices and the app sends a request to Amazon API Gateway
  2. API Gateway routes the request to either the ImageFunction or TextFunction Lambda
  3. AWS Lambda communicates with an Amazon Bedrock model and retrieves the generated response in JSON format
  4. The processed response is sent back to the app for display, enabling content or chat interaction
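Concretely, the contract between the app and the two routes can be sketched as follows. This is an inferred sketch based on the Lambda and Swift code later in the post; the helper names are illustrative:

```typescript
// Both routes accept the same JSON request body: {"prompt": "..."}.
type PromptRequest = { prompt: string };

function buildRequestBody(prompt: string): string {
  const req: PromptRequest = { prompt };
  return JSON.stringify(req);
}

// The routes differ in what they return: /bedrock/text responds with plain
// text, while /bedrock/image responds with a base64-encoded image.
function parseTextResponse(body: string): string {
  return body.trim();
}

function parseImageResponse(body: string): Uint8Array {
  return Uint8Array.from(Buffer.from(body, 'base64'));
}
```

The iOS app in the second half of this post implements exactly this parsing on the Swift side.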

To access Amazon Bedrock, start with the following steps

  1. Log in to your AWS Console
  2. Go to Amazon Bedrock
  3. In the left navigation, click Model access
  4. Request access. Note that the request body varies depending on the model you select. For images I will be using the Titan Image Generator G1 model (amazon.titan-image-generator-v1), and for text the AI21 Jurassic-2 Mid model (ai21.j2-mid-v1)
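The two models expect quite different request bodies. As a preview, here is a sketch of both, taken from the Lambda code later in this post; the prompt values are illustrative:

```typescript
// Request body for amazon.titan-image-generator-v1 (image generation).
const titanImageBody = {
  taskType: 'TEXT_IMAGE',
  textToImageParams: { text: 'a lighthouse at dawn' }, // illustrative prompt
  imageGenerationConfig: { cfgScale: 10, seed: 0, width: 512, height: 512, numberOfImages: 1 }
};

// Request body for ai21.j2-mid-v1 (text generation).
const j2TextBody = {
  prompt: 'Tell me a joke', // illustrative prompt
  maxTokens: 200,
  temperature: 0.7,
  topP: 1
};
```

This is why the two features get separate Lambda functions rather than one shared handler.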

Serverless backend processing using AWS Lambda with TypeScript

You can choose your preferred tool for deploying Lambda functions, but I’ll provide the code necessary to create them:

Text Lambda

Please take note of a few important points below:

  1. You need to import the @aws-sdk/client-bedrock-runtime package
  2. You need to set the modelId
  3. The prompt is the text sent from your app through the API

import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

const client = new BedrockRuntimeClient({ region: 'us-east-1' });

export async function handler(event: any) {
  const prompt = JSON.parse(event.body).prompt;

  const input = {
    modelId: 'ai21.j2-mid-v1',
    contentType: 'application/json',
    accept: '*/*',
    body: JSON.stringify({
      prompt: prompt,
      maxTokens: 200,
      temperature: 0.7,
      topP: 1,
      stopSequences: [],
      countPenalty: { scale: 0 },
      presencePenalty: { scale: 0 },
      frequencyPenalty: { scale: 0 }
    })
  };

  try {
    const data = await client.send(new InvokeModelCommand(input));
    // The response body is a Uint8Array containing a JSON document
    const parsedData = JSON.parse(Buffer.from(data.body).toString('utf8'));
    const text = parsedData.completions[0].data.text;
    // CORS headers belong on the Lambda response, not on the Bedrock request
    return {
      statusCode: 200,
      headers: {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Headers': 'Content-Type',
        'Access-Control-Allow-Methods': 'POST'
      },
      body: text
    };
  } catch (error) {
    console.error(error);
    return { statusCode: 500, body: 'Error invoking model' };
  }
}
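The text handler digs the generated answer out of the AI21 response. Here is a sketch of the slice of the response shape the code relies on, reconstructed only from the fields the handler accesses (not the full schema):

```typescript
// Partial shape of the ai21.j2-mid-v1 response, inferred from the handler above.
interface J2Response {
  completions: { data: { text: string } }[];
}

// Mirrors the extraction the Lambda performs on the decoded response body.
function extractText(rawBody: string): string {
  const parsed: J2Response = JSON.parse(rawBody);
  return parsed.completions[0].data.text;
}
```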

Image Lambda

import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

const client = new BedrockRuntimeClient({ region: 'us-east-1' });

export async function handler(event: any) {
  const prompt = JSON.parse(event.body).prompt;

  const input = {
    modelId: 'amazon.titan-image-generator-v1',
    contentType: 'application/json',
    accept: 'application/json',
    body: JSON.stringify({
      taskType: 'TEXT_IMAGE',
      textToImageParams: {
        text: prompt
      },
      imageGenerationConfig: {
        cfgScale: 10,
        seed: 0,
        width: 512,
        height: 512,
        numberOfImages: 1
      }
    })
  };

  try {
    const response = await client.send(new InvokeModelCommand(input));
    // The response body is a Uint8Array containing a JSON document
    const parsedData = JSON.parse(new TextDecoder('utf-8').decode(response.body));
    // images[0] is the generated image as a base64-encoded string;
    // CORS headers belong on the Lambda response, not on the Bedrock request
    return {
      statusCode: 200,
      headers: {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Headers': 'Content-Type',
        'Access-Control-Allow-Methods': 'POST'
      },
      body: parsedData.images[0]
    };
  } catch (error) {
    console.error(error);
    return { statusCode: 500, body: 'Error generating image' };
  }
}

Now deploy your Lambdas. If you're using the Serverless Framework, you can use the following configuration:

service: aws-bedrock-ts
frameworkVersion: '3'

provider:
  name: aws
  runtime: nodejs18.x
  timeout: 30
  iam:
    role:
      statements:
        - Effect: 'Allow'
          Action:
            - 'bedrock:InvokeModel'
          Resource: '*'

functions:
  bedrockText:
    handler: src/bedrock/text.handler
    name: 'aws-bedrock-text'
    events:
      - httpApi:
          path: /bedrock/text
          method: post
  bedrockImage:
    handler: src/bedrock/image.handler
    name: 'aws-bedrock-image'
    events:
      - httpApi:
          path: /bedrock/image
          method: post

The deploy output will include the API endpoints for your Lambdas; save them.
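Before wiring up the iOS app, you can sanity-check the text endpoint from any Node.js 18+ script (which ships a global fetch). The base URL below is a placeholder; substitute the endpoint from your own deploy output:

```typescript
// Placeholder base URL -- replace with the endpoint from your deploy output.
const baseURL = 'https://example.execute-api.us-east-1.amazonaws.com';

// Build the request the Lambda expects: a POST with a JSON body {"prompt": ...}.
function buildPromptRequest(path: string, prompt: string) {
  return {
    url: `${baseURL}${path}`,
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt })
    }
  };
}

// Send a prompt to the text route and return the plain-text answer.
async function askBedrock(prompt: string): Promise<string> {
  const { url, init } = buildPromptRequest('/bedrock/text', prompt);
  const response = await fetch(url, init);
  return response.text();
}
```

If this round-trips successfully, the Swift client below only needs to reproduce the same request shape.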

Developing your iOS app

Setting Up the Project

  1. Start by creating a new Swift project in Xcode.
  2. Name the project according to your app (e.g., BedrockSwift).

ChatMessage Model

First, define a model to store your chat messages, each of which can hold either text or an image. I'll name my file ChatMessage.swift:

import UIKit

struct ChatMessage: Equatable {
    var text: String?
    var image: UIImage?
    var isImage: Bool
    var isUser: Bool
}

Service for Handling API Requests

This service is responsible for managing API interactions, including sending prompts to your Lambda functions and processing the responses. Make sure to update the base URL to your own API Gateway endpoint. I'll name my file APIService.swift:

import UIKit

class APIService: ObservableObject {

    @Published var messages: [ChatMessage] = []

    private func getEndpointURL(for type: String) -> URL? {
        let baseURL = "https://example.execute-api.region.amazonaws.com/bedrock"

        switch type {
        case "text":
            return URL(string: "\(baseURL)/text")
        case "image":
            return URL(string: "\(baseURL)/image")
        default:
            return nil
        }
    }

    func addUserPrompt(_ prompt: String) {
        messages.append(ChatMessage(text: prompt, image: nil, isImage: false, isUser: true))
    }

    func sendRequest(prompt: String, type: String, completion: @escaping () -> Void) {
        guard let url = getEndpointURL(for: type) else {
            print("Invalid URL for type: \(type)")
            return
        }

        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        let parameters: [String: Any] = ["prompt": prompt]

        request.httpBody = try? JSONSerialization.data(withJSONObject: parameters)
        request.addValue("application/json", forHTTPHeaderField: "Content-Type")

        URLSession.shared.dataTask(with: request) { data, response, error in
            // Always call completion on the main queue so the UI can stop its spinner
            if let error = error {
                print("Error: \(error)")
                DispatchQueue.main.async { completion() }
                return
            }

            guard let data = data else {
                DispatchQueue.main.async { completion() }
                return
            }

            DispatchQueue.main.async {
                if type == "text" {
                    if let responseString = String(data: data, encoding: .utf8) {
                        let trimmedResponse = responseString.trimmingCharacters(in: .whitespacesAndNewlines)
                        self.messages.append(ChatMessage(text: trimmedResponse, image: nil, isImage: false, isUser: false))
                    }
                } else if let base64String = String(data: data, encoding: .utf8),
                          let imageData = Data(base64Encoded: base64String, options: .ignoreUnknownCharacters),
                          let image = UIImage(data: imageData) {
                    self.messages.append(ChatMessage(text: nil, image: image, isImage: true, isUser: false))
                }
                completion()
            }
        }.resume()
    }
}

View for Chat Interface

Now, create the main view that will handle the UI and display the chat messages. I'll name my file BedrockView.swift:

import SwiftUI

struct BedrockView: View {

    @StateObject var apiService = APIService()
    @State private var prompt: String = ""
    @State private var selectedType = 0
    @State private var isLoading = false

    var body: some View {
        VStack {
            ScrollViewReader { scrollViewProxy in
                ScrollView {
                    VStack {
                        ForEach(apiService.messages.indices, id: \.self) { index in
                            if apiService.messages[index].isImage, let image = apiService.messages[index].image {
                                HStack {
                                    Spacer()
                                    Image(uiImage: image)
                                        .resizable()
                                        .scaledToFit()
                                        .frame(height: 200)
                                        .frame(maxWidth: .infinity, alignment: .leading)
                                        .cornerRadius(10)
                                        .padding(.vertical, 5)
                                }
                            } else if let text = apiService.messages[index].text {
                                HStack {
                                    if apiService.messages[index].isUser {
                                        Spacer()
                                        Text(text)
                                            .padding(.vertical, 6)
                                            .padding(.horizontal, 12)
                                            .background(Color.blue.opacity(0.2))
                                            .cornerRadius(10)
                                            .frame(maxWidth: .infinity, alignment: .trailing)
                                    } else {
                                        Text(text)
                                            .padding(.vertical, 6)
                                            .padding(.horizontal, 12)
                                            .background(Color.gray.opacity(0.2))
                                            .cornerRadius(10)
                                            .frame(maxWidth: .infinity, alignment: .leading)
                                    }
                                }
                                .padding(.vertical, 1)
                            }
                        }
                        if isLoading {
                            ProgressView()
                                .padding(.vertical, 20)
                        }
                    }
                    .padding(.horizontal)
                    .id("BOTTOM")
                }
                .onChange(of: apiService.messages) { _ in
                    withAnimation {
                        scrollViewProxy.scrollTo("BOTTOM", anchor: .bottom)
                    }
                }
            }

            VStack {
                TextField("Enter prompt...", text: $prompt)
                    .textFieldStyle(.roundedBorder)
                    .padding(.horizontal)
                    .padding(.vertical, 10)

                HStack {
                    Picker(selection: $selectedType, label: Text("Type")) {
                        Text("Text").tag(0)
                        Text("Image").tag(1)
                    }
                    .pickerStyle(SegmentedPickerStyle())
                    .frame(maxWidth: .infinity)
                    .padding(.leading, 10)

                    Button(action: {
                        if prompt.isEmpty { return }
                        apiService.addUserPrompt(prompt)
                        let type = selectedType == 0 ? "text" : "image"
                        isLoading = true
                        apiService.sendRequest(prompt: prompt, type: type) {
                            isLoading = false
                        }
                        prompt = ""
                    }) {
                        Text("Send")
                            .frame(width: 100, height: 2)
                            .padding()
                            .background(Color.primary)
                            .foregroundColor(.white)
                            .cornerRadius(10)
                    }
                }
                .padding(.horizontal)
            }
            .padding()
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        BedrockView()
    }
}

App Entry Point

Open YourAppNameApp.swift and update the default entry point that Xcode created when you set up your SwiftUI project so that it launches the view we built above. Mine is called BedrockView.

import SwiftUI

@main
struct BedrockSwiftApp: App {
    var body: some Scene {
        WindowGroup {
            BedrockView()
        }
    }
}

Running the App in Xcode

Now you're ready to run your app! Follow these steps to launch it in Xcode:

  1. Select a Device: Choose a simulator or connected device from the toolbar.
  2. Build and Run: Click the "Run" button (or press Cmd + R) to build and run the app.

This will launch the app on your selected device, allowing you to interact with Amazon Bedrock's chat and image generation features.

A couple of notes

As this is a local app for testing, I've set Access-Control-Allow-Origin to `*`. Additionally, you may need to adjust the CORS settings in API Gateway.

Note that API calls may incur a small cost. For detailed pricing information, please refer to the Amazon Bedrock pricing page.

GitHub Repositories

The source code for this project is available on GitHub:

Conclusion

In this post, I’ve walked you through building a simple AI chat application for iOS, using native Swift alongside serverless AWS services. By integrating Amazon Bedrock's generative AI models with services like AWS Lambda and API Gateway, we’ve created a streamlined solution that brings the power of AWS into a native mobile experience. Please note that I’ve aimed to use only native components in the app, though there are certainly areas for improvement. Additionally, securing your API with tokens is essential; I’ll cover this topic in detail in an upcoming post.

Top comments (1)

Boopathi

This is a great tutorial for anyone interested in building AI-powered chat apps! I'm especially interested in seeing how you integrated Amazon Bedrock into a native Swift iOS app. Can't wait to try this out!