Generative AI - Has Generative AI captured your imagination as much as it has mine?
Generative AI is indeed fascinating! Advancements in foundation models have opened up incredible possibilities. Who would have imagined that, simply by providing a prompt to a Generative AI service and foundation model, you could generate content summaries from transcripts, build chatbots that answer questions on any subject without writing any code, or create custom images straight from your imagination? It's truly remarkable to witness the power and potential of Generative AI unfold.
In this article, I will show you how to build a serverless GenAI solution that creates call center transcript summaries via a REST API, AWS Lambda, and the Amazon Bedrock Converse API. I previously posted an article with detailed steps on creating a GenAI solution for summarizing call center transcripts by passing in a transcript file containing the conversation between a call center support staff member and a customer.
In this example, however, I will extend that solution by adding a guardrail that protects customer PII, so that this information is masked in the response returned by the Amazon Bedrock Converse API to the end consumer of the API.
Examples of PII (Personally Identifiable Information): SSN, account number, phone, email, address, etc.
Amazon Bedrock is a fully managed service that offers a choice of many foundation models, such as Anthropic Claude, AI21 Jurassic-2, Stability AI, Amazon Titan, and others.
I will use the recently released Anthropic Claude 3 Haiku foundation model and invoke it via the Amazon Bedrock Converse API.
Let's revisit what the Amazon Bedrock Converse API is for and why it is needed.
You might be wondering why there's a need for another API when Bedrock already supports invoking models for large language models (LLMs). The challenge that the Converse API aims to address is the varying parameters required to invoke different LLMs. It offers a consistent API that can call underlying Amazon Bedrock foundation models without requiring changes in your code. For example, your code can call Anthropic Haiku, Anthropic Sonnet, or Amazon Titan just by changing the model ID without needing any other modifications!
While the API specification provides a standardized set of inference parameters, it also allows for the inclusion of unique parameters when needed.
As of May 2024, the newly introduced Converse API does not support embedding or image generation models.
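The "just change the model ID" idea can be sketched in boto3. This is a minimal illustration, not the article's exact code; the region and model IDs are assumptions (check the Bedrock model catalog for the IDs available in your account):

```python
def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build the uniform Converse API request; only the modelId varies per model."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},
    }

def summarize(model_id: str, prompt: str) -> str:
    """Invoke any supported Bedrock text model through the same code path."""
    import boto3  # imported here so the request builder above stays testable offline
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.converse(**build_converse_request(model_id, prompt))
    return response["output"]["message"]["content"][0]["text"]

# Swapping models is just a different model ID -- no other code changes:
# summarize("anthropic.claude-3-haiku-20240307-v1:0", "Summarize: ...")
# summarize("amazon.titan-text-express-v1", "Summarize: ...")
```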
What are Amazon Bedrock Converse API guardrails?
Generative AI guardrails, provided by various generative AI solutions, are one of my favorite functional areas to read about, validate, and prototype! These guardrails transform a generative AI solution into a Responsible AI solution by ensuring that responses stay within predefined boundaries. Examples include abstracting Personally Identifiable Information (PII) or preventing an API or chatbot from providing information from restricted areas, such as account details, investment advice, or any other sensitive data defined by the organization.
Amazon Bedrock initially supported guardrails only when invoking foundation models directly. As of June 2024, similar support has been extended to the Amazon Bedrock Converse API.
I will implement these guardrails in the previously completed call center transcript summary project. In this use case, any Personally Identifiable Information will be masked in the Generative AI response returned to the end consumer.
Guardrail Policies
The Amazon Bedrock Guardrail feature allows you to configure various filters, providing responsible boundaries for the responses generated by your AI solution. These guardrails help ensure that the outputs are appropriate and align with your requirements and standards.
Content Filters
Content filters are available across six categories:
Hate
Insults
Sexual
Violence
Misconduct
Prompt Attack
Each filter can be set to None, Low, Medium, or High.
Denied Topics
You can specify topics that the API should not respond to!
Word Filter
You can specify words that you want the filter to act on before a response is provided!
Sensitive Information Filter
A filter to either block or mask Personally Identifiable Information (PII).
Amazon Bedrock also provides a way to configure the message returned to the user when the input or the response violates the configured guardrail policies. For example, if the sensitive information filter is configured to block requests containing an account number, you can provide a customized response letting the user know that the request cannot be processed because it contains a forbidden data element.
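As a sketch, a guardrail like the one described here can be defined with the boto3 `create_guardrail` call. The guardrail name, description, and blocked messages below are assumptions, not the article's actual values; the entity types and actions follow the Bedrock guardrail schema:

```python
# Guardrail configuration sketch: mask most PII, block bank account numbers.
# Name, description, and messaging are illustrative placeholders.
GUARDRAIL_CONFIG = {
    "name": "transcript-pii-guardrail",
    "description": "Masks or blocks PII in call center transcript summaries",
    "sensitiveInformationPolicyConfig": {
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "PHONE", "action": "ANONYMIZE"},
            {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "ANONYMIZE"},
            {"type": "US_BANK_ACCOUNT_NUMBER", "action": "BLOCK"},
        ]
    },
    # Customized messages returned when a request or response is blocked:
    "blockedInputMessaging": "Sorry, this request contains a forbidden data element.",
    "blockedOutputsMessaging": "Sorry, the response contains a forbidden data element.",
}

def create_guardrail() -> dict:
    """Create the guardrail in the current AWS account/region."""
    import boto3  # imported lazily so the config above can be inspected offline
    bedrock = boto3.client("bedrock", region_name="us-east-1")
    return bedrock.create_guardrail(**GUARDRAIL_CONFIG)
```

The `ANONYMIZE` action masks the matched entity in the response, while `BLOCK` rejects the request or response outright and returns the configured message instead.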
Let's review our use cases:
- A transcript is available covering the case resolution conversation between a customer and a support/call center team member.
- A call summary needs to be created based on this resolution/conversation transcript.
- An automated solution is required to create call summary.
- An automated solution will provide a repeatable way to create these call summary notes.
- Productivity increases, as the team members who usually document these notes can focus on other tasks.
- A guardrail should be configured so that PII is not displayed in the response.
I am generating my Lambda function using AWS SAM; however, a similar function can be created using the AWS Console. I like to use AWS SAM wherever possible, as it gives me the flexibility to test the function locally before deploying it to the AWS cloud.
Here is the architecture diagram for our use case.
Create a SAM template
I will create a SAM template for the Lambda function that contains the code to invoke the Bedrock Converse API along with the required parameters and a prompt. The Lambda function can be created without a SAM template; however, I prefer an Infrastructure as Code approach, since it allows for easy recreation of cloud resources. Here is the SAM template for the Lambda function.
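A minimal sketch of what such a SAM template might look like is shown below. The resource name, handler, runtime, path, and IAM policy statement are assumptions for illustration; adjust them to your project:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Call center transcript summary via Bedrock Converse API

Resources:
  TranscriptSummaryFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.12
      Timeout: 60
      Policies:
        - Statement:
            - Effect: Allow
              Action:
                - bedrock:InvokeModel
                - bedrock:ApplyGuardrail
              Resource: '*'
      Events:
        SummaryApi:
          Type: Api
          Properties:
            Path: /summary
            Method: post
```

The `Events` section is what wires the function to an API Gateway REST endpoint, giving us the API endpoint used later with Postman.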
Create a Lambda Function
The Lambda function serves as the core of this automated solution. It contains the code necessary to fulfill the business requirement of creating a summary of the call center transcript using the Amazon Bedrock Converse API. This Lambda function accepts a prompt, which is then forwarded to the Bedrock Converse API to generate a response using the Anthropic Claude 3 Haiku foundation model. Now, let's look at the code behind it.
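A sketch of such a handler is below. The model ID, guardrail identifier/version, and the JSON event shape are assumptions; substitute your own values:

```python
import json

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumption: current Haiku model ID
GUARDRAIL_ID = "your-guardrail-id"   # placeholder: your guardrail's identifier
GUARDRAIL_VERSION = "1"              # placeholder: your guardrail's version

def build_request(transcript: str) -> dict:
    """Assemble the Converse API request, attaching the guardrail."""
    prompt = (
        "Summarize the following call center transcript in a few sentences:\n\n"
        + transcript
    )
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 500, "temperature": 0.5},
        # The guardrail is applied to both the prompt and the model response.
        "guardrailConfig": {
            "guardrailIdentifier": GUARDRAIL_ID,
            "guardrailVersion": GUARDRAIL_VERSION,
        },
    }

def lambda_handler(event, context):
    import boto3  # lazy import keeps build_request unit-testable offline
    client = boto3.client("bedrock-runtime")
    body = json.loads(event["body"])  # assumption: transcript arrives as JSON
    response = client.converse(**build_request(body["transcript"]))
    summary = response["output"]["message"]["content"][0]["text"]
    return {"statusCode": 200, "body": json.dumps({"summary": summary})}
```

Because the guardrail is passed in `guardrailConfig`, no PII-handling logic lives in the function itself; masking and blocking happen inside Bedrock.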
Build function locally using AWS SAM
Next, build and validate the function using AWS SAM before deploying the Lambda function to the AWS cloud. A few SAM commands used are:
SAM Build
SAM local invoke
SAM deploy
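The three steps above map to the following CLI invocations (the function name and `event.json` file are assumptions matching a hypothetical template; adjust to your own resource names):

```shell
# Build the function and its dependencies into .aws-sam/build
sam build

# Invoke the function locally with a sample API Gateway event
# (event.json is a file you create containing a test payload)
sam local invoke TranscriptSummaryFunction --event event.json

# Deploy to the AWS cloud; --guided prompts for stack name, region, etc.
sam deploy --guided
```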
Bedrock Invoke Model Vs. Bedrock Converse API
Bedrock InvokeModel
Bedrock Converse API
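The practical difference between the two shows up in how a request is shaped: `InvokeModel` requires a provider-specific JSON body, while `Converse` uses one uniform message format. The request dictionaries below are a sketch of that contrast (model ID shown is an assumption):

```python
import json

PROMPT = "Summarize this call transcript."

# InvokeModel: the body must follow each provider's native schema.
# Anthropic models, for example, expect the Anthropic Messages format:
invoke_model_kwargs = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
    "body": json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 500,
        "messages": [{"role": "user", "content": PROMPT}],
    }),
}

# Converse: one request shape for every supported text model.
converse_kwargs = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
    "messages": [{"role": "user", "content": [{"text": PROMPT}]}],
    "inferenceConfig": {"maxTokens": 500},
}

# Either dict would be passed to the corresponding boto3 call:
# client.invoke_model(**invoke_model_kwargs)
# client.converse(**converse_kwargs)
```

Switching providers under `InvokeModel` means rewriting the body; under `Converse`, it means changing the model ID.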
Validate the GenAI Model response using a prompt
Prompt engineering is an essential component of any Generative AI solution. It is both art and science, as crafting an effective prompt is crucial for obtaining the desired response from the foundation model. Often, it requires multiple attempts and adjustments to the prompt to achieve the desired outcome from the Generative AI model.
Given that I'm deploying the solution to AWS API Gateway, I'll have an API endpoint post-deployment. I plan to utilize Postman for passing the prompt in the request and reviewing the response. Additionally, I can opt to post the response to an AWS S3 bucket for later review.
I am using Postman to pass the transcript file in the prompt.
This transcript file contains a conversation between a call center employee (John) and a customer (Girish) about resetting a password for a locked account.
John: Hello, thank you for calling technical support. My name is John and I will be your technical support representative. Can I have your account number, please?
Girish: Yes, my account number is 213-456-8790.
John: Thank you. I see that you have locked your account due to multiple failed attempts to enter your password. To reset your password, I will need to ask you a few security questions. Can you please provide me with the answers to your security questions?
Girish: Sure, my security questions are: What is your favorite color? and What is your favorite food?
John: Please can you provide your zip code?
Girish: Yes, my zip code is 43215.
John: one final question, Please confirm your email address.
Girish: my email is gbtest@gmailtest.com.
John: Great, thank you. I will now reset your password and send you an email with instructions on how to log in to your account. Please check your email in a few minutes.
Girish: Thank you so much for your help.
John: You're welcome. Is there anything else I can assist you with today?
Girish: No, that's all for now. Thank you again for your help.
John: You're welcome. Have a great day!
Review the response returned by Generative AI Foundation Model
As you can note in the response above, the GenAI response does not include the PII information.
Let's look at the response once guardrail policy is updated to block the PII data.
Response with blocked data
Above is the response when the policy is updated to block requests in which the PII contains an account number.
With these steps, a serverless GenAI solution that creates call center transcript summaries via a REST API, Lambda, and the Amazon Bedrock Converse API has been successfully completed. Python/Boto3 was used to invoke the Bedrock API with Anthropic Claude 3 Haiku.
As demonstrated, with the Converse API, a guardrail was used to implement a policy that controls the GenAI response and masks or blocks PII data!
A guardrail was created to remove PII from the response, and the guardrail configuration was then updated to validate that an account number, when configured for blocking, is indeed blocked.
Thanks for reading!
Click here to get to the YouTube video for this solution.
https://www.youtube.com/watch?v=NVsX30uo_x4
Girish Bhatia
AWS Certified Solution Architect & Developer Associate
Cloud Technology Enthusiast