Hello tech enthusiasts, here is another blog from me, this time on building a Responsible AI Career Counselor Copilot with the Azure Content Safety service.
In today’s rapidly evolving digital landscape, the integration of artificial intelligence (AI) into various sectors is transforming the way we live and work. One of the most promising applications of AI is in career counseling, where intelligent systems can provide personalized guidance and support to individuals navigating their professional journeys. However, with great power comes great responsibility. Ensuring that AI-driven career counseling tools are ethical, unbiased, and safe is paramount.
By leveraging the Azure Content Safety Service, we can build a Responsible AI Career Counselor Copilot that not only offers valuable career advice but also adheres to the highest standards of content safety and ethical AI practices. This blog explores how Azure’s robust content safety features can be employed to create an AI career counselor that is both effective and responsible, ensuring a positive and secure user experience.
Objectives:
In this blog, we will complete the following tasks:
- Create an Azure Content Safety resource.
- Create Copilot by using Microsoft Copilot Studio.
- Enable Generative AI
- Create Topics
- Test and publish in a Demo website.
Prerequisites:
- Access to Microsoft Azure.
- Access to Microsoft Copilot Studio.
- Basic understanding of Microsoft Power Platform.
- Experience in administering solutions in Microsoft Azure is preferred.
Here’s a step-by-step guide to help you build a responsible AI career counselor copilot.
Task 1: Create an Azure Content Safety resource.
In this task, you create an Azure Content Safety resource.
- Head over to the Microsoft Azure portal.
- In the Azure global search, look for the Content Safety service and select it from the list.
- Select + Create to create the Content Safety resource.
- Enter the following details and select Review + create.
- Once the resource is created, select Go to resource, expand Resource Management, and select Keys and Endpoint. Copy one of the keys and the endpoint URL of the resource and paste them into a notepad for later use.
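For reference, this key and endpoint are what the copilot will later use to call the Content Safety text analysis API. A minimal sketch of that request is shown below; the API version is an assumption on my part and may differ from the latest one, so check the Azure documentation if the call fails:

POST https://<your-resource-name>.cognitiveservices.azure.com/contentsafety/text:analyze?api-version=2023-10-01
Content-Type: application/json
Ocp-Apim-Subscription-Key: <your-key>

{
  "text": "Sample user question to analyze"
}

The response reports a severity score for each of the Hate, SelfHarm, Sexual, and Violence categories, which is exactly what the copilot checks later before answering.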
Task 2: Create Copilot by using Microsoft Copilot Studio.
Go to Microsoft Copilot Studio, select Agents, and then select + New agent.
Enter the Name, Description, Instructions, and Knowledge for the agent, then select Create.
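For illustration only, here is the kind of text you might enter; every value below is a made-up example, not something prescribed by Copilot Studio:

Name: Career Counselor Copilot
Description: An AI assistant that provides personalized career guidance.
Instructions: You are a responsible career counselor. Give helpful, unbiased career advice, and politely decline questions that are harmful or unrelated to careers.
Knowledge: A public career guidance website or document of your choice.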
Task 3: Enable Generative AI
- Select Settings from the top right corner.
- Select Generative AI, then choose the Generative (preview) option, described as "Use generative AI to respond with the best combination of actions, topics, and knowledge", and select Save.
Note: You can also enable generative AI by using the Enable toggle in the Overview section.
Task 4: Create Topics
- Close the window and select Topics, then select + Add a topic and choose Create from description with Copilot.
- Enter Name your topic and Create a topic to ...., then select Create.
- To add a variable to the career topic, select + Add node, then Variable management, then Set a variable value.
- Under Custom, click Create new.
- Open the variable properties and enter the variable name as varUserQtn. Under Usage, select Topic (limited scope).
- For the To value field, under the System tab, select Activity.Text.
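If you prefer typing a formula instead of using the picker, the same value can, to the best of my knowledge, be referenced with the system variable below:

System.Activity.Text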
Add the following step to utilize the Azure Content Safety API to validate the user query and check for any dangerous content:
- Select Advanced and then select Send HTTP request.
After adding the HTTP request node, enter the following details:
URL: Enter the Content Safety endpoint URL that you copied earlier, with the text analysis path appended (see the sample request shown in Task 1).
Method: POST.
- Click Edit under Headers and Body, then click + Add.
- Enter the Key as Content-Type and the Value as application/json, then click + Add.
- Set the Key as Ocp-Apim-Subscription-Key and set the Value to the Content Safety key that you copied from Azure earlier.
- Scroll down, select JSON content under the Body section, and choose Edit formula. Enter the following as the body:
{
  Text: Topic.varUserQtn
}
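When the node runs, this record is serialized into a JSON body roughly like the one below. The public Content Safety REST reference documents the property name as lowercase "text", so if the call is rejected, try lowercasing the field name in the formula (an assumption on my part, not something verified against this exact setup). The question shown is just a made-up example:

{
  "text": "What careers suit someone with a data analysis background?"
}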
- Set Response data to From sample data, click Get schema from sample JSON, and enter the following JSON:
{
  "dangerouscontent": [],
  "categories": [
    { "category": "Hate", "severity": 2 },
    { "category": "SelfHarm", "severity": 0 },
    { "category": "Sexual", "severity": 0 },
    { "category": "Violence", "severity": 0 }
  ]
}
Then select Confirm.
- Create a new variable to store the Content Safety API output: under Save response as, click Select a variable, choose Create new variable, and name it varQtnOutput.
- Create a variable named varSeverity and add the following formula under the Formula tab; it pulls the severity reported for the Hate category out of the API response:
First(Filter(Topic.varQtnOutput.categories, category = "Hate")).severity
Then click Insert.
- Create a variable named varSeverityself and add the following formula under the Formula tab:
First(Filter(Topic.varQtnOutput.categories, category = "SelfHarm")).severity
- Create a variable named varSeveritysexual and add the following formula under the Formula tab:
First(Filter(Topic.varQtnOutput.categories, category = "Sexual")).severity
- Create another variable named varSeverityviolence and add the following formula:
First(Filter(Topic.varQtnOutput.categories, category = "Violence")).severity
- To find out whether there is a content safety problem of any kind, create a new variable called varSafe and enter the following formula:
If(
Topic.varSeverity = 0 &&
Topic.varSeverityself = 0 &&
Topic.varSeveritysexual = 0 &&
Topic.varSeverityviolence = 0,
"Safe",
"Unsafe"
)
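The formula above treats any non-zero severity as unsafe. If you would rather allow mildly flagged content through and only block higher severities, a sketch of an alternative is shown below; the threshold of 2 is an assumption you can tune to your own policy:

If(
    Max(
        Topic.varSeverity,
        Topic.varSeverityself,
        Topic.varSeveritysexual,
        Topic.varSeverityviolence
    ) >= 2, // block only when any category reaches severity 2 or higher
    "Unsafe",
    "Safe"
)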
- Add a Condition node to check the varSafe variable.
- In the condition, select the variable varSafe, choose is equal to, and enter Safe.
- Add the generative answers node to the positive branch by selecting Generative Answers from the Advanced section.
- Select the variable varUserQtn as the input, click Edit under Data sources, and then click Add knowledge.
- In the negative flow branch, add a message block.
- Then enter the following expression, which checks the four severity variables and shows an appropriate message to the user:
If(
Topic.varSeverity > 0,
"There is hate speech in the question. Kindly rephrase your query.",
""
) &
If(
Topic.varSeverityself > 0,
"Self-harm is indicated by the question. Please get help right away or get in touch with an expert.",
""
) &
If(
Topic.varSeveritysexual > 0,
"There is improper sexual content in the query. Kindly reword your query.",
""
) &
If(
Topic.varSeverityviolence > 0,
"There are references to violence in the question. Kindly rephrase your query.",
""
)
With that, we have finished configuring the copilot and ensured that content safety is checked before generating contextual responses from the career knowledge sources.
Task 5: Test and publish in a demo website.
Go to Settings, and under the Security tab select No authentication, then select Save and close the window.
Select the Channels tab, then select Demo website and select Save.
Select Publish.
You can click on the demo website link and test your copilot agent.
We have successfully completed a small example of creating a copilot by using Microsoft Copilot Studio and adding an Azure Content Safety resource.
Hope you enjoyed the session.
Please leave a comment below if you have any further questions.
Happy Sharing !!!
Keep Learning | Spread Knowledge | Stay blessed |