Hey folks, I am preparing a short talk about building a chatbot, so I want to contribute on this topic here.
I faced the problem that my 12-year-old son went straight to ChatGPT every time he wanted some information. To be clear, that's fine with me, but my concern is that he never learned to challenge the generated answers, because not every answer is correct.
So, I decided to build a custom chatbot that points him to suitable search topics so he can find the answers himself. How did I realize this? The answer is "prompt engineering."
Introduction to Prompt Engineering
So, what is prompt engineering?
Prompt engineering is an essential skill in the field of artificial intelligence (AI). It involves crafting specific input prompts to guide AI models, like those used in Microsoft's M365 Copilot and Azure AI, to generate accurate and relevant responses. By understanding and utilizing prompt engineering techniques, users can significantly enhance the performance and effectiveness of AI tools in various applications.
So, with prompt engineering, you can control the chatbot's behavior.
The Basics of Prompt Engineering
At its core, prompt engineering is about communication. It’s designing and refining the prompts (the input given to the AI) to achieve the desired output. This can involve specifying the prompt's format, structure, and content to ensure the AI understands and responds appropriately.
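Under the hood, a chat prompt is usually just a list of role-tagged messages: a "system" message that steers the model's behavior, and a "user" message that carries the actual question. Here is a minimal Python sketch of that structure (the helper name is my own, not part of any SDK):

```python
# A chat prompt is typically a list of role-tagged messages.
# The "system" message steers the model; the "user" message is the question.

def build_prompt(system_message, user_question):
    """Assemble the message list that is sent to a chat model."""
    return [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_question},
    ]

prompt = build_prompt(
    "You are an AI assistant that helps people find information.",
    "Why is the sky blue?",
)
print(prompt)
```

Everything we do in the rest of this post is essentially about refining that system message.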
Crafting Effective Prompts in Azure AI
You can use these strategies to create effective prompts:
- Be Specific: Provide clear and concise instructions. For example, instead of asking, "What happened in the last meeting?" you could ask, "Summarize the key decisions made in the last project management meeting."
- Use Contextual Information: Include relevant context to guide the AI. For example, "Based on the Q2 financial report, summarize the main factors contributing to the increase in revenue."
- Define the Output Format: Specify the desired format of the output. For example, "List the key points from the meeting in bullet points."
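Put together, the three strategies can be sketched as a small helper that assembles a prompt string. This is only an illustration of the idea; the function and its parameters are my own invention, not an official API:

```python
def craft_prompt(task, context="", output_format=""):
    """Combine the three strategies into one prompt string."""
    parts = [task]  # Be specific: a clear, concise instruction
    if context:
        parts.append("Context: " + context)  # Use contextual information
    if output_format:
        parts.append("Output format: " + output_format)  # Define the output format
    return "\n".join(parts)

prompt = craft_prompt(
    "Summarize the key decisions made in the last project management meeting.",
    context="the Q2 financial report",
    output_format="bullet points",
)
print(prompt)
```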
Examples of Prompt Engineering in Action
In Azure OpenAI Studio, you have complete control over each configuration. When you open the "Chat-Designer", you will see the following:
Now, it's time to change this prompt. Let's use an example where the system must answer every question in verses. After I adjusted this, I asked the same question again, and the output changed.
Now you see that you have control over the behavior of the answer generation. You can produce all kinds of answers, even ones that are not useful but funny, like answers in boolean representation:
You see, all of this is possible.
Generate the prompt for my child
So, back to my case: I wanted to create a prompt that does not simply give the answer; it must point to an external resource. To meet this requirement, I must tell the assistant that it must not answer the question itself. So, my system message will be:
"You are an AI assistant that helps people find information. You won't give the answer directly."
The result
What? But there is the answer. Yes, right, because I did not forbid it from giving the answer. So, I modify the system message:
"You are an AI assistant who will not give the answer; instead, you will advise where to search and how to search for the answer."
The result:
Way better. It now gives a glimpse of the answer, but not too much, and it tells you where to find more information. But let's extend the answer to also help with finding keywords for search engines. So, I will add this command to the system message:
"You are an AI assistant that will not give the answer; instead, you will advise where to search and how to search for the answer. Also, advise on search keywords for search engines."
The result
Now it's fantastic. But hey, the answers still address a generic human. So, let's add some context about the audience and bring in some personality. I adjusted the system message to this:
"You are an older brother to a 12-year-old boy called James. Use greetings like "Hey boy" or "Hey pal", be creative, and thank him for using your help. Answer him with elementary words and use paragraphs to separate subtopics. Also, try to generate some images. Describe some details, but not too much, only enough that he gains interest in the topic. You will not give the answer. Instead, you will advise where to search and how to find the answer. Also, advise on search keywords for search engines."
The result
So, by giving some context about the user, the answers gain more personality, and you can adjust the answer generation. But hey, you know a wall of text is not very friendly to a 12-year-old boy; it is boring. So let's insert a call to action into the answers, like links. I adjusted the system message to include some links:
"You are an older brother to a 12-year-old boy called James. Use greetings like "Hey boy" or "Hey pal", be creative, and thank him for using your help. Answer him with elementary words and use paragraphs to separate subtopics. Also, try to generate some images. Describe some details, but not too much, only enough that he gains interest in the topic. You will not give the answer. Instead, you will advise where to search and how to find the answer. Also, advise on search keywords for search engines. When you give advice for searching on the web, please generate some links for it."
This will be the final result:
Integrating this into a web application
So, yes, I could use the AI designer to serve the chatbot, but my son won't open this tool to ask his questions. Instead, I will deploy it as a web application.
This is very easy, because I can create a web application right out of this AI tool:
In the following menu, you can create a new web application; in my case, it uses these settings. You will notice that you can use a free tier, but pay attention: when you check the chat history checkbox, you will be charged anyway, because the history is stored in a Cosmos DB instance, and there is no free tier for that. So, my settings look like this:
After clicking on provision, these tasks will start in the background, so you can grab some coffee or tea. By the way, you can buy me a Ko-fi to support me.
The Web Application
So when you provision the web application into Azure, you will get a GPT-like chat interface. Basically, it looks like this:
The base project is hosted in this GitHub project.
However, the most critical thing is that you bind your particular context, which was defined before, to the app. So when we ask the same question in the chat prompt, we receive the following:
It's the same answer as in the playground. Now it's ready to use: we built our own chat prompt, I can host it for my son, and he can use it for his research.
Yes, I know, he will not use it regularly, but hey: I am an IT guy, so the internal DNS server is in my own hands, if you know what I mean 😏.
Conclusion
So, in this post, I showed you a very, very, veeeery simple scenario for creating your own prompt that you can host within Azure. I used it to build a prompt that helps my son develop better research skills on the internet.
Yes, he wants to go to ChatGPT directly, but hey, I am the master of the local DNS server 😄.
So try it out and build your own custom prompt. It's straightforward.