In this blog, I want to share some tips and tricks for getting the responses you expect from LLMs during chats. These tricks fall under what is generally called Prompt Engineering. Although I am no expert in the field, over time I have found strategies that have really helped me get the responses I expect from LLMs. I learned the first trick at the Mindstone Toronto AI Meetup and the second at a meetup at Microsoft Reactor.
1. Encourage the LLM to Ask Questions
One common tendency of LLMs is to answer a prompt even when they have to assume missing data, requirements, or context. As users, we sometimes forget to include all the necessary details, which can lead to unexpected results and frustration. Fortunately, there is a workaround.
When I send my prompt, I always include the following line at the end:
“Please do not assume any requirements. Let me know if you have any questions to clear any confusion.”
Adding this small phrase works wonders. The LLM often responds by asking thought-provoking questions, helping me realize that I might have missed providing essential requirements. So, consider adding this phrase to your prompts.
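If you call an LLM programmatically rather than through a chat window, the same trick applies: append the clarifying phrase to every prompt before sending it. Here is a minimal Python sketch; the helper name `with_clarifying_question` is my own illustration, not part of any library.

```python
# The phrase from this post, kept as a reusable constant.
CLARIFY_SUFFIX = (
    "Please do not assume any requirements. "
    "Let me know if you have any questions to clear any confusion."
)

def with_clarifying_question(prompt: str) -> str:
    """Append the 'ask me questions' phrase to the end of a prompt."""
    return f"{prompt.rstrip()}\n\n{CLARIFY_SUFFIX}"

# Example: wrap any prompt before sending it to your LLM of choice.
prompt = with_clarifying_question("Write a Dockerfile for a Flask app.")
print(prompt)
```

Because the phrase is appended automatically, you never forget it, and every response starts with the model's clarifying questions instead of its assumptions.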
2. Include the 3 Rs in Your Prompts
The second trick focuses on what should ideally be included in a prompt. According to some experts, every prompt should encompass the 3 Rs: Role, Responsibility, and Requirements.
- Role: Specify the role you want the LLM to assume to provide a perfect response.
- Responsibility: Include the responsibilities that the LLM needs to fulfill while assuming that specific role.
- Requirements: Clearly state what you require from the response, including its expected format.
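The 3 Rs can be treated as a simple fill-in template. Below is a hedged Python sketch of such a template; the `build_prompt` function and its parameter names are my own, not a standard API, and the closing phrase combines this trick with the first one.

```python
def build_prompt(role: str, responsibility: str, requirements: str) -> str:
    """Assemble a prompt covering the 3 Rs: Role, Responsibility, Requirements."""
    return (
        f"Please act as a {role}.\n"      # Role: who the LLM should be
        f"{responsibility}\n"             # Responsibility: what it must do
        f"{requirements}\n"               # Requirements: what the response must look like
        "Please do not assume any requirements. "
        "Let me know if you have any questions to clear any confusion."
    )

print(build_prompt(
    "Technical Writer",
    "Go through the existing documentation and study its format.",
    "Return a markdown file that documents the new APIs in the same format.",
))
```

Keeping the three parts as separate arguments makes it easy to check that none of them was left out before the prompt is sent.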
Example
Let's assume I want the LLM to help me with some technical documentation. Here’s the prompt I would use to achieve the expected response:
Hello there. Please act as a Technical Writer.
Go through the existing documentation of the project.
[Existing documentation]
Please study the format in which I write the project documentation. Following are some new APIs I have developed.
[New APIs]
Add the documentation for these new APIs into the existing documentation. Keep the format the same as the existing documentation.
Return me a file in markdown format that includes both the existing documentation and the documentation for the new APIs.
Please do not assume any requirements. Let me know if you have any questions to clear any confusion.
Final Words
This is how I prompt LLMs, and so far, it has been effective. There are still more tricks in Prompt Engineering; as the name suggests, it is a field with considerable depth. What prompt engineering techniques have you found most effective when working with LLMs? Share your tips in the comments!
Acknowledgment
I would like to acknowledge that I used ChatGPT to help structure this blog and simplify its content.