Who doesn’t appreciate a solid FAQ section? Those little gems of information help users find answers quickly without flooding your inbox with questions. But what if I told you we could automate the FAQ creation process? Even better, we can add some AI flair, link it all to a sleek Next.js app, and keep everything in check with rate limiting using Unkey. Plus, we’ll make sure the API doesn’t munch through your monthly token limit!
And it's live here and open source here (don't forget to drop a ⭐️ ;)).
Sound good? Let’s dive in.
The Challenge: Automating FAQs (While Keeping Costs Low)
Imagine you’ve just launched a cool new app, and users are firing off questions like, "What’s this app all about?" or "How does it work?" You can sit down and write answers to all these questions, but what if the app could just generate FAQs for you? Sounds good, right?
Enter OpenAI's GPT-4o (or the budget-friendly gpt-4o-mini, which is what we'll actually use) to create intelligent, auto-generated FAQs. But there's a catch: OpenAI's API isn't free, and unless you're sitting on piles of money, you're working with a limited budget. You need a smart way to manage how many API requests your users can make.
This is where Unkey comes in. It's a simple API Management tool that provides rate-limiting to prevent any single user from overwhelming your system (or your wallet). Let's see how I built it all!
Step 1: Setting Up Next.js and the OpenAI API
First things first: Next.js is our go-to framework for this project. It's the ship-faster's favorite (yeah, I just made that word up :)).
Here's a quick breakdown of how we handle the AI magic. In our API route, we send a request to OpenAI's gpt-4o-mini, asking it to generate a list of FAQs based on a topic.
import { NextResponse, NextRequest } from "next/server";
import rateLimit from "@/lib/unkey";
import openai from "@/lib/openai";
import { parseFaqResponse } from "@/lib/responseParser";

export async function POST(request: NextRequest): Promise<NextResponse> {
  const { topic } = await request.json();

  // Make sure the 'topic' parameter is provided
  if (!topic || typeof topic !== "string") {
    return NextResponse.json(
      { error: "Missing or invalid 'topic' parameter" },
      { status: 400 }
    );
  }

  // Get the user's IP address
  const ip = request.headers.get("x-forwarded-for") || "anonymous";

  // Verify the rate limit (each FAQ generation costs 2 units)
  const rateLimitResponse = await rateLimit.limit(ip, { cost: 2 });
  if (!rateLimitResponse.success) {
    // Return a 429 status response if the limit is exceeded
    return NextResponse.json(
      { message: "API rate limit exceeded. Try again later" },
      { status: 429 }
    );
  }

  try {
    // Generate the FAQs using gpt-4o-mini
    const response = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content: "PROMPT", // placeholder for the actual system prompt
        },
        {
          role: "user",
          content: `Here is the topic: "${topic}"`,
        },
      ],
    });

    const answer = parseFaqResponse(response);
    return NextResponse.json(answer, { status: 200 });
  } catch (error) {
    return NextResponse.json({ error: "Error generating FAQ" }, { status: 500 });
  }
}
In this API endpoint, we take in the topic and the user's IP address (to check rate limits). We send the topic to gpt-4o-mini, ask it for some FAQs, and then return a beautifully formatted JSON response. And if the user goes crazy with requests? Unkey steps in and says, "Whoa there, slow down!"
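Two of those imports point at small helper files that aren't shown above: the shared OpenAI client and the response parser. Here's a minimal sketch of what they might look like; the actual files in the repo may differ, and the JSON-parsing fallback in particular is my assumption:

// lib/openai.ts, a minimal sketch: one shared OpenAI client for the whole app
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY!, // assumes the key is set in your env
});

export default openai;

And the parser, assuming the system prompt asks the model to answer in JSON:

// lib/responseParser.ts, a hypothetical parser: pull the model's text out of
// the completion and try to parse it as JSON, falling back to the raw text
import type OpenAI from "openai";

export function parseFaqResponse(response: OpenAI.Chat.Completions.ChatCompletion) {
  const content = response.choices[0]?.message?.content ?? "";
  try {
    return JSON.parse(content);
  } catch {
    return { answer: content };
  }
}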
Step 2: Rate Limiting with Unkey – Because No One Likes an Overachiever
You might think rate limiting is boring, but it’s a lifesaver, especially when you’re using paid services like OpenAI. Unkey is simple and handles this gracefully.
Here's what makes Unkey awesome:
- It's dead simple to use (seriously, you don’t need to know how to center a div).
- Friendly pricing – Unkey doesn’t break the bank, and it keeps your OpenAI API budget in check.
In our setup, each user gets a budget of 2 units per 10 seconds, and since each FAQ generation costs 2 units (that's the cost: 2 you saw in the route), that works out to one generation per window. Go over it, and Unkey jumps in and throws a friendly "Rate limit exceeded" message until the window resets. This keeps your API from getting overwhelmed and prevents your token count from going over budget. Here is what the unkey.ts file looks like:
import { Ratelimit } from "@unkey/ratelimit";

const unkey = new Ratelimit({
  rootKey: process.env.UNKEY_ROOT_KEY!,
  duration: "10s",
  limit: 2,
  namespace: "askiq",
  async: true,
});

export default unkey;
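One thing worth knowing: limit() resolves to more than just a success flag, which is handy if you ever want to send proper rate-limit headers back to the client. A quick sketch (field names as documented for @unkey/ratelimit; double-check them against your installed version):

import unkey from "@/lib/unkey";

// Inspect the full rate-limit response, not just `success`
async function checkLimit(ip: string) {
  const res = await unkey.limit(ip);
  // res.success:   whether this request is allowed through
  // res.remaining: how many units are left in the current window
  // res.reset:     when the window resets (Unix timestamp in ms)
  if (!res.success) {
    console.log(`Blocked; retry after ${new Date(res.reset).toISOString()}`);
  }
  return res;
}

The async: true flag is also a nice touch: per the Unkey docs it resolves checks optimistically for faster responses, trading a sliver of strictness for lower latency on the hot path.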
Step 3: Error Handling – Because Things Go Wrong Sometimes
Building an API isn’t just about making things work—it's about making sure things don’t break when users do the unexpected.
Here are some of the things we check:
- Missing topic? We send a 400 error.
- Rate limit exceeded? We send a 429 error.
- Anything else? We send a 500 error and say, "Oops, something went wrong."
It’s all about keeping things smooth and user-friendly.
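From the caller's side, handling those statuses is straightforward. Here's a hedged sketch of a client call; it assumes the route is mounted at /api/faq, so adjust the path to wherever yours actually lives:

// Calling the FAQ endpoint from the frontend (the /api/faq path is an assumption)
async function generateFaqs(topic: string) {
  const res = await fetch("/api/faq", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ topic }),
  });

  if (res.status === 429) {
    // Rate limited by Unkey: back off and retry once the window resets
    throw new Error("Slow down! Try again in a few seconds.");
  }
  if (!res.ok) {
    // 400 (bad input) or 500 (generation failed)
    throw new Error("Request failed");
  }
  return res.json();
}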
The Result: A Developer-Friendly FAQ Generator
At the end of the day, what do we have? ASKIQ: a sleek, rate-limited FAQ generator that developers can use to get auto-generated FAQs on any topic they want, powered by OpenAI's gpt-4o-mini.
So there you have it: a slick FAQ generator that uses AI to save you time, kept in check by an efficient rate-limiting system. Want to keep your own API budget in check? Give Unkey a spin!
Now, off you go to build your own open-source project, or take this one to the next level (and you might as well leave a ⭐️ while you're there). Happy coding!