Artificial intelligence (AI) is rapidly becoming one of the most promising technologies, particularly with the introduction of generative AI chatbots. OpenAI launched ChatGPT, its first AI chatbot, in November 2022, and within a couple of months it became the fastest-growing app in history. Who would have thought that one of the most transformative technologies introduced in 2022 would be a simple text box where you chat with a computer program that can answer almost any question you ask it? Many industries have since shifted their strategies to adopt more AI technologies in their core services and products.
It can be difficult to wrap your head around the rapid developments happening in the AI space, and many questions arise about how best to use AI to improve your services. In this blog post, we'll explore the building blocks of a simple Node.js application, written in TypeScript, that chats with ChatGPT. It will be a command-line interface for interacting with the chatbot.
What you will need to get started:
- Node.js version 18 or newer
- IDE of choice (e.g. Visual Studio Code)
1. Retrieve your API key from OpenAI
In order to get started, you will need to create an API key to use ChatGPT from your Node.js application. Navigate to https://platform.openai.com. If you’ve used ChatGPT before, you will have already created an OpenAI account and can log in immediately. If not, create your account first to get started.
Once you’re signed in, click your profile icon in the top-right corner to open a dropdown menu and select the option “View API keys”.
A settings page will open where you can click a button to generate a new secret key. Clicking this button triggers a modal that displays your new API key. Make sure to copy this value and store it in a safe place; you will need it for the Node.js application.
Are there costs for using the API?
Generating an API key, which is required to access the OpenAI API, is free. As of this writing, OpenAI provides users with a $5 credit that can be used to test the API without incurring any extra costs. If you wish to continue using the API after your free credits have been depleted, you will need to enter your billing information. Keep in mind that OpenAI's pricing is based on usage, so make sure your usage stays within budget if you decide to move to a paid plan.
2. Set up a project with the required dependencies
For our Node.js application, we’re going to create a directory to get started. We’re going to call this directory `chatgpt-typescript`:

```bash
mkdir chatgpt-typescript
```
Now we can navigate into the newly created directory, create the `src` directory to store our source code, and initialize an npm project:

```bash
cd chatgpt-typescript
mkdir src
npm init -y
```
The `-y` flag means that we accept the defaults when creating the npm project.
Now we need to install the production and development dependencies of our project:
```bash
npm install openai dotenv chalk
npm install -D typescript @types/node ts-node
```
The `openai` dependency is a package that provides wrapper methods for accessing the OpenAI API directly. `dotenv` is a package that we use to easily load environment variables. Finally, `chalk` is a package that we use to add colored styling to the terminal output.
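After installing, the dependencies in your `package.json` should look roughly like this (the version numbers below are illustrative and will differ on your machine). Note that the code in this post uses the 3.x version of the `openai` package, with its `Configuration` and `OpenAIApi` classes; if npm installs a newer major version for you, you can pin it with `npm install openai@3`:

```json
{
  "dependencies": {
    "chalk": "^5.2.0",
    "dotenv": "^16.0.3",
    "openai": "^3.2.1"
  },
  "devDependencies": {
    "@types/node": "^18.15.11",
    "ts-node": "^10.9.1",
    "typescript": "^5.0.4"
  }
}
```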
3. Set up the TypeScript compiler
To get started with TypeScript in our project, we need to initialize a TypeScript configuration file. We can do this with the following command:
```bash
npx tsc --init
```
The TypeScript compiler will create a `tsconfig.json` file with some default values. Replace the contents of that file with the following:
```json
{
  "compilerOptions": {
    "target": "es2017",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true,
    "rootDir": "./src",
    "outDir": "./build"
  }
}
```
This is the configuration we’ll be using for this blog post. We’ve specified that we will be using modern ES modules and that our source code lives inside `./src`.
Inside our TypeScript files, we want to be able to use the `import` syntax. To do that, we need to tell Node.js to treat our files as ES modules. We are also going to add a build script and a start script, which will be useful when we want to play with our command-line interface. Add the following lines to your `package.json` file:
```json
{
  ...
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "npm run build && node build/index.js"
  },
  ...
}
```
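As a side note, the `ts-node` dev dependency we installed isn’t strictly needed for the rest of this post, but it can run the TypeScript source directly while you’re developing. If you’d like that, one possible addition is a `dev` script; this is a sketch assuming ts-node 10.x with its ESM support enabled:

```json
"scripts": {
  "build": "tsc",
  "start": "npm run build && node build/index.js",
  "dev": "ts-node --esm src/index.ts"
}
```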
4. Add your API key to the .env file
The `dotenv` dependency that we installed allows Node.js to load environment variables from a `.env` file into the global `process.env` object. The API key that we generated needs to be available to our Node.js application. We don’t add the API key directly to our TypeScript file; instead, we place it inside the `.env` file.
Create a `.env` file inside your root directory and add the following line to it:

```
OPENAI_API_KEY=<<replace-with-your-openai-api-key>>
```
Make sure to replace the placeholder with the contents of your API key.
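Because this key is a secret, it’s also a good idea to keep the `.env` file out of version control. If you’re using Git, a minimal `.gitignore` in the project root could look like this (the `node_modules` and `build` entries are extra housekeeping, not required by the tutorial):

```
.env
node_modules
build
```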
5. Write your first Chatbot in TypeScript
Now it’s time to write our code to make an API call. Create an `index.ts` file in the `src` directory.
The first thing that we’re going to do is add the following import lines for the required packages:
```typescript
import { Configuration, OpenAIApi, ChatCompletionRequestMessage } from 'openai';
import * as readline from 'readline';
import * as dotenv from 'dotenv';
import chalk from 'chalk';
```
The next step is to load in the OpenAI API key. We need this key to make API calls to OpenAI to interact with ChatGPT.
```typescript
// Load OpenAI API key from environment variable
dotenv.config();

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
```
We need to initialize two objects. The first is a `messages` array to store the history of our interaction with the chatbot; this history is what allows the chatbot to take your previous messages into account when generating a response. The second is the `userInterface` object, which we use to interact with the command-line interface.
```typescript
// Initialize messages array
const messages: ChatCompletionRequestMessage[] = [];

// Initialize readline interface
const userInterface = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});
```
Now that we have initialized the `userInterface` object, we can set the prompt that the user sees when their input is required with `.setPrompt()`. After that, we call the `.prompt()` method so that the interface waits for the user’s input.
```typescript
// Set prompt for user
userInterface.setPrompt(`\n${chalk.blue('Send a message:')}\n`);
userInterface.prompt();
```
When the `Send a message:` prompt appears, the user can enter a message for the chatbot. We want the chatbot to receive this message when the user presses Enter. We can do this with the following lines of code:
```typescript
userInterface.on('line', async (input) => {
  // Create request message and add it to messages array
  const requestMessage: ChatCompletionRequestMessage = {
    role: 'user',
    content: input,
  };
  messages.push(requestMessage);

  // Call OpenAI API to generate response
  const completion = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: messages,
  });

  // Display response message to user
  const responseMessage = completion.data.choices[0].message;
  if (responseMessage) {
    console.log(chalk.green(responseMessage.content));
    messages.push({
      role: responseMessage.role,
      content: responseMessage.content,
    });
  }

  // Prompt user for next message
  userInterface.prompt();
});
```
When the user presses Enter in the command-line interface, the program triggers the `line` event and receives the user’s input via the `input` variable. The program then creates a `ChatCompletionRequestMessage` to store the input and adds it to the `messages` array. This message’s `role` property is set to `user` to indicate that it represents the user’s input. When ChatGPT generates a response, the program creates another message with the `role` property set to `assistant` to indicate that it represents the program’s response.
Finally, the program makes an API call to OpenAI to generate a response based on the user’s input. This response is added to the `messages` array and displayed in the command-line interface.
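One thing the handler above doesn’t do is handle failures: the API call goes over the network, so it can throw (for example, on an invalid key or a rate limit). As an optional hardening step, you could rewrite the handler with a try/catch; a minimal sketch, assuming you want the CLI to keep prompting after an error:

```typescript
// Optional variant of the 'line' handler with basic error handling.
userInterface.on('line', async (input) => {
  messages.push({ role: 'user', content: input });

  try {
    // Call OpenAI API to generate a response
    const completion = await openai.createChatCompletion({
      model: 'gpt-3.5-turbo',
      messages: messages,
    });

    const responseMessage = completion.data.choices[0].message;
    if (responseMessage) {
      console.log(chalk.green(responseMessage.content));
      messages.push({
        role: responseMessage.role,
        content: responseMessage.content,
      });
    }
  } catch (error) {
    // Log the failure and keep the interface alive so the user can try again
    console.error(chalk.red(`The request failed: ${error}`));
  }

  // Prompt user for the next message
  userInterface.prompt();
});
```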
Optionally, we can display an exit message when the user quits the program (for example, with Ctrl+C or Ctrl+D, which closes the readline interface):
```typescript
// Handle program exit
userInterface.on('close', () => {
  console.log(chalk.blue('Thank you for using this Demo'));
});
```
The complete code with all the required interactions should look like this:
```typescript
import { Configuration, OpenAIApi, ChatCompletionRequestMessage } from 'openai';
import * as readline from 'readline';
import * as dotenv from 'dotenv';
import chalk from 'chalk';

// Load OpenAI API key from environment variable
dotenv.config();

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

// Initialize messages array
const messages: ChatCompletionRequestMessage[] = [];

// Initialize readline interface
const userInterface = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

// Set prompt for user
userInterface.setPrompt(`\n${chalk.blue('Send a message:')}\n`);
userInterface.prompt();

userInterface.on('line', async (input) => {
  // Create request message and add it to messages array
  const requestMessage: ChatCompletionRequestMessage = {
    role: 'user',
    content: input,
  };
  messages.push(requestMessage);

  // Call OpenAI API to generate response
  const completion = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: messages,
  });

  // Display response message to user
  const responseMessage = completion.data.choices[0].message;
  if (responseMessage) {
    console.log(chalk.green(responseMessage.content));
    messages.push({
      role: responseMessage.role,
      content: responseMessage.content,
    });
  }

  // Prompt user for next message
  userInterface.prompt();
});

// Handle program exit
userInterface.on('close', () => {
  console.log(chalk.blue('Thank you for using this Demo'));
});
```
6. Interact with your chatbot 🤖
Now it’s time for the moment of truth! Run the following command in your terminal to start up your chatbot:
```bash
npm run start
```
This command transpiles your TypeScript code to JavaScript and runs the compiled code in `build/index.js`. You can now start interacting with ChatGPT using your TypeScript code. Ask it anything that you want.
Takeaway
ChatGPT is a powerful tool that developers can leverage to add chatbot functionality to their existing applications. With the new wave of technological innovation happening in AI, learning to use these tools can help you understand the trends better and provide your users with an enhanced experience. By following this tutorial, you can see that it’s quite straightforward to get started with a simple example of connecting to ChatGPT. A next step could be to think of ways to integrate a ChatGPT-powered chatbot into your current services or workflows. Happy coding! 😎
The complete source code can be found here.