This blog post was written for Twilio and originally published on the Twilio blog.
ChatGPT went viral recently! This conversational machine learning (ML) chatbot developed by OpenAI can answer questions, admit its mistakes, challenge incorrect premises, generate stories and poetry, and more. Read on to learn how to build a serverless SMS chatbot using ChatGPT and the OpenAI API, Twilio Programmable Messaging, the Twilio Serverless Toolkit, and Node.js.
You can test this chatbot out yourself by texting a question or prompt (like in the GIF above) to +17622490430.
Do you prefer learning via video? Check out this TikTok summarizing this tutorial in under three minutes!
### ChatGPT and Node.js
GPT-3 (short for “Generative Pre-training Transformer 3”) is a natural language processing (NLP) model trained on human-generated text. Given some text input, it can generate its own human-like text in a variety of languages and styles. Here, I ask it to "give me some bars about SendGrid."
You can test out ChatGPT in the browser here.
The GPT-3 model uses a transformer architecture. This multi-layer neural network is good for processing sequential data, like text. Language-related tasks it can perform include translation, summarization, and question answering, as well as text generation comparable to human writing.
To use GPT-3 in a Node.js application, you'll use the OpenAI API.
### Prerequisites
- A Twilio account - sign up for a free Twilio Account here
- A Twilio Phone Number with SMS capabilities - learn how to buy a Twilio Phone Number here
- OpenAI Account - make an OpenAI Account here
- Node.js installed - download Node.js here

### Get Started with OpenAI

After making an OpenAI account, you'll need an API Key. You can get an OpenAI API Key here by clicking on + Create new secret key. Save that API key for later to use the OpenAI client library in your Twilio Function.

### Get Started with the Twilio Serverless Toolkit

The Serverless Toolkit is CLI tooling that helps you develop Twilio Functions locally and deploy them to Twilio Functions & Assets. The best way to work with the Serverless Toolkit is through the Twilio CLI. If you don't have the Twilio CLI installed yet, run the following commands on the command line to install it and the Serverless Toolkit:
```bash
npm install twilio-cli -g
twilio login
twilio plugins:install @twilio-labs/plugin-serverless
```
Afterwards, create your new project and install our lone requirement, `openai`:
```bash
twilio serverless:init chatgpt-sms --template=blank
cd chatgpt-sms
npm install -s openai
```
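Before wiring anything up to Twilio, you can sanity-check the `openai` package with a quick standalone script. This is just a hedged sketch: the file name `test-openai.js` and the prompt are placeholders, and it assumes your OpenAI API Key is exported as `OPENAI_API_KEY` in your shell.

```javascript
// test-openai.js - a minimal sketch assuming the openai v3 Node library
// and an OPENAI_API_KEY environment variable exported in your shell.
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function main() {
  // Ask the text-davinci-003 completion model a throwaway question.
  const response = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: "Say hello in one short sentence.",
    max_tokens: 30,
  });
  console.log(response.data.choices[0].text.trim());
}

main().catch((err) => console.error(err.message));
```

Run it with `node test-openai.js`; if a greeting prints, your key and the client library are working.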
### Set an Environment Variable with Twilio Functions and Assets
Open up the `.env` file in your Functions project's root directory and add the following line:

```
OPENAI_API_KEY=YOUR-OPENAI-API-KEY
```
Replace `YOUR-OPENAI-API-KEY` with the OpenAI API Key you took note of earlier. Now you can access this API Key in your code with `context.OPENAI_API_KEY`.
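As a quick illustration (this snippet is a sketch, not part of the tutorial's code), any Function in this project can read that key off the `context` object passed to its handler:

```javascript
// A minimal sketch: Twilio Functions expose your environment variables
// on the context object that is passed to every handler.
exports.handler = function (context, event, callback) {
  // Logs true once OPENAI_API_KEY is set in the project's .env file.
  console.log(Boolean(context.OPENAI_API_KEY));
  return callback(null, "ok");
};
```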
### Make a Twilio Function with JavaScript
Make a new file in the `/functions` directory called `chatgpt.js` containing the following code:
```javascript
const { Configuration, OpenAIApi } = require("openai");

exports.handler = async function (context, event, callback) {
  const twiml = new Twilio.twiml.MessagingResponse();
  const inbMsg = event.Body.toLowerCase().trim();
  const configuration = new Configuration({
    apiKey: context.OPENAI_API_KEY
  });
  const openai = new OpenAIApi(configuration);
  const response = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: inbMsg,
    temperature: 0.7, // how many creative risks the model takes when generating text
    max_tokens: 3000, // maximum completion length; prompt plus completion must fit in the model's ~4,000-token limit
    frequency_penalty: 0.7 // the higher this value, the harder the model tries not to repeat itself
  });
  twiml.message(response.data.choices[0].text);
  callback(null, twiml);
};
```
This code imports `openai` and defines an async function that creates a Twilio Messaging Response object and a variable `inbMsg` holding the inbound text message users text in. It then initializes a `Configuration` object, passing its constructor an object containing the `apiKey` property. The function then calls `openai.createCompletion` to use one of OpenAI's language models to generate text based on `inbMsg`.
To specify the completion, you need to pass in a configuration object containing two properties: `model` and `prompt`. `model` identifies the OpenAI language model used to generate an answer for the text assigned to the `prompt` property. In this tutorial, you'll use the `text-davinci-003` language model, which is from the same GPT-3.5 series of models that powers ChatGPT. The OpenAI docs list the other language models they offer for use.
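If you're curious which models your API key can use, the same `openai` client exposes a list endpoint. Here's a hedged sketch (run it anywhere Node and your key are available):

```javascript
// A minimal sketch, assuming the openai v3 Node library: print the IDs
// of the models available to your account.
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

openai.listModels().then((response) => {
  response.data.data.forEach((model) => console.log(model.id));
});
```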
Optional properties `max_tokens` and `frequency_penalty` specify the maximum completion length and how strongly the model avoids repeating itself. You can see more optional properties for completion in the OpenAI documentation.
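For example, the `createCompletion` call inside the handler could be tuned with a few more of those optional properties. This is a hedged sketch meant to drop in place of the call above; the values are illustrative, not recommendations from the tutorial:

```javascript
// Illustrative values only; these are not tuned recommendations.
const response = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: inbMsg,
  max_tokens: 256,        // cap the length of the reply
  temperature: 0.5,       // lower values make output more focused
  presence_penalty: 0.5,  // nudge the model toward new topics
  frequency_penalty: 0.7, // discourage word-for-word repetition
  stop: ["\n\n"]          // stop generating at the first blank line
});
```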
You can view the complete source code on GitHub here.
### Configure the Function with a Twilio Phone Number
To deploy your app to Twilio, run `twilio serverless:deploy` from the chatgpt-sms root directory. You should see the URL of your Function at the bottom of your terminal:
Grab the Function URL corresponding to your app (the one that ends with /chatgpt) and configure a Twilio Phone Number with it as shown below: select the Twilio number you purchased in your Twilio Phone Numbers console and scroll down to the Messaging section. Paste the link in the text field for the A MESSAGE COMES IN webhook, making sure it's set to HTTP POST. When you click Save, it should look like this!
The Service is the Serverless project name, the Environment offers only one option, and the Function Path is the file name. Now take out your phone and text a question or prompt to your Twilio number.
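If you'd like to test the deployed Function before (or without) texting it, you can simulate Twilio's webhook yourself. The sketch below assumes Node 18+ for the global `fetch`, and the URL is a placeholder for whatever `twilio serverless:deploy` printed for your project:

```javascript
// A hedged sketch: simulate an incoming SMS by POSTing the same
// form-encoded Body parameter Twilio would send to your webhook.
const params = new URLSearchParams({ Body: "Tell me a fun fact about dolphins" });

fetch("https://chatgpt-sms-1234-dev.twil.io/chatgpt", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: params.toString(),
})
  .then((res) => res.text())
  .then((twiml) => console.log(twiml)); // the reply comes back as TwiML (XML)
```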
### What's Next for Twilio Serverless and ChatGPT?
The development possibilities offered by ChatGPT and Twilio are endless! You can build an SMS chatbot with Python, call an AI friend, chat with an AI chef over WhatsApp, and more. Let me know what you're working on with OpenAI; I can't wait to see what you build.
Top comments (4)
The AI that is in buzz. #chatGPT
This article seems to be referring to ChatGPT and the GPT-3 API interchangeably. They aren't the same thing. ChatGPT uses a newer model than GPT-3 and there are other differences.
They are not the same thing! ChatGPT uses the model text-davinci-003, which was released alongside it. This article also uses the text-davinci-003 model.
Yes, they are not the same thing. ChatGPT was trained on a different dataset based on conversational text, and has 20 billion parameters vs GPT-3's 175 billion. My point was the article doesn't make the distinction clear.