Building a personal ChatGPT CLI is super simple! Who knew you could have a whole conversation in the terminal!
To get started, you’ll need an OpenAI account and tokens to burn. That's it! We’ll cover:
- Install and setup
- Initialize the project
- Build your ChatGPT clone
If you don't like reading, check out the YouTube video instead:
Install and Setup
First, let’s focus on setting everything up.
Note: I am going to assume you have Python and Homebrew installed, but if not, you can download them at https://python.org and https://brew.sh/.
Create a Python environment
Python environments can be confusing, so I’d strongly recommend you use Pyenv. It’s a tool for managing all your Python installations.
brew install pyenv
I’m doing this on a Mac, but if you’re on a different OS like Windows or Linux, follow the instructions in Pyenv’s repo for your system before continuing. On Linux, for example, you can use the installer script:
curl https://pyenv.run | bash
It’s common in software development to create virtual environments to isolate the project and manage its dependencies. We’ll use pyenv-virtualenv to create this virtual environment in a moment.
For macOS:
brew install pyenv-virtualenv
For Linux:
pip install virtualenv
For Windows:
virtualenv --python C:\Path\To\Python\python.exe venv
Install dependencies
Next, we’ll need several dependencies for the project. Run this command to install each of these packages (you may need to re-run it inside your virtual environment later so they’re available there):
pip install typer openai python-dotenv
Here’s a brief explanation of each package:
- Typer: a tool for writing command line applications in Python
- OpenAI: the official Python client for calling the OpenAI API
- Python-dotenv: a tool for reading and loading environment variables from .env files
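If you want a quick way to confirm the install worked, the short, throwaway script below should run without errors (it isn’t part of the project; it just checks that each package imports):
# Throwaway sanity check: each import should succeed if the install worked.
import typer
import openai
from dotenv import load_dotenv  # installed as python-dotenv, imported as dotenv

print("typer, openai, and python-dotenv are installed")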
Initialize the Project
Start by creating and navigating to an empty directory called “chatgptclone.”
mkdir chatgptclone
cd chatgptclone
Next, let’s create a virtual environment. Again, a virtual environment isolates the project’s Python version and dependencies.
pyenv virtualenv 3.11 chatgptclone
This command generates a virtual environment called chatgptclone using Python 3.11. If you get an error, run pyenv install 3.11 to install Python 3.11 in Pyenv first.
Now, let’s activate it!
pyenv activate chatgptclone
Build Your ChatGPT Clone
With everything installed and your Python environment activated, it's time to build. We’ll do this in three stages:
- Stage 1: Create a basic version
- Stage 2: Add memory to the program
- Stage 3: Improve the CLI experience
Stage 1: Create a Basic Version
To start, we’ll need two files: main.py and .env.
1. Create a file in your directory called “main.py.”
At the top of the file, import all the dependencies we installed earlier, along with Python’s built-in os package. Your main.py file should look like this:
import os
import typer
import openai
from dotenv import load_dotenv
Note: os stands for “operating system” and is a built-in Python module that will help load the API key securely.
2. Create a .env file to hold the API key.
Log in to your OpenAI account, go to API keys, and generate a new secret key.
Copy the key to your clipboard. In your .env file, add the line OPENAI_KEY="YOUR_API_KEY" (replacing the placeholder with your actual key).
Note: Ensure you have a credit balance (under Billing) to make API requests to OpenAI, since each request consumes credits.
3. Initialize the CLI application.
Next, set the API key for OpenAI and create an application object called app in the main.py file. Your code should look like this:
import os
import typer
import openai
from dotenv import load_dotenv
load_dotenv()
openai.api_key = os.getenv("OPENAI_KEY")
app = typer.Typer()
Awesome! We’re all set up, so let’s start making magic happen!
4. Create the chat function.
Create a function called interactive_chat, and welcome your users with Typer’s echo command. Typer recognizes commands by using decorators that go on top of a function. Then we just add the classic if __name__ == "__main__": block at the bottom.
Here’s what we have so far:
import os
import typer
import openai
from dotenv import load_dotenv
load_dotenv()
openai.api_key = os.getenv("OPENAI_KEY")
app = typer.Typer()

@app.command()
def interactive_chat():
    """Interactive CLI tool to chat with ChatGPT."""
    typer.echo(
        "Starting interactive chat with ChatGPT. Type 'exit' to end the session."
    )

if __name__ == "__main__":
    app()
5. Add the chat interaction loop.
Since we’re building a chat application, we’ll need to add an infinite loop within the interactive_chat function that prompts the user for input and calls the OpenAI ChatCompletion model.
def interactive_chat():
    """Interactive CLI tool to chat with ChatGPT."""
    typer.echo(
        "Starting interactive chat with ChatGPT. Type 'exit' to end the session."
    )
    while True:
        prompt = input("You: ")
        if prompt == "exit":
            typer.echo("ChatGPT: Goodbye!")
            break
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}]
        )
Note: if you get a billing error, you may have to wait 5-10 minutes for the billing information you just entered to be recognized by OpenAI.
On success, OpenAI will return a JSON object, which we’ll use to echo back the response message in the CLI application we’re building.
typer.echo(f'ChatGPT: {response["choices"][0]["message"]["content"]}')
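For reference, the object returned by openai.ChatCompletion.create is shaped roughly like the sketch below (abbreviated, with placeholder values rather than real output), which is why we index into choices[0], then message, then content:
# Abbreviated sketch of a ChatCompletion response (placeholder values).
response = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "...the reply text..."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21},
}
print(response["choices"][0]["message"]["content"])  # "...the reply text..."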
Here’s the current interactive_chat function code:
def interactive_chat():
    """Interactive CLI tool to chat with ChatGPT."""
    typer.echo(
        "Starting interactive chat with ChatGPT. Type 'exit' to end the session."
    )
    while True:
        prompt = input("You: ")
        if prompt == "exit":
            typer.echo("ChatGPT: Goodbye!")
            break
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}]
        )
        typer.echo(f'ChatGPT: {response["choices"][0]["message"]["content"]}')
Run python main.py in your terminal and the chatbot will answer your questions!
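A session will look something like this (the reply text will vary):
You: What is the capital of France?
ChatGPT: The capital of France is Paris.
You: exit
ChatGPT: Goodbye!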
Stage 2: Add memory to the program
The chatbot works, but it currently has no memory of previous messages.
Currently, when calling the create function, we’re passing it a list of messages, which gives ChatGPT context. To give it all of our context (and not only the current prompt), let’s make four changes:
- Define an empty list called messages.
- Append each new prompt to the messages list.
- Append the response from OpenAI to the messages list.
- Send the entire messages list to the create function.
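For example, after one full exchange the messages list would look roughly like this (the contents are illustrative):
messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
]
On the next turn, this entire list gets sent along with the new prompt, which is what gives the model its memory.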
Here’s the current interactive_chat function code:
def interactive_chat():
    """Interactive CLI tool to chat with ChatGPT."""
    typer.echo(
        "Starting interactive chat with ChatGPT. Type 'exit' to end the session."
    )
    messages = []
    while True:
        prompt = input("You: ")
        messages.append({"role": "user", "content": prompt})
        if prompt == "exit":
            typer.echo("ChatGPT: Goodbye!")
            break
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo", messages=messages
        )
        typer.echo(f'ChatGPT: {response["choices"][0]["message"]["content"]}')
        messages.append(response["choices"][0]["message"])
Run python main.py in your terminal and now your chatbot has a memory! Awesome!
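For example, a conversation might go something like this (illustrative output):
You: My favorite color is blue.
ChatGPT: Noted! Blue is a great choice.
You: What is my favorite color?
ChatGPT: You told me your favorite color is blue.
You: exit
ChatGPT: Goodbye!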
Stage 3: Improve the CLI experience
So, our ChatGPT clone works exactly as we expect now. But… let’s make it even better.
Right now, all of the OpenAI settings are hard-coded in the API call, so you’d have to edit the code to tweak the model’s behavior.
Add customization parameters.
We can add a lot of customization as parameters in the CLI, like:
- temperature (how creative ChatGPT is)
- max_tokens (the maximum length of each response)
- model (which OpenAI model it should use)
You can set user parameters with typer.Option. Let’s add all three of these parameters to the function definition, along with help messages.
Here’s the current interactive_chat function definition code:
def interactive_chat(
    temperature: float = typer.Option(0.7, help="Control Randomness. Defaults to 0.7"),
    max_tokens: int = typer.Option(
        150, help="Control length of response. Defaults to 150"
    ),
    model: str = typer.Option(
        "gpt-3.5-turbo", help="Control the model to use. Defaults to gpt-3.5-turbo"
    ),
):
Running python main.py --help will return options for each parameter. Very professional!
To use these parameters, change the API call like this:
response = openai.ChatCompletion.create(
    model=model,
    messages=messages,
    max_tokens=max_tokens,
    temperature=temperature,
)
You can now change any of these parameters when running the application.
python main.py --max-tokens 10
You should receive a response capped at 10 tokens, meaning the --max-tokens parameter worked!
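You can also combine flags in a single run, for example:
python main.py --temperature 0.2 --max-tokens 100 --model gpt-3.5-turbo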
Add your initial question on call.
Let’s make one final change: add a text parameter that lets us ask our first question immediately with the --text or -t flag.
Here’s the updated interactive_chat function definition code (note that Optional comes from Python’s typing module, so add from typing import Optional to the imports at the top of the file):
def interactive_chat(
    text: Optional[str] = typer.Option(None, "--text", "-t", help="Start with text"),
    temperature: float = typer.Option(0.7, help="Control Randomness. Defaults to 0.7"),
    max_tokens: int = typer.Option(
        150, help="Control length of response. Defaults to 150"
    ),
    model: str = typer.Option(
        "gpt-3.5-turbo", help="Control the model to use. Defaults to gpt-3.5-turbo"
    ),
):
Now, in the prompt logic, check whether text was provided: if it was, use it as the first prompt, then clear it so later turns ask the user for input. That’s it!
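Concretely, the top of the while loop changes to something like this:
while True:
    if text:
        prompt = text  # use the --text/-t value as the first message
        text = None    # clear it so later turns prompt the user instead
    else:
        prompt = typer.prompt("You")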
Final Code!
Just like that, we made a ChatGPT clone!
import os
import typer
import openai
from dotenv import load_dotenv
from typing import Optional
load_dotenv()
openai.api_key = os.getenv("OPENAI_KEY")
app = typer.Typer()
@app.command()
def interactive_chat(
    text: Optional[str] = typer.Option(None, "--text", "-t", help="Start with text"),
    temperature: float = typer.Option(0.7, help="Control Randomness. Defaults to 0.7"),
    max_tokens: int = typer.Option(
        150, help="Control length of response. Defaults to 150"
    ),
    model: str = typer.Option(
        "gpt-3.5-turbo", help="Control the model to use. Defaults to gpt-3.5-turbo"
    ),
):
    """Interactive CLI tool to chat with ChatGPT."""
    typer.echo(
        "Starting interactive chat with ChatGPT. Type 'exit' to end the session."
    )
    messages = []
    while True:
        if text:
            prompt = text
            text = None
        else:
            prompt = typer.prompt("You")
        messages.append({"role": "user", "content": prompt})
        if prompt == "exit":
            typer.echo("ChatGPT: Goodbye!")
            break
        response = openai.ChatCompletion.create(
            model=model,
            messages=messages,
            max_tokens=max_tokens,
            temperature=temperature,
        )
        typer.echo(f'ChatGPT: {response["choices"][0]["message"]["content"]}')
        messages.append(response["choices"][0]["message"])

if __name__ == "__main__":
    app()
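Try it out with an opening question, for example:
python main.py -t "Explain what a virtual environment is in one sentence"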
Check out my source code if you have any questions.