
Ijeoma M. Jahsway

Originally published at cn.coursearena.com.ng

How to Install and Run Uncensored ChatGPT on Your PC (Offline): A Step-by-Step Guide

Hey, I know you've heard of ChatGPT, y'know, the super intelligent AI from OpenAI... Yeah, that one. What if I told you that you could have it running on your PC completely offline? Pretty crazy, right? But it's true. A lot of people and companies have put a lot of time, resources, and energy into making this possible, and I am going to show you how you can download and run these LLMs (Large Language Models) on your PC completely offline.

If you're interested in experimenting with Large Language Models (LLMs) locally on your PC, or you just want to have the power of artificial intelligence at your disposal, then you're in the right place. In this guide, I will show you how to install Ollama and AnythingLLM, configure your environment, and get your model up and running. This guide is tailored for Windows users but can be adapted for other operating systems.

Requirements

  • You'll need a PC... Obviously.
  • Fast and stable internet connection
  • Ollama running on your PC
  • An LLM
  • And AnythingLLM

Let's move on.

Steps

1. Installing Ollama and Adding It to Your Environment Path (For Windows)

This is so you can run Ollama commands directly from your command line.

Step 1: Download Ollama

  • Visit the Ollama website and navigate to the downloads section.
  • Choose the Windows installer and download it to your PC.

Step 2: Install Ollama

  • Run the downloaded installer.
  • Follow the installation wizard to complete the setup.

Once installed, open the installation location in your file manager: search for Ollama in the Start Menu, right-click it, and choose "Open file location." You should see a folder or file named Ollama; if you do, you're in the right place. Copy the path of the directory you're currently in and move on to the next step.

Step 3: Add Ollama to Your Environment Path

  1. Open the Start Menu and search for "Environment Variables."
  2. Select "Edit the system environment variables."
  3. In the System Properties window, click on "Environment Variables."
  4. In the Environment Variables window, find and select the "Path" variable under System variables, then click "Edit."
  5. Click "New" and add the path to the Ollama installation directory which you copied previously (for example, C:\Users\<your-username>\AppData\Local\Programs\Ollama). This is the directory that contains the Ollama file or folder.
  6. Click "OK" to close all windows.
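If you prefer doing this from the command line instead, a rough equivalent is the setx command below. This is just a sketch: the path is an example and should be whatever you copied earlier, and be aware that setx silently truncates values longer than 1024 characters, so the GUI method above is the safer option.

  rem Append the Ollama folder to your user PATH (replace the path with the one you copied)
  rem Caution: setx truncates anything beyond 1024 characters
  setx PATH "%PATH%;C:\Users\<your-username>\AppData\Local\Programs\Ollama"
  rem Open a new Command Prompt window afterwards for the change to take effect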

Step 4: Verify Installation

Open Command Prompt and type ollama --version to ensure Ollama is installed correctly and is accessible from any directory.
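Here's a quick sanity check, assuming everything above went smoothly. One gotcha: PATH changes only apply to Command Prompt windows opened after the change, so open a fresh one first.

  rem Run this in a NEW Command Prompt window
  ollama --version
  rem You should see something like "ollama version is 0.x.x"
  rem If you get "'ollama' is not recognized...", re-check the path you added in Step 3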

Alright. You have Ollama installed, and your system now recognizes ollama as a valid command. It's now time to download an LLM. Yeah, that's right, we're about to download our own preferred GPT. You heard that right. "Preferred".

We all know how most AIs available through OpenAI, Gemini, Bing, Copilot and others have guidelines that prevent them from, y'know, going all out. These guidelines are programmed into them by their creators so they can't be used for illegal and criminal acts. And though this is good, it greatly limits the extent to which you can use the power of AI. Plus, not everyone who wants an unrestricted model would use AI for illegal stuff, and that's why I am going to show you how to get an uncensored LLM. These uncensored LLMs have been trained with all those guidelines removed, and they won't hold back when responding to your prompts. The one we're going to install is called Dolphin-llama3. There are many others, including Llama 3, which is trained by Meta on an insane amount of data and is, of course, censored. So whichever your preference is, just go with it. But where's the fun in having guidelines tell you what and what not to do? That's why we're going with Dolphin.

[Image: Dolphin LLM]

2. Downloading Dolphin-llama3

  1. Go to the Ollama Website.
  2. Look to the upper right and click on Models.
  3. Search for Dolphin-llama3

  4. Click it and copy the ollama run command for dolphin-llama3. (Note that depending on your system's capabilities, you could run the 8B model, the 70B model, or others. Unless you have a really powerful computer, let's just stick with the 8B model.)

[Image: dolphin-llama3]

  5. Open your command line and type ollama run dolphin-llama3. You'll see the download start, so just wait for it to finish.
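If you'd rather download the model first and chat later, here's a rough sketch using ollama pull. The :8b tag is the size I'm assuming from the model page; double-check the exact tag listed there for your system.

  rem Download the 8B model without starting a chat
  ollama pull dolphin-llama3:8b
  rem Confirm it's on your machine
  ollama list
  rem Start an interactive chat (type /bye to exit)
  ollama run dolphin-llama3:8b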

Is your LLM done downloading? Congratulations! You now have your own personal Uncensored AI to do your bidding. Try it. Type in a prompt and get the response of your dreams.

[Image: command prompt]

Disclaimer: I do not recommend any of the actions listed in this image...

But you get the point, right? No limitations. In comparison to other, censored models...

[Image: GPT response]

Now that you have your LLM up and running, you're good to go, right? Not quite. You see, not everyone will be comfortable with running their LLM from the command line. Well, that's why we have AnythingLLM. It gives us a way to run various kinds of LLMs, including OpenAI's ChatGPT, but we want to run our own local model, so let's download and install AnythingLLM.

3. Downloading and Installing AnythingLLM

Step 1: Download AnythingLLM

  • Visit the official AnythingLLM website.
  • Download the latest release suitable for your operating system.

Step 2: Install AnythingLLM

  • Follow the installation prompts to complete the setup.

Step 3: Configuration

  • After installation, configure AnythingLLM according to your preferences.
  • Open AnythingLLM and click on Get Started.
  • We're running a local LLM through Ollama, so scroll through the list and click on Ollama.
  • The setup should automatically detect the Ollama model running on your PC, so just click the right arrow to continue. (If it doesn't, see the quick checks after this list.)
  • Follow the prompts and name your workspace whatever you want. I'll name mine Dolphin-llama3.
  • Proceed until you see a chat interface.
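If the setup doesn't detect your model automatically, here are a couple of quick checks from Command Prompt, assuming Ollama is listening on its default port (11434):

  rem The Ollama server should answer on its default port
  curl http://localhost:11434
  rem Expected reply: Ollama is running
  rem And your model should show up here
  ollama list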

[Image: AnythingLLM chat interface]

Nice, right? You now have a chat interface where you can communicate with your LLM, just like OpenAI's ChatGPT or Gemini. Your personal assistant. And the best part... it's completely offline.
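By the way, if you're curious what AnythingLLM is actually doing behind that chat window: Ollama serves a local HTTP API on port 11434 by default, and front ends like AnythingLLM just send requests to it. Here's a minimal sketch you can try yourself from Command Prompt (curl ships with recent versions of Windows; the backslash-escaped quotes are just cmd-style JSON quoting):

  rem Ask your local model a question through Ollama's HTTP API
  curl http://localhost:11434/api/generate -d "{\"model\": \"dolphin-llama3\", \"prompt\": \"Why is the sky blue?\", \"stream\": false}"

The answer comes back as a single JSON object with the model's reply in the response field, and nothing ever leaves your machine.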

Conclusion

By following these steps, you should have Ollama and AnythingLLM installed and your local LLM running on your PC. This setup allows you to explore and interact with LLMs directly from your PC, offering a great way to experiment with AI models without relying on cloud services. I hope you enjoyed this guide and that you're able to explore the capabilities of an uncensored LLM.

Feel free to leave a comment if you encounter any issues during the setup process or have any questions!
