Marko Vidrih

The Easiest Way to Run Llama 3 Locally

Running large language models (LLMs) on your own computer is now popular because it gives you security, privacy, and more control over what the model does. In this mini tutorial, we'll learn the simplest way to download and use the Llama 3 model.
Llama 3 is Meta AI's latest LLM. Its weights are openly available, and Meta reports that it outperforms comparable models such as Gemma, Gemini, and Claude 3 on several benchmarks.

What is Ollama?

Ollama is an open-source tool for running LLMs like Llama 3 on your own computer. Thanks to techniques like quantization, the model builds it ships don't need much VRAM, computing power, or storage. They are designed to work well on laptops.

There are many tools for running LLMs on your computer, but Ollama is among the easiest to set up and use. It lets you run LLMs directly from a terminal or PowerShell, it's fast, and it works out of the box.

The best thing about Ollama is that it works with all kinds of software, extensions, and applications. For example, you can use the CodeGPT extension in VSCode and connect Ollama to use Llama 3 as your AI code assistant.
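
Under the hood, Ollama serves a local HTTP API (port 11434 by default), and that API is how tools like CodeGPT talk to it. As a minimal sketch, assuming the requests library is installed and the llama3 model has already been pulled, you can call it from Python yourself:

import requests

# Ask the local Ollama server for a completion from Llama 3.
# 11434 is Ollama's default port; "stream": False returns one JSON object
# instead of a stream of partial tokens.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": False},
)
print(resp.json()["response"])

Any extension or application that can make HTTP requests can integrate with Ollama the same way.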

Installing Ollama

  1. Go to Ollama's GitHub repository (ollama/ollama).

  2. Scroll down and click the download link for your operating system, then run the installer.

  3. After installation, Ollama appears in your system tray.

Downloading and Using Llama 3

To download and start using the Llama 3 model, type this command in your terminal/shell:

ollama run llama3

The llama3 model is about 4.7 GB, so depending on your internet speed the download can take a while (roughly 30 minutes on a slower connection).


Once the download is finished, you can use Llama 3 locally just as you would online. You can also install other LLMs by typing different commands, as shown below.
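
For example, these commands pull or manage other models from the Ollama library (exact model names and tags may change over time):

ollama run llama3:70b   # the much larger 70B variant; needs far more memory
ollama run mistral      # a different open model family
ollama list             # list the models you have downloaded
ollama rm mistral       # delete a model to free disk space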


Prompt Example: "Describe a day in the life of a Data Scientist."


To show how fast it works, here's an example of Ollama generating Python code and explaining it.

Prompt Example: "Write a Python code for building the digital clock."

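For reference, the kind of program Llama 3 produces for this prompt looks roughly like the minimal tkinter sketch below (illustrative, not the model's exact output):

import time
import tkinter as tk

def update_clock():
    # Show the current time, then reschedule this function to run in 1 second.
    label.config(text=time.strftime("%H:%M:%S"))
    label.after(1000, update_clock)

root = tk.Tk()
root.title("Digital Clock")
label = tk.Label(root, font=("Helvetica", 48), fg="lime", bg="black")
label.pack(padx=20, pady=10)
update_clock()
root.mainloop()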

Note: If your laptop has an Nvidia GPU and CUDA installed, Ollama will use the GPU instead of the CPU, which can make generation roughly ten times faster.
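
To check that the GPU is actually being used, you can watch nvidia-smi in a second terminal while the model is generating a response:

nvidia-smi   # the ollama process should appear with nonzero GPU memory usage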

You can exit the chat by typing /bye and start again by typing ollama run llama3.
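
A few other slash commands are available inside the chat; /? prints the full, current list:

/?      # list the available in-chat commands
/show   # show information about the current model
/bye    # exit the chat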

Final Thoughts

Open-source tools and models have made AI and LLMs accessible to everyone. Instead of leaving AI in the hands of a few companies, tools like Ollama let anyone with a laptop use it.

Using LLMs locally gives you privacy, security, and control over responses. Plus, you don't have to pay for a service. You can even create your own AI coding assistant and use it in VSCode.
