Hello 🙋 first of all, Happy New Year! 🎉
TLDR
If you're in a hurry, here's a mind map to consume the content quickly. 🕒🥗
Click here to see the mind map in xmind
AI Code Assistants
AI Code Assistants are rapidly gaining popularity in the tech industry. They are becoming an essential tool for programmers, providing assistance in writing code, debugging, and even generating code snippets. Mastering the use of an AI Code Assistant is becoming a necessary skill for modern developers.
Several AI Code Assistants are available on the market, such as GitHub Copilot, AWS CodeWhisperer, and Tabnine. Many more tools exist, each with its own features and capabilities.
However, most of these tools come with their own set of limitations. Many of them are not free, although they often offer trial versions for users to test out their capabilities. Additionally, these tools typically work by sending your code to an external server, which might raise privacy concerns for some users. Lastly, these tools are generally limited to answering programming-related questions and may not be able to assist with other types of inquiries.
What is ollama?
Ollama is a user-friendly tool designed to run large language models (LLMs) locally on a computer. This means it offers a level of security that many other tools can't match, as it operates solely on your local machine, eliminating the need to send your code to an external server. Plus, being free and open-source, it doesn't require any fees or credit card information, making it accessible to everyone. 🥳
You can find more about ollama on their official website: https://ollama.ai/. It's designed to work in a completely independent way, with a command-line interface (CLI) that allows it to be used for a wide range of tasks. It's not just for coding - ollama can assist with a variety of general tasks as well.
One of the standout features of ollama is its library of models trained on different data, which can be found at https://ollama.ai/library. These models are designed to cater to a variety of needs, with some specialized in coding tasks. One such model is codellama, which is specifically trained to assist with programming tasks.
You can even train your own model 🤓
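In practice, ollama's way of doing this is a Modelfile, which layers a system prompt and parameters on top of an existing model rather than training one from scratch. A minimal sketch (the model name and prompt below are made up for illustration):

```
FROM codellama
PARAMETER temperature 0.4
SYSTEM "You are a senior developer. Answer with short, working code examples."
```

Save it as `Modelfile`, then build and run your custom model with `ollama create code-helper -f Modelfile` followed by `ollama run code-helper`.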
Run ollama locally
You need at least 8GB of RAM to run ollama locally.
Running ollama locally is a straightforward process. The first step is to install it by following the instructions on the official website: https://ollama.ai/download.
If you are a Windows user
If you are a Windows user, you might need to use the Windows Subsystem for Linux (WSL) to run ollama locally, as it's not natively supported on Windows. You can find instructions on how to install WSL on the Microsoft website: https://learn.microsoft.com/en-us/windows/wsl/install.
Once ollama is installed, the next step is to download the model that best fits your needs. For programming-related tasks, it's recommended to use the codellama model.
ollama pull codellama
After the model is downloaded, you can start using ollama.
ollama run codellama
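The CLI is not the only interface: while ollama is running, it also serves a local HTTP API on port 11434. Below is a minimal sketch of the request body you would POST to `/api/generate` (built and printed here, not actually sent):

```python
import json

# Body for ollama's local generate endpoint:
# POST http://localhost:11434/api/generate (available while ollama is running).
payload = {
    "model": "codellama",
    "prompt": "Write a shell function that counts the files in a directory.",
    "stream": False,  # ask for one JSON response instead of a token stream
}

print(json.dumps(payload, indent=2))
```

With ollama running, sending this body to that endpoint returns the model's completion as JSON — your code never leaves your machine.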
How to integrate ollama with my editor?
Integrating ollama with your code editor can enhance your coding experience by providing AI assistance directly in your workspace. This can be achieved using the Continue extension, which is available for both Visual Studio Code and JetBrains editors. You can find the extension at https://continue.dev/.
Once the extension is installed, you'll need to configure it to work with ollama. This involves adding ollama to the extension's configuration file. In your home directory, look for the .continue folder (e.g., /Users/pciosek/.continue) and edit the config.json file. Add the ollama model to the "models" section as follows:
{
  "models": [
    {
      "title": "CodeLlama",
      "model": "codellama",
      "provider": "ollama"
    }
  ]
}
More information about this configuration can be found at https://continue.dev/docs/reference/Model%20Providers/ollama.
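If you manage your dotfiles with scripts, the same edit can be applied programmatically. A sketch that merges the entry into the config (pointed at a throwaway file here instead of the real `~/.continue/config.json`):

```python
import json
import tempfile
from pathlib import Path

def add_ollama_model(config_path: Path) -> dict:
    """Append the CodeLlama/ollama entry to Continue's config.json."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    models = config.setdefault("models", [])
    entry = {"title": "CodeLlama", "model": "codellama", "provider": "ollama"}
    if entry not in models:  # avoid duplicates on repeated runs
        models.append(entry)
    config_path.write_text(json.dumps(config, indent=2))
    return config

# Demo against a temporary file rather than your actual config
demo = Path(tempfile.mkdtemp()) / "config.json"
result = add_ollama_model(demo)
print(result["models"][0]["provider"])  # → ollama
```

Point `demo` at your real config path only once you're happy with the result.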
After updating the configuration, restart your editor for the changes to take effect. You should now see ollama listed as a model in the extension's sidebar. 🥳
Now you're ready to use ollama in your editor!
Two ways to use ollama in your editor
- Open the extension's sidebar and start the conversation.
- Inside the code editor, select the code and press (cmd/ctrl) + M to start the conversation. The selected code will be used as context for the conversation.
More about the extension can be found at https://continue.dev/docs/intro
Below is an example of generating tests for a component.
The extension does not support code completion. If you know an extension that supports code completion, please let me know in the comments. 🙏
Conclusion
AI Code Assistants are the future of programming. It's important that the technology is accessible to everyone, and ollama is a great example of this. It's free, open-source, and runs locally on your machine, making it a great choice for developers looking for an AI Code Assistant that is secure, free, and easy to use. 🥳
Share your thoughts
What do you think about ollama? Do you use any other AI Code Assistants? Maybe did you use other models? Let me know in the comments below! 🙏