I’m a huge fan of open source models, especially the newly released Llama 3. Because of the performance of both the large 70B Llama 3 model as well as the smaller and self-hostable 8B Llama 3, I’ve actually cancelled my ChatGPT subscription in favor of Open WebUI, a self-hostable ChatGPT-like UI that allows you to use Ollama and other AI providers while keeping your chat history, prompts, and other data locally on any computer you control.
My previous article went over how to get Open WebUI set up with Ollama and Llama 3; however, that isn’t the only way I take advantage of Open WebUI. I also use it with external API providers, of which I use three. I’ll go over each of them, give you the pros and cons, and then show you how I set up all three in my Open WebUI instance!
External AIs
OpenAI
OpenAI can be considered either the classic or the monopoly. Their AI tech is the most mature and trades blows with the likes of Anthropic and Google. Even though Llama 3 70B (and even the smaller 8B model) is good enough for 99% of people and tasks, sometimes you just need the best, so I like having the option to either get a quick answer from it or use it alongside other LLMs to quickly compare answers.
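Under the hood, every provider in this post is reached through the same OpenAI-style REST API, which is what makes them interchangeable in Open WebUI. Here’s a minimal sketch of what such a request looks like (the model name is just an example, and the key is assumed to be exported as OPENAI_API_KEY):

# A minimal chat completion against OpenAI's API; any OpenAI-compatible
# provider accepts this same request shape at its own base URL.
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'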
OpenAI is the example used most often throughout the Open WebUI docs, but Open WebUI can support any number of OpenAI-compatible APIs. Here’s another favorite of mine that I now use even more than OpenAI!
Groq Cloud
Groq is an AI hardware and infrastructure company that’s developing their own hardware LLM chip (which they call an LPU). They offer an API to use their new LPUs with a number of open source LLMs (including Llama 3 8B and 70B) on their GroqCloud platform.
Their claim to fame is their insanely fast inference speed: sequential token generation in the hundreds of tokens per second for 70B models, and in the thousands for smaller models. Here’s Llama 3 70B running in real time on Open WebUI.
Here’s the best part: GroqCloud is free for most users. Without even entering a credit card, they’ll grant you some pretty high rate limits, significantly higher than most AI API companies allow. Here are the limits for my newly created account.
14k requests per day is a lot, and 12k tokens per minute is far more than the average person will ever push through an interface like Open WebUI.
Using GroqCloud with Open WebUI is possible thanks to the OpenAI-compatible API that Groq provides. All you have to do is generate an API key in the GroqCloud dashboard, point Open WebUI at https://api.groq.com/openai/v1 as the base URL, and it’ll work just like OpenAI’s API!
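Before wiring the key into Open WebUI, it’s worth a quick sanity check directly against Groq’s endpoint. A simple sketch (assuming your key is exported as GROQ_API_KEY):

# List the models available to your Groq account; Open WebUI hits this
# same /models endpoint to populate its model picker.
curl https://api.groq.com/openai/v1/models \
  -H "Authorization: Bearer $GROQ_API_KEY"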
This is how I was able to use and evaluate Llama 3 as my replacement for ChatGPT!
Cloudflare Workers AI
This is the part where I toot my own horn a little. Using Cloudflare Workers AI with Open WebUI is not natively possible; however, I developed my own OpenAI-compatible API for Cloudflare Workers a few months ago. I recently added the /models endpoint to it to make it compatible with Open WebUI, and it’s been working great ever since. The main advantage of using Cloudflare Workers AI over something like GroqCloud is its massive variety of models. This allows you to test out many models quickly and effectively for many use cases, such as DeepSeek Math (model card) for math-heavy tasks and Llama Guard (model card) for moderation tasks. They even support Llama 3 8B!
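That /models endpoint is what Open WebUI queries to discover what each backend offers. You can check your own deployment the same way (the URL below is a placeholder for your deployed worker, and whether a bearer token is required depends on how you configured it):

# Ask the worker which Workers AI models it exposes; this is the call
# Open WebUI makes when listing available models.
curl https://openai-cf.yourusername.workers.dev/v1/models \
  -H "Authorization: Bearer $CF_API_KEY"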
The main cons of Workers AI are token limits and model size. Currently, Llama 3 8B is the largest model supported, and the token generation limits are much lower than what the models themselves can handle. I still think Workers AI is worth having on this list due to the sheer variety of models available, with no setup on your end beyond grabbing an API key. If you want to set up the OpenAI-compatible API for Workers AI yourself, check out the guide in the README.
Adding External AIs to Open WebUI
Now, how do you add all these to your Open WebUI instance? Assuming you’ve installed Open WebUI (Installation Guide), the best way is via environment variables.
When running Open WebUI using Docker, you can set the OPENAI_API_BASE_URLS and OPENAI_API_KEYS environment variables to configure the API endpoints.
For example, to integrate OpenAI, GroqCloud, and Cloudflare Workers AI, you would set the environment variables as follows:
docker run -d -p 3000:8080 \
-v open-webui:/app/backend/data \
-e OPENAI_API_BASE_URLS="https://api.openai.com/v1;https://api.groq.com/openai/v1;https://openai-cf.yourusername.workers.dev/v1" \
-e OPENAI_API_KEYS="sk-proj-ABCDEFGHIJK1234567890abcdef;gsk_1234567890abcdefabcdefghij;0123456789abcdef0123456789abcdef" \
--name open-webui \
--restart always \
ghcr.io/open-webui/open-webui:main
Replace sk-proj-ABCDEFGHIJK1234567890abcdef, gsk_1234567890abcdefabcdefghij, and 0123456789abcdef0123456789abcdef with your actual API keys. Make sure each key appears in the same position as its corresponding base URL; if the order doesn’t match, you’ll get errors saying the APIs could not authenticate.
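If the external models don’t show up, a quick way to check whether the container actually received the variables is to inspect its environment and logs:

# Print the OPENAI_* variables inside the running container (the values
# appear in plain text, so only do this on a machine you trust).
docker exec open-webui printenv | grep OPENAI

# Watch the logs for authentication errors from any of the providers.
docker logs --tail 50 open-webui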
When using Docker Compose, you can define the environment variables in your docker-compose.yaml file:
services:
open-webui:
environment:
- 'OPENAI_API_BASE_URLS=${OPENAI_API_BASE_URLS}'
- 'OPENAI_API_KEYS=${OPENAI_API_KEYS}'
Alternatively, you can define the values of these variables in an .env file, placed in the same directory as the docker-compose.yaml file:
OPENAI_API_BASE_URLS="https://api.openai.com/v1;https://api.groq.com/openai/v1;https://openai-cf.yourusername.workers.dev/v1"
OPENAI_API_KEYS="sk-proj-ABCDEFGHIJK1234567890abcdef;gsk_1234567890abcdefabcdefghij;0123456789abcdef0123456789abcdef"
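Docker Compose reads the .env file from the working directory automatically, so bringing the stack up is just:

# Compose substitutes the .env values into docker-compose.yaml at startup.
docker compose up -d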
By following these steps, you can easily integrate multiple OpenAI-compatible APIs with your Open WebUI instance, unlocking the full potential of these powerful AI models.
Conclusion
Open WebUI has opened up a whole new world of possibilities for me, allowing me to take control of my AI experience and explore the vast array of OpenAI-compatible APIs out there. Seamlessly integrating multiple providers, including OpenAI, GroqCloud, and Cloudflare Workers AI, has let me break free from proprietary chat platforms and take my AI experience to the next level. If you’re tired of being limited by traditional chat platforms, I highly recommend giving Open WebUI a try.