Rohan Sharma

Posted on • Edited on
LLMware.ai 🤖: An Ultimate Python Toolkit for Building LLM Apps

Welcome to the world of AI 🚀! According to John McCarthy, the father of AI:

"Artificial intelligence (AI) is the science and engineering of making intelligent machines, especially intelligent computer programs".

As a developer, you're always on the lookout for tools that make your life easier, faster, and more efficient. Today I've come across something interesting that should pique your interest, too! 😉

3️⃣... 2️⃣... 1️⃣... 🏹

Introducing LLMWare.ai 🤖

llmware is an integrated framework with 50+ small, specialized, open-source models for quickly developing LLM-based applications, including Retrieval-Augmented Generation (RAG) and multi-step orchestration of agent workflows.

Here's how LLMWare defines itself:

llmware provides a unified framework for building LLM-based applications (e.g., RAG, Agents), using small, specialized models that can be deployed privately, integrated with enterprise knowledge sources safely and securely, and cost-effectively tuned and adapted for any business process.

LLMWare specifically focuses on making it easy to integrate open-source small specialized models and connecting enterprise knowledge safely and securely.


LLMWare Main Components


LLMWare GitHub Repo: https://github.com/llmware-ai/llmware (star it ⭐)
LLMWare Official Website: https://llmware.ai/


How to Install LLMWare.ai ⚒️

Using pip install: the developer's choice

Pip makes installation easy; just run the command below in your terminal:

```bash
pip3 install llmware
```

Cloning the Repo

```bash
git clone git@github.com:llmware-ai/llmware.git
```

After cloning the repo, llmware provides a short 'welcome to llmware' automation script, which can be used to install the project's requirements.

- for Windows users:

```bash
.\welcome_to_llmware_windows.sh
```

- for Mac users:

```bash
sh ./welcome_to_llmware.sh
```

Alternatively, if you prefer to complete the setup without the welcome automation script, the next steps include:
1. install requirements.txt
2. install requirements_extras.txt
3. run examples
4. install vector db
5. PyTorch 2.3 note
6. NumPy 2.0 note

You can read about the installation steps by directly clicking on the link: [LLMWare Setup Guidelines](https://github.com/llmware-ai/llmware?tab=readme-ov-file#%EF%B8%8F-working-with-the-llmware-github-repository)
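For reference, the manual route can be sketched roughly as below. This is only a sketch; the exact file names and paths are assumptions based on the repo layout, so defer to the setup guidelines linked above for your platform:

```shell
# Sketch of a manual setup after cloning (paths are assumptions from the repo layout)
cd llmware

# 1. install the core requirements
pip3 install -r llmware/requirements.txt

# 2. optionally, install the extras (additional parsers, integrations, etc.)
pip3 install -r llmware/requirements_extras.txt

# 3. try one of the bundled examples to confirm the install works
python3 examples/Models/using_gguf.py
```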

 

## A Quick Project with LLMWare 🧑‍💻
Before explaining anything else about llmware, let's see it in action first. Work through this quick project to get a feel for how llmware actually works.

We are going to create a **GGUF Streaming Chatbot** using llmware and Streamlit:
- Locally deployed chatbot using leading open-source chat models, including Phi-3 GGUF
- Uses Streamlit
- Core framework of ~20 lines using llmware and Streamlit
- Runs without Wi-Fi

### So let's start 🟩:
1️⃣ Install `llmware` as explained above, or simply run this command in the terminal:

```bash
pip3 install llmware
```
2️⃣ Install `streamlit` by running the below command in the terminal:

```bash
pip3 install streamlit
```

3️⃣ Make a Python file, say `gguf_streaming_chatbot.py`, and paste in the code below:
```py
import streamlit as st
from llmware.models import ModelCatalog
from llmware.gguf_configs import GGUFConfigs

# cap the GGUF model's output length
GGUFConfigs().set_config("max_output_tokens", 500)


def simple_chat_ui_app(model_name):

    st.title(f"Simple Chat with {model_name}")

    model = ModelCatalog().load_model(model_name, temperature=0.3, sample=True, max_output=450)

    # initialize chat history
    if "messages" not in st.session_state:
        st.session_state.messages = []

    # display chat messages from history on app rerun
    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

    # accept user input
    prompt = st.chat_input("Say something")
    if prompt:

        with st.chat_message("user"):
            st.markdown(prompt)

        with st.chat_message("assistant"):

            # st.write_stream consumes a generator - so pass model.stream(prompt) directly
            bot_response = st.write_stream(model.stream(prompt))

        st.session_state.messages.append({"role": "user", "content": prompt})
        st.session_state.messages.append({"role": "assistant", "content": bot_response})

    return 0


if __name__ == "__main__":

    # note: the model will take a minute to download and cache locally the first time

    chat_model = "phi-3-gguf"

    simple_chat_ui_app(chat_model)
```

4️⃣ Move back to the terminal and run the command below to launch the application:

```bash
streamlit run gguf_streaming_chatbot.py
```

Output 📃


Example Search to Test the Project
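If you're curious what those ~20 lines are actually doing: strip away Streamlit and the app reduces to an append-only message history plus a consumed token stream. Here's that core pattern in plain Python. The `fake_model_stream` generator below is a hypothetical stand-in for llmware's `model.stream(prompt)`, which yields the response incrementally:

```python
# Minimal sketch of the chat-history pattern used above, without Streamlit.

def fake_model_stream(prompt):
    # hypothetical stand-in for model.stream(prompt): yields response tokens
    for token in ["Echo:", " ", prompt]:
        yield token

def chat_turn(messages, prompt):
    # consume the token stream, the same way st.write_stream does
    bot_response = "".join(fake_model_stream(prompt))
    # record both sides of the exchange, like st.session_state.messages
    messages.append({"role": "user", "content": prompt})
    messages.append({"role": "assistant", "content": bot_response})
    return bot_response

history = []  # plays the role of st.session_state.messages
chat_turn(history, "hello")
chat_turn(history, "how are you?")
print(len(history))  # 4 messages: two user/assistant pairs
```

On each Streamlit rerun, the whole history list is replayed into the chat UI, which is why the conversation appears to persist between turns.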


Now you might be wondering what just happened! I know, that's a lot to digest at once. You may have many questions, but covering everything in the blog would make it long and a bit confusing. But wait! I have the explanation covered. Kindly go through the video once:

Supported Vector Databases 🛅

Integrate easily with vector databases for production-grade embedding capabilities.
LLMWare supports: FAISS, Milvus, MongoDB Atlas, Pinecone, Postgres (PG Vector), Qdrant, Redis, Neo4j, LanceDB, and Chroma.

For example, if you want a setup where no separate install is required, you can use SQLite as the text collection database and ChromaDB (file-based) as the vector database:

```py
from llmware.configs import LLMWareConfig

LLMWareConfig().set_active_db("sqlite")
LLMWareConfig().set_vector_db("chromadb")
```

That's it! That's all the effort it takes, and you'll be good to go! 🥰


LLMWare Hugging Face Models 🫂


Hugging Face Models List


You can explore the models on the official Hugging Face site. Find the link here: LLMWare Hugging Face Models


LLMWare as Closed-Source 🐍

So far I have discussed the open-source version of llmware, but that's not the limit! llmware also offers a paid version, which can save your enterprise a lot of time. LLMWare provides small models - you can call them SLMs (Small Language Models) - which have a practical edge over large LLMs in that they need little or no GPU to run.


Features Provided in Closed-Source Version


Moving to the end... 🥹

Thanks for reading this patiently; I appreciate your time and support. Before the thank-you note, I want to briefly sum up this awesome product.

LLMWare is the ultimate toolkit for building LLM apps - no GPU required. That's the most interesting thing about llmware. The closed-source version also provides everything an enterprise may need in one place (in terms of AI features and handling, of course).

If you still have any questions, drop them in the comment section. Alternatively, you can join the LLMWare official Discord channel via this link: https://discord.com/invite/fCztJQeV7J

Here's the link to all the example use cases for the LLMWare open-source version: https://llmware-ai.github.io/llmware/examples

That's all for today. Thanks for your time. You're amazing! Have a good day. 💐

Star LLMWare on GitHub ⭐

Top comments (14)

Vortico

Hey, great post! We really enjoyed it. You might be interested in knowing how to productionalise ML models with a simple line of code. If so, please have a look at flama for Python. We introduced some time ago a post Introducing Flama for Robust ML APIs. If you have any doubts, or you'd like to learn more about it and how it works in more detail, don't hesitate to give us a shout. And if you like it, please gift us a star ⭐ here.

Rohan Sharma

I'm glad you liked it! Will read about Flama soon and provide feedback! Star done! 💫

By the way, the docs are great!

K Om Senapati

Ok, now create one for RAG and another explaining the models.

Rohan Sharma

Coming soon! 🥰

Priya Yadav

Great!! Keep growing, Rohan.

Rohan Sharma

Thank you, Priya! I hope you liked the blog 🥹

Priya Yadav

Yes

K. Keerthi

Great!

Rohan Sharma

Thank you, Keerthi! I hope you liked it 🥰

Niharika Goulikar

Nice article!

Rohan Sharma

I'm glad you liked it!

Rohan Sharma • Edited

Will be happy to hear your comments! And don't forget to check out llmware.ai/

Priya Yadav

Nice, Rohan.