Ayush Kumar for NodeShift

How to deploy Granite MOE 1B and 3B in the Cloud?

IBM has released Granite 3.0, which includes General Purpose/Language, Guardrails & Safety, Mixture-of-Experts (MoE), Accelerated Inference, and Granite Time Series (Tiny Time Mixer) models. All models are available under the Apache 2.0 license, making them accessible to everyone.


IBM’s Granite 3.0 models are specifically designed to meet the evolving needs of today’s enterprises, offering a blend of high performance, tailored customization, and a commitment to ethical AI practices. The company emphasizes addressing key concerns often associated with advanced models from providers like OpenAI, Anthropic, Google, and Meta. This includes clear disclosure of training data and development processes, legal protection through indemnification, and granting full commercial usage rights. These efforts make it easier and safer for enterprises to adopt and integrate these models into their operations with confidence.

The release features a variety of models optimized for diverse applications, including Granite-3.0-8B-Instruct and Granite-3.0-2B-Instruct, which are tailored for enterprise-level AI tasks such as retrieval-augmented generation (RAG), complex reasoning, and code synthesis. These Granite models offer the flexibility to be fine-tuned with proprietary enterprise datasets, enabling organizations to achieve highly specialized AI performance while significantly reducing costs.

The Granite 3.0 suite from IBM offers a range of specialized AI models:

✅ General Purpose NLP Models: Granite 3.0 8B Instruct, Granite 3.0 2B Instruct, Granite 3.0 8B Base, and Granite 3.0 2B Base, designed for natural language understanding, text generation, and conversational AI.
✅ Safety and Compliance Models: Granite Guardian 3.0 8B and Granite Guardian 3.0 2B, focusing on AI governance, bias mitigation, and responsible AI implementations.
✅ Mixture-of-Experts (MoE) Models: Granite 3.0 3B-A800M Instruct, Granite 3.0 1B-A400M Instruct, Granite 3.0 3B-A800M Base, and Granite 3.0 1B-A400M Base, leveraging dynamic model routing to handle complex tasks efficiently and scale performance based on workload demands.

The full Granite 3.0 family includes:

➡️ General Purpose/Language: Granite 3.0 8B Instruct, Granite 3.0 2B Instruct, Granite 3.0 8B Base, Granite 3.0 2B Base
➡️ Guardrails & Safety: Granite Guardian 3.0 8B, Granite Guardian 3.0 2B
➡️ Mixture-of-Experts: Granite 3.0 3B-A800M Instruct, Granite 3.0 1B-A400M Instruct, Granite 3.0 3B-A800M Base, Granite 3.0 1B-A400M Base
➡️ Accelerated inference: Granite-3.0-8B-Instruct-Accelerator
➡️ Granite Time Series Tiny Time Mixer: TTM-R1, TTM-R2

Prerequisites for Your System:

Make sure you have the following:

👉 GPU: 1x RTX A6000 (for smooth execution).
👉 Disk Space: 70 GB free.
👉 RAM: At least 40 GB.

Step-by-Step Process to Deploy Granite MOE 1B and 3B on a Virtual Machine in the Cloud

For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.

Step 1: Sign Up and Set Up a NodeShift Cloud Account

Visit the NodeShift Platform and create an account. Once you've signed up, log into your account.

Follow the account setup process and provide the necessary details and information.

Step 2: Create a GPU Node (Virtual Machine)

GPU Nodes are NodeShift's GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.

Navigate to the menu on the left side, select the GPU Nodes option, and click the Create GPU Node button on the Dashboard to create your first Virtual Machine deployment.

Step 3: Select a Model, Region, and Storage

In the "GPU Nodes" tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.

Step 4: Select Authentication Method

There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.


Step 5: Choose an Image

Next, you will need to choose an image for your Virtual Machine. We will deploy Granite MOE 1B and 3B on an NVIDIA CUDA Virtual Machine. This proprietary parallel computing platform allows you to install and run Granite MOE 1B and 3B on your GPU Node.
After choosing the image, click the 'Create' button, and your Virtual Machine will be deployed.

Step 6: Virtual Machine Successfully Deployed

You will get visual confirmation that your node is up and running.

Step 7: Connect to GPUs using SSH

NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.

Once your GPU Node deployment is successfully created and has reached the 'RUNNING' status, you can navigate to the page of your GPU Deployment Instance. Then, click the 'Connect' button in the top right corner.


Now open your terminal and paste the proxy SSH IP or direct SSH IP.
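The connection command follows the usual SSH pattern. A template of what you will paste (the key path, user, IP, and port below are placeholders; use the exact values shown on your deployment page):

```
ssh -i ~/.ssh/<your-private-key> <user>@<ssh-ip> -p <ssh-port>
```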
Next, if you want to check the GPU details, run the command below:
nvidia-smi


Step 8: Install Granite MOE 1B and 3B

After completing the steps above, it's time to download Granite MOE 1B and 3B from the Ollama website.

Website Link: https://ollama.com/library/granite3-moe

Then run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh


After the installation process is complete, run the following command to see a list of available commands:

ollama


Next, run the following command to start the Ollama server, so that the Granite MOE 1B and 3B models can be accessed and utilized efficiently.

ollama serve


Step 9: Select 1B Size of the Model

On the website, select the 1B size of the model; we will run the 1B model first:

Website Link: https://ollama.com/library/granite3-moe:1b


Step 10: Pull Granite MOE 1B Model

To pull the Granite MOE 1B Model, run the following command:
ollama pull granite3-moe:1b


Step 11: Run Granite MOE 1B Model

Now, you can run the model in the terminal using the following command and interact with your model:
ollama run granite3-moe:1b
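Beyond the interactive prompt, you can also script the model through Ollama's REST API, which `ollama serve` exposes on port 11434 by default. Below is a minimal sketch using only the Python standard library; the helper names are our own, and it assumes the server is running locally with `granite3-moe:1b` already pulled:

```python
import json
import urllib.request

OLLAMA_GENERATE_URL = "http://localhost:11434/api/generate"  # default Ollama port


def build_generate_request(model: str, prompt: str) -> dict:
    """Assemble the JSON body for Ollama's /api/generate endpoint."""
    # "stream": False asks the server for one complete JSON response
    # instead of a stream of partial chunks.
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send a single non-streaming generation request and return the text."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_GENERATE_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example usage (requires the server to be running):
# print(generate("granite3-moe:1b", "What is a mixture-of-experts model?"))
```

Omitting `"stream": False` makes the server return newline-delimited JSON chunks instead, which is useful for printing tokens as they arrive.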


Step 12: Select 3B Size of the Model

On the website, select the 3B size of the model; next, we will run the 3B model:


Step 13: Pull Granite MOE 3B Model

To pull the Granite MOE 3B Model, run the following command:

ollama pull granite3-moe:3b


Step 14: Run Granite MOE 3B Model

Now, you can run the model in the terminal using the following command and interact with your model:

ollama run granite3-moe:3b
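As with the 1B model, the 3B model can be driven programmatically through Ollama's chat endpoint, which takes a list of role-tagged messages rather than a single prompt. A minimal standard-library sketch (the helper names are our own; it assumes `ollama serve` is running locally and `granite3-moe:3b` has been pulled):

```python
import json
import urllib.request

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"  # default Ollama port


def build_chat_request(model: str, messages: list) -> dict:
    """Assemble the JSON body for Ollama's /api/chat endpoint."""
    return {"model": model, "messages": messages, "stream": False}


def chat(model: str, messages: list) -> str:
    """Send a non-streaming chat request and return the assistant's reply."""
    body = json.dumps(build_chat_request(model, messages)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_CHAT_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]


# Example usage (requires the server to be running):
# reply = chat("granite3-moe:3b",
#              [{"role": "user", "content": "Summarize Granite 3.0 in one line."}])
```

Because the endpoint accepts the full message history, you can append each reply as an `"assistant"` message and send the growing list back to hold a multi-turn conversation.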


Conclusion

Granite MOE 1B and 3B are groundbreaking open-source models from IBM that bring state-of-the-art AI capabilities to developers and researchers. By following this step-by-step guide, you can quickly deploy Granite MOE 1B and 3B on a GPU-powered Virtual Machine with NodeShift and harness their full potential. NodeShift provides an accessible, secure, and affordable platform to run your AI models efficiently, making it an excellent choice for those experimenting with Granite MOE 1B & 3B and other cutting-edge AI models.

For more information about NodeShift:

Website
Docs
LinkedIn
X
Discord
daily.dev
