DEV Community

Ayush kumar for NodeShift

How to deploy QwQ 32B Preview in the Cloud?

QwQ-32B-Preview is an experimental research model developed by the Qwen Team, focused on advancing AI reasoning capabilities.

The QwQ-32B-Preview demonstrates strong math and coding abilities but faces challenges like language mixing, recursive reasoning loops, and gaps in common sense understanding.

The QwQ-32B-Preview is a 32.5B-parameter causal language model featuring RoPE, SwiGLU, RMSNorm, and QKV-biased attention. With 64 layers and a GQA setup of 40 query heads and 8 key-value heads, it efficiently handles up to 32,768 tokens, excelling in pretraining and post-training tasks.

Prerequisites for Deploying the QwQ 32B Preview Model

Make sure you have the following:

  • GPUs: 1x RTX A6000 (for smooth execution).
  • Disk Space: 100 GB free.
  • RAM: 64 GB (48 GB also works, but we use 64 GB for smooth execution).
  • CPU: 64 cores (48 cores also work, but we use 64 for smooth execution).

With this configuration, you can run all the available sizes of the QwQ 32B Preview model.
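
Before provisioning (or after logging in to any Linux VM), a quick shell check can confirm the machine meets these specs. This is a minimal sketch; the thresholds simply mirror the recommendations above:

```shell
# Report CPU cores, total RAM, and free disk space, and warn when a value
# falls below the recommended minimums from the prerequisites list.
cores=$(nproc)
ram_gb=$(free -g | awk '/^Mem:/ {print $2}')
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
echo "CPU cores: $cores, RAM: ${ram_gb} GB, free disk: ${disk_gb} GB"
[ "$cores" -ge 48 ]    || echo "Warning: fewer than 48 CPU cores"
[ "$ram_gb" -ge 48 ]   || echo "Warning: less than 48 GB RAM"
[ "$disk_gb" -ge 100 ] || echo "Warning: less than 100 GB free disk"
```
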

The QwQ-32B-Preview outperforms models like o1-mini, o1-preview, Claude 3.5 Sonnet, and GPT-4o in analytical depth, token context handling, and computational efficiency, setting a higher benchmark in math, coding, and large-context tasks.

Step-by-Step Process to deploy QwQ 32B Preview in the Cloud

For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.

Step 1: Sign Up and Set Up a NodeShift Cloud Account

Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.

Follow the account setup process and provide the necessary details and information.

Step 2: Create a GPU Node (Virtual Machine)

GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.

Navigate to the menu on the left side, select the GPU Nodes option in the Dashboard, and click the Create GPU Node button to create your first Virtual Machine deployment.

Step 3: Select a Model, Region, and Storage

In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.

We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.

Step 4: Select Authentication Method

There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.

Step 5: Choose an Image

Next, you will need to choose an image for your Virtual Machine. We will deploy QwQ 32B Preview on an NVIDIA CUDA Virtual Machine. CUDA, NVIDIA's proprietary parallel computing platform, will allow you to install the QwQ 32B Preview model on your GPU Node.

After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.

Step 6: Virtual Machine Successfully Deployed

You will get visual confirmation that your node is up and running.

Step 7: Connect to GPUs using SSH

NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.

Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.

Now, open your terminal and connect using the proxy SSH IP or the direct SSH IP.
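
If you connect to the node often, you can store the connection details in `~/.ssh/config` instead of retyping them each time. A sketch with placeholder values — substitute the host, port, user, and key path shown in your deployment's Connect dialog:

```
# Values below are placeholders — copy the real ones from the Connect dialog.
Host nodeshift-gpu
    HostName <proxy-or-direct-ssh-ip>
    User root
    Port <ssh-port>
    IdentityFile ~/.ssh/id_ed25519
```

After saving this, `ssh nodeshift-gpu` connects in one step.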

Next, if you want to check the GPU details, run the command below:

nvidia-smi

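
If you want just the key GPU facts rather than the full table, `nvidia-smi` also supports a query form. A small sketch, guarded so it degrades gracefully on machines without an NVIDIA GPU:

```shell
# Print GPU name, total VRAM, and driver version in compact CSV form.
if command -v nvidia-smi >/dev/null 2>&1; then
  gpu_info=$(nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv,noheader)
else
  gpu_info="nvidia-smi not found; run this on the GPU node"
fi
echo "$gpu_info"
```
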
Step 8: Install Ollama

After completing the steps above, it’s time to download Ollama from the Ollama website.

Website Link: https://ollama.com/

Run the following command to install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

After the installation process is complete, run the following command to see a list of available commands:

ollama
Next, run the following command to start the Ollama server so the models can be accessed and used:

ollama serve

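
Note that `ollama serve` occupies the foreground of your terminal. A common alternative is to run it in the background and confirm it responds — a sketch, assuming the default port 11434, and guarded so it degrades gracefully where Ollama is not installed:

```shell
# Run the Ollama server in the background and check that it responds.
if command -v ollama >/dev/null 2>&1; then
  nohup ollama serve > ollama.log 2>&1 &
  sleep 2
  status=$(curl -s http://localhost:11434/)   # "Ollama is running" when healthy
else
  status="ollama is not installed on this machine"
fi
echo "$status"
```
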
Step 9: Check the Sizes of QwQ Model

On the Ollama website, the QwQ model is available in five different sizes. We will pull and run three of these sizes on our GPU virtual machine.

Link: https://ollama.com/library/qwq/tags

Step 10: Pull qwq:32b Model

To pull the qwq:32b model, run the following command:

ollama pull qwq:32b

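
Once the pull completes, you can confirm the model is present locally and see its size on disk. A minimal check, guarded so it prints a notice where Ollama is absent:

```shell
# List locally available models; qwq:32b should appear after the pull.
if command -v ollama >/dev/null 2>&1; then
  models=$(ollama list)
else
  models="ollama is not installed on this machine"
fi
echo "$models"
```
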
Step 11: Run qwq:32b Model

Now, you can run the model in the terminal using the following command and interact with your model:

ollama run qwq:32b

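
Besides the interactive chat above, the model can also be queried non-interactively by passing the prompt directly on the command line. A guarded sketch (the prompt text is just an example):

```shell
# Send a single prompt and capture the reply without entering interactive chat.
if command -v ollama >/dev/null 2>&1; then
  reply=$(ollama run qwq:32b "Briefly explain what rotary position embeddings (RoPE) are.")
else
  reply="ollama is not installed; run this on the GPU node"
fi
echo "$reply"
```

The same model can also be reached over Ollama's local REST API once `ollama serve` is running, e.g. `curl -s http://localhost:11434/api/generate -d '{"model": "qwq:32b", "prompt": "Hello", "stream": false}'`.
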

Step 12: Pull qwq:32b-preview-fp16 Model

To pull the qwq:32b-preview-fp16 model, run the following command:

ollama pull qwq:32b-preview-fp16

Step 13: Run qwq:32b-preview-fp16 Model

Now, you can run the model in the terminal using the following command and interact with your model:

ollama run qwq:32b-preview-fp16

Step 14: Pull qwq:32b-preview-q4_K_M Model

To pull the qwq:32b-preview-q4_K_M model, run the following command:

ollama pull qwq:32b-preview-q4_K_M

Step 15: Run qwq:32b-preview-q4_K_M Model

Now, you can run the model in the terminal using the following command and interact with your model:

ollama run qwq:32b-preview-q4_K_M

Conclusion

The QwQ 32B Preview model is a groundbreaking release from Alibaba's Qwen team that offers advanced reasoning capabilities to developers and researchers. By following this step-by-step guide, you can easily deploy QwQ 32B Preview on a cloud-based virtual machine using a GPU-powered setup from NodeShift to maximize its potential. NodeShift provides a user-friendly, secure, and cost-effective platform to run your models efficiently, making it an ideal choice for those exploring QwQ 32B Preview and other cutting-edge models.

For more information about NodeShift:

Website
Docs
LinkedIn
X
Discord
daily.dev
