PaliGemma 2 is a state-of-the-art vision-language model from Google that combines advanced image and text processing. It pairs a Transformer decoder (from the Gemma 2 language models) with a Vision Transformer image encoder (from the SigLIP vision models), enabling it to excel at tasks such as image captioning, visual question answering, object detection, and segmentation. Fine-tuned variants, such as the DOCCI captioning checkpoints, support multilingual input and output for diverse applications. Designed for research purposes, the model is released in bfloat16 format and follows the PaLI-3 training recipe for strong performance on complex vision-language tasks.
PaliGemma 2 is trained on a diverse set of multilingual and multimodal datasets, including WebLI, CC3M-35L, OpenImages, and WIT, ensuring its capabilities in visual understanding, object localization, and multilingual tasks. The training process includes rigorous data responsibility filtering, such as removing unsafe or toxic content and sensitive personal information, to prioritize safety, privacy, and quality in its applications.
Prerequisites for Installing Google PaliGemma 2 Locally
Make sure you have the following:
- GPUs: 1xH100 SXM (for smooth execution).
- Disk Space: 100 GB free.
- RAM: 64+ GB
- CPU: 64+ Cores
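If you want to sanity-check the disk-space requirement on an existing machine before provisioning, a minimal standard-library snippet is enough; the 100 GB threshold below simply mirrors the list above.

```python
import shutil

def check_disk(path="/", required_gb=100):
    """Return (free_gb, meets_requirement) for the filesystem holding `path`."""
    free_gb = shutil.disk_usage(path).free / 1e9  # bytes -> GB (decimal)
    return free_gb, free_gb >= required_gb

free_gb, ok = check_disk("/")
print(f"Free disk: {free_gb:.1f} GB - {'OK' if ok else 'below the 100 GB requirement'}")
```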
Step-by-Step Process to Install Google PaliGemma 2 Model Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Access model from Hugging Face
Link: https://huggingface.co/collections/google/paligemma-2-release-67500e1e1dbfdd4dee27ba48
You need to agree to share your contact information to access this model. Fill in all the mandatory details, such as your name and email, and then wait for approval from Hugging Face and Google to gain access and use the model.
You will be granted access to this model within an hour, provided you have filled in all the details correctly.
Step 2: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 3: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side. Select the GPU Nodes option, create a GPU Node in the Dashboard, click the Create GPU Node button, and create your first Virtual Machine deployment.
Step 4: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x H100 SXM GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 5: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 6: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Google PaliGemma 2 on a Jupyter Virtual Machine. This open-source platform will allow you to install and run the Google PaliGemma 2 Model on your GPU node. By running this model on a Jupyter Notebook, we avoid using the terminal, simplifying the process and reducing the setup time. This allows you to configure the model in just a few steps and minutes.
Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 7: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 8: Connect to Jupyter Notebook
Once your GPU VM deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ Button in the top right corner.
After clicking the ‘Connect’ button, you can view the Jupyter Notebook.
Now open the Python 3 (ipykernel) Notebook.
Next, if you want to check the GPU details, run this command in a Jupyter Notebook cell:
!nvidia-smi
Step 9: Install Torch and Other Libraries
Run the following command to install PyTorch and related libraries:
pip install torch torchvision torchaudio einops timm pillow
Step 10: Install Transformers
Run the following command to install Transformers from source (PaliGemma 2 requires a recent version):
pip install git+https://github.com/huggingface/transformers
Step 11: Install Accelerate
Run the following command to install Accelerate from source:
pip install git+https://github.com/huggingface/accelerate
Step 12: Install Diffusers
Run the following command to install Diffusers from source:
pip install git+https://github.com/huggingface/diffusers
Step 13: Install Huggingface Hub
Run the following command to install the huggingface_hub library:
pip install huggingface_hub
Step 14: Install Other Libraries
Run the following command to install the remaining libraries:
pip install sentencepiece bitsandbytes protobuf decord
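Once the installs finish, you can confirm everything is importable before moving on. Below is a small sketch; note that some pip names differ from their import names (pillow imports as PIL, protobuf as google.protobuf).

```python
import importlib.util

def check_packages(names):
    """Map each import name to True if the import system can find it."""
    results = {}
    for name in names:
        try:
            results[name] = importlib.util.find_spec(name) is not None
        except ModuleNotFoundError:  # parent package of a dotted name is missing
            results[name] = False
    return results

status = check_packages([
    "torch", "torchvision", "torchaudio", "einops", "timm", "PIL",
    "transformers", "accelerate", "diffusers", "huggingface_hub",
    "sentencepiece", "bitsandbytes", "google.protobuf", "decord",
])
for name, ok in status.items():
    print(f"{name}: {'installed' if ok else 'missing'}")
```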
Step 15: Login Using Your API Token
Use the huggingface_hub library to log in directly in the notebook:
from huggingface_hub import login
# Replace the placeholder below with your Hugging Face access token
login(token="your_api_token_here")
This stores your token locally and allows authenticated access to gated Hugging Face models.
How to Generate a Hugging Face Token
- Create an Account: Go to the Hugging Face website and sign up for an account if you don’t already have one.
- Access Settings: After logging in, click on your profile photo in the top right corner and select “Settings.”
- Navigate to Access Tokens: In the settings menu, find and click on the “Access Tokens” tab.
- Generate a New Token: Click the “New token” button, provide a name for your token, and choose a role (either read or write).
- Generate and Copy Token: Click the “Generate a token” button. Your new token will appear; click “Show” to view it and copy it for use in your applications.
- Secure Your Token: Ensure you keep your token secure and do not expose it in public code repositories.
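As an alternative to pasting the token into a cell, you can read it from an environment variable so it never appears in the notebook. HF_TOKEN is just an assumed variable name here; export it in the shell that launches Jupyter.

```python
import os

def get_hf_token(var_name="HF_TOKEN"):
    """Fetch a Hugging Face token from the environment; None if unset."""
    return os.environ.get(var_name)

token = get_hf_token()
if token:
    from huggingface_hub import login
    login(token=token)
else:
    print("HF_TOKEN is not set; use the inline login() call instead.")
```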
Step 16: Test Authentication
After logging in, test access by trying to load the model:
import torch
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-ft-docci-448"
# Load the model in bfloat16 (the format the checkpoints are released in)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained(model_id)
print("Model and processor loaded successfully!")
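If the load succeeds, a quick look at the parameter count and dtype confirms the checkpoint came down intact. This helper is a sketch that works with any PyTorch module exposing `parameters()`:

```python
def param_summary(model):
    """Return (total_parameters, dtype_of_first_parameter) for a torch module."""
    total = sum(p.numel() for p in model.parameters())
    dtype = next(iter(model.parameters())).dtype
    return total, dtype

# Usage (with `model` loaded in the previous step):
# total, dtype = param_summary(model)
# print(f"{total / 1e9:.2f}B parameters, dtype={dtype}")
```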
Step 17: Example Usage and Enter Prompt
You need to provide both images and text to the processor.
import torch
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image

# Initialize the model and processor
model_id = "google/paligemma2-3b-ft-docci-448"
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
).to("cuda")
processor = AutoProcessor.from_pretrained(model_id)

# Load an image (replace with your image path)
image_path = "path/to/your/image.jpg"
image = Image.open(image_path).convert("RGB")

# The DOCCI fine-tunes are trained on caption-style prompts;
# "caption en" requests a detailed English caption
input_text = "caption en"

# Process the inputs; cast floating-point tensors to bfloat16 to match the model
inputs = processor(images=image, text=input_text, return_tensors="pt").to(torch.bfloat16).to("cuda")
input_len = inputs["input_ids"].shape[-1]

# Generate output (set max_new_tokens; the default limit is very short)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)

# Decode only the newly generated tokens, skipping the echoed prompt
generated_text = processor.decode(outputs[0][input_len:], skip_special_tokens=True)
print("Generated Text:", generated_text)
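The example above reads from a local path; in practice you may also want to pass an image URL. Here is a small hedged helper that handles both, assuming the requests package is available (install it with pip if not):

```python
from io import BytesIO

def is_url(source):
    """True if `source` looks like an http(s) URL rather than a local path."""
    return source.startswith(("http://", "https://"))

def load_image(source):
    """Load an RGB PIL image from a local file path or an http(s) URL."""
    from PIL import Image  # imported lazily so the helper is easy to reuse
    if is_url(source):
        import requests  # assumed available; `pip install requests` if not
        resp = requests.get(source, timeout=30)
        resp.raise_for_status()
        return Image.open(BytesIO(resp.content)).convert("RGB")
    return Image.open(source).convert("RGB")

# image = load_image("https://example.com/photo.jpg")  # or a local path
```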
Conclusion
Google PaliGemma 2 is a powerful open-weights model from Google that brings state-of-the-art vision-language capabilities to developers and researchers. Following this guide, you can quickly deploy Google PaliGemma 2 on a GPU-powered Virtual Machine with NodeShift and harness its full potential. NodeShift provides an accessible, secure, and affordable platform for running AI models efficiently, making it an excellent choice for experimenting with Google PaliGemma 2 and other cutting-edge AI tools.