DEV Community

Roberto B.

Machine Learning with PHP

If you're interested in Machine Learning and PHP, Transformers PHP emerges as a game-changer, offering robust text processing capabilities within PHP environments.

Transformers PHP simplifies both text and image processing tasks by harnessing pre-trained transformer models. It enables seamless integration of NLP functionalities for text and supports image-related tasks such as classification and object detection within PHP applications.

Transformers PHP is an open-source project. You can find more information and the source code on the GitHub repository.

Transformers PHP boasts a range of powerful features designed to enhance text and image processing capabilities within PHP environments:

  • Transformer Architecture: Inspired by Vaswani et al.'s "Attention is All You Need," Transformers PHP leverages self-attention mechanisms for efficient text processing.
  • Natural Language Processing Applications: From translation to sentiment analysis, Transformers PHP caters to diverse NLP tasks with ease.
  • Image Applications: Transformers PHP supports image-related tasks such as classification and object detection within PHP applications.
  • Model Accessibility: Access a plethora of pre-trained models on platforms like Hugging Face, simplifying development without the need for extensive training.
  • Architecture Variety: Choose from architectures like BERT, GPT, or T5, each tailored for specific tasks, ensuring optimal performance.

Transformers PHP bridges the gap between PHP and advanced NLP, offering developers unparalleled opportunities to implement AI-driven solutions.

Transformers PHP and the ONNX Runtime

The backbone of Transformers PHP lies in its integration with the ONNX Runtime, a high-performance AI engine designed to execute deep learning models efficiently. Utilizing the Foreign Function Interface (FFI) mechanism, Transformers PHP seamlessly connects with the ONNX Runtime, enabling lightning-fast execution of transformer models within PHP environments.

So, what exactly is the ONNX Runtime? At its core, ONNX (Open Neural Network Exchange) is an open format for representing deep learning models, fostering interoperability between various frameworks. The ONNX Runtime, developed by Microsoft, is a cross-platform, high-performance engine built specifically for ONNX models. It provides robust support for executing neural network models efficiently across different hardware platforms, including CPUs, GPUs, and specialized accelerators.

The integration of ONNX Runtime into Transformers PHP via the FFI mechanism brings several key benefits:

  • Performance: ONNX Runtime is optimized for speed and efficiency, ensuring rapid inference of transformer models within PHP applications. This translates to faster response times and improved overall performance, crucial for real-time or high-throughput applications.
  • Hardware Acceleration: Leveraging the capabilities of ONNX Runtime, Transformers PHP can harness hardware acceleration features available on modern CPUs and GPUs. This allows for parallelized computation and optimized resource utilization, further enhancing performance.
  • Interoperability: By adhering to the ONNX format, ONNX Runtime ensures compatibility with a wide range of deep learning frameworks, including PyTorch and TensorFlow. This interoperability facilitates seamless integration of transformer models trained in different frameworks into Transformers PHP applications.
  • Scalability: ONNX Runtime is designed to scale efficiently across diverse hardware configurations, from single CPUs to large-scale distributed systems. This scalability ensures that Transformers PHP can handle varying workloads and adapt to evolving performance requirements.

In summary, the integration of the ONNX Runtime with Transformers PHP via the FFI mechanism unlocks a world of possibilities for AI-driven applications within the PHP ecosystem. Developers can leverage the power and versatility of transformer models with confidence, knowing that they are backed by a high-performance AI engine capable of delivering exceptional results.
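Because Transformers PHP reaches the ONNX Runtime through FFI, the PHP ffi extension must be loaded and enabled. A quick sanity check in plain PHP (no library code assumed; this only inspects the PHP configuration):

```php
<?php

// Check that the FFI extension is available.
// Transformers PHP relies on FFI to call into the ONNX Runtime binaries.
$ffiLoaded = extension_loaded('ffi');
$ffiEnable = ini_get('ffi.enable'); // typically "preload" (default), "1", or "0"

if (!$ffiLoaded) {
    echo 'The ffi extension is not loaded; enable it in php.ini.' . PHP_EOL;
} elseif ($ffiEnable === '0' || $ffiEnable === 'false') {
    echo 'FFI is loaded but disabled via the ffi.enable setting.' . PHP_EOL;
} else {
    echo 'FFI looks available.' . PHP_EOL;
}
```

If this reports a problem, fixing the php.ini settings before installing the package saves a confusing failure later.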

Start using Transformers PHP

Before you start using Transformers PHP, I would like to ask you to "Star" the GitHub repository and follow Kyrian, the author of this great library.

Start by creating a new directory and moving into it:

mkdir example-app
cd example-app

You can install the package:

composer require codewithkyrian/transformers

During the execution of the command, you will be asked whether to enable and run the codewithkyrian/onnxruntime-downloader-plugin Composer plugin, which downloads the ONNX Runtime binaries for PHP. My suggestion is to answer y (yes, please):

Do you trust "codewithkyrian/onnxruntime-downloader-plugin" to execute code and wish to enable it now? (writes "allow-plugins" to composer.json) [y,n,d,?]

In this way, Composer downloads and installs all the dependencies into the vendor/ folder and automatically fetches the ONNX Runtime, so running composer require codewithkyrian/transformers is all you need.

The ONNX Runtime Downloader plugin itself is very simple: it automatically triggers the download of the ONNX Runtime via the ONNX Runtime PHP package.
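For reference, answering y simply writes an allow-plugins entry into composer.json. A sketch of what the relevant sections end up looking like (the version constraint here is an illustrative assumption, not a requirement):

```json
{
    "require": {
        "codewithkyrian/transformers": "*"
    },
    "config": {
        "allow-plugins": {
            "codewithkyrian/onnxruntime-downloader-plugin": true
        }
    }
}
```

If you prefer a non-interactive install (for example in CI), you can add that allow-plugins entry to composer.json before running composer require.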

First example: sentiment analysis

Once you have installed the package, you can begin using it. You can create a new PHP file in which you include the autoload file, instantiate the Transformers class, and then initialize the pipeline with the desired functionality.

<?php

// 001 requiring the autoload file from vendor
require './vendor/autoload.php';

// 002 importing the Transformers class
use Codewithkyrian\Transformers\Transformers;
// 003 importing the pipeline function
use function Codewithkyrian\Transformers\Pipelines\pipeline;

// 004 initializing the Transformers class setting the cache directory for models
Transformers::setup()->setCacheDir('./models')->apply();
// 005 initializing a pipeline for sentiment-analysis
$pipe = pipeline('sentiment-analysis');
// 006 setting the list of sentences to analyze
$feedbacks = [
    'The quality of tools in the PHP ecosystem has greatly improved in recent years',
    "Some developers don't like PHP as a programming language",
    'I appreciate Laravel as a framework',
    'Laravel is a framework that improves my productivity',
    'Using an outdated version of Laravel is not a good practice',
    'I love Laravel',
];
echo PHP_EOL.'⭐⭐⭐ SENTIMENT ANALYSIS ⭐⭐⭐'.PHP_EOL.PHP_EOL;
// 007 looping through the sentences
foreach ($feedbacks as $input) {
    // 008 calling the pipeline function
    $out = $pipe($input);
    // 009 using the output of the pipeline function
    $icon =
        $out['label'] === 'POSITIVE'
            ? ($out['score'] > 0.9997
                ? '🤩🤩🤩'
                : '😀😀  ')
            : '🙁    ';
    echo $icon.' '.$input.PHP_EOL;
}
echo PHP_EOL;


Executing the PHP script with Transformers PHP for sentiment analysis

In the code sample:

  • 001 requiring the autoload file from vendor;
  • 002 importing the Transformers class;
  • 003 importing the pipeline function;
  • 004 initializing the Transformers class, setting the cache directory for models;
  • 005 initializing a pipeline for sentiment-analysis;
  • 006 setting the list of sentences to analyze;
  • 007 looping through the sentences;
  • 008 calling the pipeline function;
  • 009 using the output of the pipeline function.
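The pipeline call returns an associative array with a label and a score, as used in the loop above. The icon-selection logic can be isolated into a small plain-PHP helper; the 0.9997 threshold is the same arbitrary cutoff used in the example:

```php
<?php

// Map a sentiment result (label + score) to an emoji icon.
// Mirrors the logic in the example above; the thresholds are arbitrary.
function sentimentIcon(string $label, float $score): string
{
    if ($label !== 'POSITIVE') {
        return '🙁';
    }

    return $score > 0.9997 ? '🤩🤩🤩' : '😀😀';
}

echo sentimentIcon('POSITIVE', 0.9999) . PHP_EOL; // very confident positive
echo sentimentIcon('POSITIVE', 0.95) . PHP_EOL;   // positive
echo sentimentIcon('NEGATIVE', 0.99) . PHP_EOL;   // negative
```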

In the example, we perform sentiment analysis thanks to this line:

$pipe = pipeline('sentiment-analysis');

The pipeline() function has a mandatory parameter which is the task that defines which functionality will be used:

  • feature-extraction: feature extraction is a process in machine learning and signal processing where raw data is transformed into a set of meaningful features that can be used as input to a machine learning algorithm. These features are representations of specific characteristics or patterns present in the data that are relevant to the task at hand. Feature extraction helps to reduce the dimensionality of the data, focusing on the most important aspects and improving the performance of machine learning algorithms by providing them with more relevant and discriminative information. This process is commonly used in tasks such as image recognition, natural language processing, and audio signal processing.
  • sentiment-analysis: sentiment analysis is the process of determining and categorizing the emotional tone or sentiment expressed within a piece of text.
  • ner: NER stands for Named Entity Recognition, which is a natural language processing task that involves identifying and categorizing named entities within text into predefined categories such as names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc.
  • question-answering: question answering in machine learning is the task of automatically generating accurate responses to natural language questions posed by users based on a given context or knowledge base.
  • fill-mask: Fill-mask is a natural language processing task where a model is trained to predict a masked word or phrase in a sentence, often used in transformer-based language models like BERT for tasks such as text completion or filling in missing information.
  • summarization: summarization is the process of condensing a longer piece of text into a shorter version while retaining its key information and meaning.
  • translation: refers to the process of converting text from one language (xx) to another language (yy).
  • text-generation: Text generation is the automated process of producing coherent and contextually relevant textual content using machine learning models or algorithms.

This means that you can select one of the tasks mentioned above, and Transformers PHP will download (and cache) the appropriate model locally (according to the selected task). Once the model is downloaded (to the cache directory defined via the setCacheDir() method of the Transformer class), you can execute the script multiple times without an internet connection and without needing to call any APIs.
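Besides the task name, pipeline() also accepts a model name as its second argument, so you can pin a specific ONNX model from the Hugging Face hub instead of the task's default. A sketch (the model name below is an example from the Xenova ONNX collection and is an assumption, not a requirement):

```php
<?php

require './vendor/autoload.php';

use Codewithkyrian\Transformers\Transformers;
use function Codewithkyrian\Transformers\Pipelines\pipeline;

Transformers::setup()->setCacheDir('./models')->apply();

// Pin an explicit model instead of the task's default.
// 'Xenova/distilbert-base-uncased-finetuned-sst-2-english' is an example
// ONNX model name; any compatible model from the hub should work here.
$classifier = pipeline(
    'sentiment-analysis',
    'Xenova/distilbert-base-uncased-finetuned-sst-2-english'
);

$result = $classifier('Transformers PHP makes this surprisingly easy.');
echo $result['label'] . ' (' . $result['score'] . ')' . PHP_EOL;
```

The same pattern works for other tasks, for example a translation pipeline with a specific multilingual model.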

Another example: Image-to-Text Functionality

In addition to its robust text processing capabilities, Transformers PHP simplifies image-to-text processing by providing a straightforward interface for utilizing pre-trained models.
With just a few lines of code, you can generate text descriptions. Let's take a look at a basic example:

<?php

// 001 requiring the autoload file from vendor
require './vendor/autoload.php';

// 002 importing the Transformers class
use Codewithkyrian\Transformers\Transformers;

// 003 importing the pipeline function
use function Codewithkyrian\Transformers\Pipelines\pipeline;

// 004 initializing the Transformers class setting the cache directory for models
Transformers::setup()->setCacheDir('./models')->apply();
// 005 initializing a pipeline for image-to-text
$pipeline = pipeline('image-to-text');

// 006 executing the image to text task
$result = $pipeline(
'https://a.storyblok.com/f/165058/4758x3172/7b1727dcf9/tiffany-nutt-0clfreinppm-unsplash.jpg/m/800x1400:4010x3010'
);

echo $result[0]["generated_text"] . PHP_EOL;


With this example, the output is:

a bicycle is parked on a sidewalk near a wall

This feature is very useful for generating text descriptions of images for visually impaired users, for generating captions in content workflows, and for improving SEO in website development.

So, with Transformers PHP, you can effectively manage and generate textual content as well as process images, making it a versatile tool for various applications.
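As a small illustration of the accessibility use case, a generated caption can feed straight into an img alt attribute. This is plain PHP, with the caption hard-coded here in place of a real pipeline call:

```php
<?php

// Build an <img> tag whose alt text comes from a generated caption.
// In a real application, $caption would come from the image-to-text pipeline.
function imgWithAlt(string $src, string $caption): string
{
    return sprintf(
        '<img src="%s" alt="%s">',
        htmlspecialchars($src, ENT_QUOTES),
        htmlspecialchars($caption, ENT_QUOTES)
    );
}

$caption = 'a bicycle is parked on a sidewalk near a wall';
echo imgWithAlt('/images/bike.jpg', $caption) . PHP_EOL;
```

Escaping with htmlspecialchars keeps the markup safe even if a model ever produces quotes or angle brackets in a caption.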

Top comments (4)

selase

Nicely written. The following will be of particular interest to me:

  • What else can we do with this?
  • Can we use other models, and how do we swap them?
  • Some introduction to Hugging Face and how to use it

Thank you.

Roberto B.

Thank you for the feedback; there is room for an additional article.
But to answer you:

  • What else you can do: there is a list of supported tasks at github.com/CodeWithKyrian/transfor...
  • You can use other models by passing a second parameter to the pipeline() function, for example: $translator = pipeline('translation', 'Xenova/m2m100_418M');
Ivica Pesovski

I get this when trying to execute:
Unable to open "./models/Xenova/vit-gpt2-image-captioning/config.json.part1" using mode "w": fopen(./models/Xenova/vit-gpt2-image-captioning/config.json.part1): Failed to open stream: No such file or directory

Understandably, the models folder is empty. Can you update the article with instructions where to find these models and how to import them?

Roberto B.

Hi @pesovski , thank you for the feedback.
It is a bit strange because (quoting the official README):

"By default, TransformersPHP automatically retrieves model weights (ONNX format) from the Hugging Face model hub when you first use a pipeline or pretrained model. " . Source github.com/CodeWithKyrian/transfor...

So, the model should be automatically downloaded at the first run.
But in any case, I will add instructions for manually downloading the models.

Thank you again for the feedback, really appreciated.

Roberto