Olorundara Akojede (dvrvsimi)

Building on the Edge: A How-To Guide on Object Detection with Edge Impulse

In the fast-evolving landscape of technology, edge devices have emerged as the unsung heroes, bringing innovation closer to humans than ever before. From smart homes that make living easier to self-driving cars, these gadgets have redefined convenience and connectivity in ways that we could have never conceived.

At the heart of this disruptive transformation lies object detection, a dynamic field of machine learning that equips machines with the ability to understand and infer from their visual surroundings.

This article walks you through building your own object detection model: a simple face mask detection project on Edge Impulse.

Prerequisite Knowledge:
Readers should have basic knowledge of machine learning concepts. Some understanding of IoT is an added advantage.

Requirements:
You will need:

  • an Edge Impulse account
  • a Computer
  • a Kaggle account (optional)
  • a Phone (optional)

Introduction to Edge Computing

As you may already know, edge computing brings machine learning capabilities directly to edge devices, enabling real-time processing and decision-making without relying on the cloud. This means that, in most cases, your data stays with you. Unlike the traditional machine learning cycle, building for edge devices requires more iterative processes before models can be optimally deployed. Certain trade-offs have to be made depending on the model's intended use, and it is usually a contest between the model's size and compute time versus the model's performance.

There are various optimization techniques that are adopted for edge deployment and for compressing learning algorithms in general, some of them include:

  • Quantization: reduces the precision of the model's parameters from 32-bit floating-point values to 8-bit integer (int8) values with minimal loss of accuracy.
  • Pruning: reduces the size of a model (or decision tree) by removing sections that are non-critical or redundant.
  • Distillation: transfers knowledge from a large model (teacher) to a smaller one (student).
  • Decomposition: uses smaller matrices or vectors in place of larger matrices while still retaining as much information as the original matrix.
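To make the first technique concrete, here is a minimal sketch of affine int8 quantization in Python. The function names and the per-tensor scheme are illustrative assumptions; frameworks such as TensorFlow Lite compute the scale and zero point for you, often per channel.

```python
import numpy as np

def quantize_int8(weights):
    # Affine (asymmetric) per-tensor quantization: map the float range
    # [min, max] linearly onto the int8 range [-128, 127].
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against a zero range
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    # Recover approximate float values; the error is at most one step (scale).
    return (q.astype(np.float32) - zero_point) * scale
```

Each weight now occupies 1 byte instead of 4, a 4x size reduction, at the cost of a rounding error no larger than one quantization step.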

See this article for an extensive explanation of the above concepts; you can also check out this presentation by Kenechi, which simplifies the listed techniques and covers other relevant details.

What is Edge Impulse?

Edge Impulse is a leading development platform for machine learning on edge devices. It lets you carry out everything from data collection to deployment and everything in between.
Whether you are a developer looking to build models for your development board or an enterprise looking to deploy accurate AI solutions faster on practically any edge device, Edge Impulse has you covered!

You can get around the platform easily and build projects in simple steps - build your dataset, train your model, test your model, and deploy it!
If you have done some form of ML before, these steps should be familiar to you.

Getting Started with Edge Impulse

Head over to https://edgeimpulse.com and create a free Developer account, or choose the Enterprise account if you intend to use Edge Impulse for business purposes.

Next, click the + Create new project button from the dropdown on the top-right of the screen and name your project.
edge impulse - create new project

Acquiring Data

To build a model that can tell when it sees a face mask, you need to show it a variety of face mask images. Whether you have a face mask and would like to create your own dataset from scratch, or you would rather use a public dataset to train your model, Edge Impulse provides data acquisition channels for both options.

Scroll down on the Dashboard page and click on the channel you'd like to use. Depending on the application, it is advisable to collect data through both channels to improve your model's accuracy.

edge impulse - dataset

When adding existing data, you can either upload the data (as individual files or as a folder) or connect a storage bucket. You can find and download face mask datasets here.

To collect your own data, you have three options: your computer, your phone, and/or a development board. To use your phone, simply scan the QR code to load the data acquisition page on your mobile device; you can then collect data with your phone's camera.

Note that you can only use your back camera.

See the list of Edge Impulse-supported development boards.

Preparing Data

Once you start uploading or collecting data, you will see a pop-up asking if you want to build an object detection project. Scroll down to Project info on your Dashboard and change the Labeling method to Bounding boxes (object detection).

edge impulse - bounding box
After changing the Labeling method, you will notice a new section called Labeling queue with a number in parentheses; that number represents the number of images collected or uploaded.

Just like every drag-and-release operation you've done with your mouse or trackpad, select the region that contains the face mask and click Save labels to save that entry. Repeat this process for all your samples.

If you decide that you don't want to train your model with some samples, you can simply click Delete sample to remove them as you go through your dataset.

labeling

Still on the Labeling queue page, you will notice a dropdown menu called Label suggestions. If you are detecting common objects like cars, animals, or even faces, you can switch to the YOLOv5 option; it saves you the stress of drawing bounding boxes manually. Track objects between frames works best if you collected the data as a series in the same session; it finds patterns and predicts where the bounding box should be in the next sample.
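The idea behind Track objects between frames can be sketched with a simple constant-velocity assumption. This is a deliberate simplification for intuition only, not Edge Impulse's actual implementation:

```python
def predict_next_box(prev_box, curr_box):
    # Each box is (x, y, w, h) with (x, y) as the top-left corner.
    # Assume the object moves at the same velocity between consecutive
    # frames and keeps the same size.
    dx = curr_box[0] - prev_box[0]
    dy = curr_box[1] - prev_box[1]
    return (curr_box[0] + dx, curr_box[1] + dy, curr_box[2], curr_box[3])
```

For example, a mask that moved from x=5 to x=10 between two frames is suggested at x=15 in the next one, so the proposed box usually needs only a small manual correction.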

Creating an Impulse

On the left menu, click on Impulse design, this is where you create the pipeline from input to processing to output.

creating an impulse
Click Add an input block and add Images; you can change the Resize mode however you see fit. For Add a processing block, add Image, and finally, add the Object detection (Images) learning block authored by Edge Impulse, making sure to tick the Image box. Click Save Impulse to save your configuration.

saving impulse

Generating Features

For each block you add, a new option is created under Impulse design. Click Image to explore how you can generate features from the samples.

It is advisable to change your colour depth from RGB to Grayscale; it significantly improves your model's accuracy if you decide to use FOMO, which is covered later in this article.

saving parameters

Click on Save parameters to go to the Generate features section and click Generate features.

features generation

Once you see the Job completed status, hover back to the left menu and click on Object detection. It's finally time to train your model!

Training your model

Tweak the hyperparameters to your preference and choose your model.

hyperparams

Regarding model selection: you can see a mini info card for each model when you click Choose a different model. MobileNetV2 should do just fine for this use case, but if you read through the info cards, you'll deduce that FOMO is a better option.

Fear Of Missing Out?

Anyone who hears or reads it for the first time might naturally think so, but FOMO (Faster Objects, More Objects) is a novel solution developed at Edge Impulse for more efficient object detection on highly constrained devices.

Apart from its small footprint, it is also very fast and works on a wide range of boards. Check the FOMO documentation here to learn more and see how people are using it in interesting ways.
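Instead of bounding boxes, FOMO predicts a coarse per-cell probability grid and reports object centroids. A toy sketch of that post-processing step is below; the cell size and thresholding scheme are simplified assumptions, and the real implementation also merges adjacent activated cells into one object:

```python
import numpy as np

def fomo_centroids(heatmap, threshold=0.5, cell=8):
    # heatmap: grid of per-cell probabilities for a single class, at
    # 1/cell of the input resolution. Returns (x, y) centroid positions
    # in input-image pixel coordinates.
    ys, xs = np.where(heatmap > threshold)
    return [((x + 0.5) * cell, (y + 0.5) * cell) for x, y in zip(xs, ys)]
```

This is why FOMO is so light: the network only has to classify a small grid of cells, not regress box coordinates.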

To use FOMO, ensure that you set the learning rate to 0.001 before training your model.

When you have adjusted the hyperparameters and selected the model of your choice, click Start training and wait for training to complete.

Evaluating your model

After training, you should see an interface like the one below; it is a summary of the model's performance on the training data. Let's evaluate the model against new data.
performance

Notice how you can switch the model's version from int8 to float32.

On the left menu, click Model testing to evaluate your model against your test data (remember that if you didn't specify a train/test ratio, Edge Impulse automatically splits the data 80:20). Click Classify all and wait for the results.
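Conceptually, that automatic split is just a shuffled 80:20 partition, along these lines (a simplified illustration, not Edge Impulse's actual code):

```python
import random

def train_test_split(samples, test_ratio=0.2, seed=42):
    # Shuffle a copy deterministically, then cut at the ratio boundary.
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]
```

Keeping the test set out of training is what makes Model testing a fair estimate of how the model handles unseen faces.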

test data
If you used a diverse dataset, your number should be higher than the one in the image above. Diverse in the sense that your model should learn from face mask images in different colours, lighting conditions, backgrounds, and angles. You probably also want to include people with different skin colours so you don't end up with a biased model.

Inferencing

Now you can perform face mask detection in real time. Go back to your Dashboard and click the Launch in browser button, just below the QR code. Ensure that your computer has a built-in or external webcam; if it doesn't, you can scan the QR code with your phone camera and switch to Classification mode.

On the newly opened tab, a mini web app for inferencing will be built. When the build is complete, you will be asked to grant access to your webcam. Happy inferencing!

What next?

You have learned how to build a simple object detection model on Edge Impulse, but it gets more interesting when you start integrating hardware.

A good application of this project would be an automatic door system that won't open for a person who isn't wearing a face mask. On an Arduino board, the .ino code* would be something like this:

const float softmax_threshold = 0.6;  // depending on your model performance

if (softmax_prediction > softmax_threshold) {
  Serial.println("Welcome");
  digitalWrite(door_gear, HIGH);
} else {
  Serial.println("No entry without face mask");
  digitalWrite(door_gear, LOW);
}

*This is nowhere near complete or accurate and is just meant to give you an idea. A complete sketch would include the #include directives for the required drivers and #define statements for the individual hardware pin configurations, but that is outside the scope of this article.

Continue exploring Edge Impulse's capabilities by building more advanced models and trying your hands on hardware, the documentation is all there for you.

You can also explore other machine learning applications for edge computing.

Until next time, Tschüss!
