Vincent

Filter Image Uploads According to their NSFW Score

When developing, and later administering, a web or mobile application that deals with a lot of image uploads, shares, likes and so forth, you will want to set up a smart, automated mechanism that moderates your users' image, GIF or video uploads on the fly and acts accordingly when NSFW content is detected, such as rejecting, flagging or even censoring (i.e. applying a blur filter to) the suspect picture or video frame.

Since images and user-generated content dominate the Internet today, filtering NSFW content has become an essential component of web and mobile applications. NSFW is the acronym for "not safe for work" and refers to images or GIF frames that contain adult content, violence or gory details.

At the scale of the modern web, it is impractical to rely on a human operator to moderate each image upload one by one. Instead, Computer Vision, a field of Artificial Intelligence, is used for such a task. Computers are now able to automatically classify NSFW image content with high precision and speed.

In this post, you will learn how to use the PixLab API to detect and filter unwanted content (GIFs included) and to make decisions based on the score returned by PixLab, such as blurring or deleting the image in question.

The PixLab API

PixLab is a Machine Learning SaaS platform which offers Computer Vision and Media Processing APIs, either via a straightforward HTTP RESTful API or offline via the SOD Embedded CV library SDK. The PixLab HTTP API feature set includes, but is not limited to:

  • Over 130 Machine Vision & Media Processing API endpoints.
  • State-of-the-art document scan algorithms for passports and ID cards via the /docscan API endpoint, face detection (/facedetect), facial landmarks extraction (/facelandmarks), facial recognition (/facecompare), NSFW content analysis (/nsfw) and many more.
  • 1TB of media storage served from a highly available CDN over SSL.
  • On the fly image compression, encryption and tagging.
  • Proxy for AWS S3 and other cloud storage providers.

Invoking the NSFW Endpoint

According to the PixLab documentation, the purpose of the NSFW endpoint is to detect not-suitable-for-work (i.e. nudity & adult) content in a given image or video frame. NSFW is of particular interest when mixed with media processing API endpoints like blur, encrypt or mogrify, to censor images on the fly according to their NSFW score. This can help the developer automate tasks such as filtering user uploads.

HTTP Methods

The NSFW endpoint supports both GET and POST HTTP methods, which means that you can either send a direct link (public URL) to the target image via GET, or upload the image directly from your HTML form or web app to the NSFW endpoint via POST. POST is the more flexible of the two and supports two content types: multipart/form-data and application/json, as sketched below.
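
For instance, a direct upload from Python could look like the following minimal sketch. Note that the multipart field names used here ('file' for the image and 'key' for the API key) are assumptions for illustration; check the PixLab documentation for the exact parameter names expected by the endpoint.

import requests

key = 'PIXLAB_API_KEY'

# Upload a local image to the NSFW endpoint via multipart/form-data.
# The 'file' form field name is an assumption; consult the PixLab docs.
with open('upload.jpg', 'rb') as f:
    req = requests.post(
        'https://api.pixlab.io/nsfw',
        files={'file': f},
        data={'key': key}
    )
print(req.json())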

API Response

The NSFW API endpoint always returns a JSON object (i.e. application/json) for each HTTP request, whether successful or not. The following fields are returned in the response body:

  • status
  • score
  • error

The field of interest here is the score value. The closer this value is to 1, the more likely the picture is NSFW. In that case, you have to act accordingly, such as rejecting the picture or, better, applying a blur filter (see the example below) to the picture in question.
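
For illustration only (the values here are made up), a successful response for a highly NSFW image could look like this:

{
    "status": 200,
    "score": 0.89
}

On failure, status carries a non-200 code and error holds a human-readable message, as the script below demonstrates.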

Real World Code Sample

Given a freshly uploaded image, first perform nudity & adult content detection, and if the NSFW score is high enough, apply a blur filter to the target picture. A typical (highly NSFW) blurred image should look like the following after processing:

Censored NSFW Picture

The blurred image above was obtained via the following Python script:



import requests

# Target image: change to any link (possibly adult) you want, or switch
# to POST if you want to upload your image directly. Refer to the sample
# set for more info.
img = 'https://i.redd.it/oetdn9wc13by.jpg'
# Your PixLab API key that you can obtain from https://pixlab.io/dashboard
key = 'PIXLAB_API_KEY'

# Censor an image based on its NSFW score
req = requests.get('https://api.pixlab.io/nsfw', params={'img': img, 'key': key})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
elif reply['score'] < 0.5:
    print("No adult content was detected in this picture")
else:
    # Highly NSFW picture
    print("Censoring NSFW picture...")
    # Call blur with the highest possible radius and sigma
    req = requests.get('https://api.pixlab.io/blur', params={'img': img, 'key': key, 'rad': 50, 'sig': 30})
    reply = req.json()
    if reply['status'] != 200:
        print(reply['error'])
    else:
        print("Blurred Picture URL: " + reply['link'])


  • A PHP code sample is also available here.

The code above is self-explanatory and, regardless of the programming language, the logic is always the same. We make a simple HTTP GET request with the input image URL and our API key as parameters. Most PixLab endpoints support multiple HTTP methods, so you can easily switch to POST-based requests if you want to upload your images & videos directly from your mobile or web app for analysis. Back to our sample: only two API endpoints are needed for our moderation task:

  1. First, NSFW is the analysis endpoint and must be called first. It performs nudity & adult content detection and returns a score value between 0 and 1. The closer this value is to 1, the more likely the picture/frame is NSFW.
  2. The Blur endpoint is called afterwards, only if the NSFW score returned earlier is greater than a certain threshold; in our case, it is set to 0.5. The blur endpoint returns a direct link (URL) to the blurred image, which is stored on the PixLab storage cluster, or on your own AWS S3 bucket if you connect your bucket (i.e. your S3 access/secret keys) from the PixLab dashboard. This feature gives you full control over your processed (i.e. censored) images. A reusable wrapper for this two-step flow is sketched below.
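
If you plan to reuse this flow across an application, the two calls can be folded into a single helper. The sketch below follows the same endpoints, parameters and 0.5 threshold as the script above; the moderate_image function and NSFW_THRESHOLD constant are names introduced here for illustration only.

import requests

PIXLAB_KEY = 'PIXLAB_API_KEY'
NSFW_THRESHOLD = 0.5  # same cut-off as the sample above; tune to taste

def moderate_image(img_url):
    # Step 1: score the image via the NSFW analysis endpoint.
    reply = requests.get('https://api.pixlab.io/nsfw',
                         params={'img': img_url, 'key': PIXLAB_KEY}).json()
    if reply['status'] != 200:
        raise RuntimeError(reply['error'])
    if reply['score'] < NSFW_THRESHOLD:
        return img_url  # Safe: serve the original image as-is.
    # Step 2: highly NSFW, so request a heavily blurred copy.
    reply = requests.get('https://api.pixlab.io/blur',
                         params={'img': img_url, 'key': PIXLAB_KEY,
                                 'rad': 50, 'sig': 30}).json()
    if reply['status'] != 200:
        raise RuntimeError(reply['error'])
    return reply['link']  # URL of the censored copy.

# Example usage:
# safe_url = moderate_image('https://example.com/some-upload.jpg')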

PHP/Python Gist

Conclusion

Automatically filtering unwanted content is surprisingly straightforward, even for the average web developer or site administrator without machine learning expertise, thanks to open computer vision technologies such as the one provided by PixLab. Find more code samples at https://github.com/symisc/pixlab, and check out https://github.com/symisc/sod if you are a C/C++ developer.

Top comments (4)

Vesi Staneva

Thank you for sharing this post!👍

My team just completed an open-source Content Moderation Service built with Node.js, TensorFlowJS, and ReactJS that we have been working on over the past weeks. We have now released the first part of a series of three tutorials, How to create an NSFW Image Classification REST API, and we would love to hear your feedback (no ML experience needed to get it working). Any comments & suggestions are more than welcome. Thanks in advance!
(Fork it on GitHub or click🌟star to support us and stay connected🙌)

Vincent

A home-grown solution like the one you built is not recommended for moderation tasks. At the scale of the modern web, you should rely on battle-tested APIs, such as the one provided by PixLab, for content moderation tasks (NSFW detection API).

Vesi Staneva

What are the pros and cons of each solution in your opinion?

haggenmatt

That was a nice read!