Facial Recognition Attendance System with TensorFlow.js

Introduction

This article has been in my drafts for so long that I have decided to just write it. Around the year 2020, a friend tasked me with building a fingerprint attendance system where students could take attendance by scanning their fingers on a system. Long story short, COVID happened, and everyone became wary of touching shared surfaces. Consequently, the idea pivoted to a facial recognition system instead of fingerprint scanning. The only problem with this whole project was that I had no idea what I was doing. I had never built an attendance system, nor had I worked with a fingerprint scanner or a facial recognition system before.

Problem Statement

Generally, the way I tackle problems in life or programming is by breaking them down into bits. I start by asking myself the following questions: What is the problem? What is the solution? What are the required steps to achieve the proposed solution? Are there any blockers? Are there alternate solutions?

After analyzing the Facial Recognition Attendance System, I found out that at its core, we just needed it to do three things. Firstly, it should be able to recognize faces. Secondly, it should save these faces and assign a unique identifier to them. Lastly, it should compare these stored faces to keep track of registered faces and mark them as absent or present.

The Ability to Recognize Faces

This project was undertaken before ChatGPT was popular, so when I needed an answer to a particular problem, I would turn to my trusty old friends, Google and Stack Overflow. It was while I was browsing through these platforms that I stumbled upon a library called "TensorFlow.js." It was the answer to my prayers, making it easy as pie to work with machine learning models using JavaScript. But after playing with it for a while and coming to the realization that this might not be as easy as pie after all, I went back to the internet for an easier solution. It was then that I found it, staring right at me, waiting for me to come and pick it up. It whispered into my ears, "Anthony, I know you have been looking for me." Its name was face-api.js, and it was built on top of TensorFlow.js. As the name implied, it specialized in face detection and face recognition, and it was everything I needed to make this facial attendance system a reality.

Registration of faces

Looking at the code now, I feel like I need to edit some things because this thing is so ugly, but I actually don't have time. So, if you look at the code below and you feel I wrote rubbish, you are right, but it works.

<template>
    <DefaultLayout>
        <div v-if="globalState.registerState.value == 0">
            <h4 id="alert" class="shadow-2xl w-auto mx-auto outline"></h4>
            <div class="web-cam">
                <video id="video" ref="video" width="500" height="500" autoplay muted class=""></video>
            </div>
            <div id="camera" ref="camera" class="h-auto w-auto text-center hidden"></div>
            <br>
            <p id="snapShot"></p>
        </div>
        <SavePerson v-else/>
    </DefaultLayout>
</template>

<script setup>
import SavePerson from '../components/SavePerson.vue'
import DefaultLayout from '../layout/defaultLayout.vue'
import {  onMounted, onUpdated } from 'vue'
import {mountWebcam} from '../composibles/useWebcam'
import {loadModels} from '../composibles/useFaceapi'
import {ScanFace} from '../composibles/useVideo'
import {globalState} from '../composibles/useState'


onUpdated(()=>{
    globalState.registerState.value = 0
})
onMounted(mountWebcam)
onMounted(loadModels)
onMounted(ScanFace)

</script>

So let's go through what the above code does:

  • First, we "mountWebcam." This initializes the device's camera and ensures it's ready to capture your beautiful faces. link to file
  • Then, we "loadModels." For face-api.js to work, we need some relatively large models loaded first. Ideally, I cache these models in the browser the first time the page is viewed so that subsequent visits load as fast as possible (a rough sketch of both of these composables follows this list). link to file
  • Finally, we scan the user's face, and once that has been done, we show the SavePerson component, where we get the user's face and save it to the database, in this case, localStorage. link to file
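
Neither loadModels nor mountWebcam is shown in this post (both live in the linked repo), but here is a minimal sketch of what they might look like, assuming the face-api.js model files are served from a /models folder and getUserMedia is used for the camera; the paths and the exact set of models here are assumptions:

// composibles/useFaceapi.js (sketch, not the exact file from the repo)
import * as faceapi from 'face-api.js'

export const loadModels = async () => {
    // The detector, landmark, expression and recognition nets are all used later,
    // so load them up front; the browser's HTTP cache keeps subsequent visits fast.
    await Promise.all([
        faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
        faceapi.nets.ssdMobilenetv1.loadFromUri('/models'),
        faceapi.nets.faceLandmark68Net.loadFromUri('/models'),
        faceapi.nets.faceExpressionNet.loadFromUri('/models'),
        faceapi.nets.faceRecognitionNet.loadFromUri('/models'),
    ])
}

// composibles/useWebcam.js (sketch, not the exact file from the repo)
export const mountWebcam = async () => {
    // Ask the browser for camera access and pipe the stream into the <video> element
    const stream = await navigator.mediaDevices.getUserMedia({ video: true })
    document.querySelector('#video').srcObject = stream
}

With those in place, here is the ScanFace composable from the last bullet: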
export const ScanFace = () => {
    const video = document.querySelector('#video')
    let recog
    video.addEventListener('play', () => {
        const canvas = faceapi.createCanvasFromMedia(video)

        document.body.append(canvas)
        const displaySize = { width: video.width, height: video.height }
        faceapi.matchDimensions(canvas, displaySize)
        recog = setInterval(async () => {
            const count = globalState.capturedFaces.value.length
            try {
                const detections = await faceapi.detectSingleFace(video, new faceapi.TinyFaceDetectorOptions()).withFaceLandmarks().withFaceExpressions()
                const resizedDetections = faceapi.resizeResults(detections, displaySize)
                canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)
                faceapi.draw.drawDetections(canvas, resizedDetections)
                faceapi.draw.drawFaceLandmarks(canvas, resizedDetections)
                if (resizedDetections) {
                    if(count < 3)saveFace(resizedDetections)
                    document.querySelector('#alert').style.color = 'green';
                    document.querySelector('#alert').innerHTML = `Face found, Captured ${count} out of 3`
                    if (count >= 3) {
                        await clearInterval(recog)
                        canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)
                        globalState.registerState.value++
                    }
                } 
            } catch {
                if (count < 3) {
                    document.querySelector('#alert').style.color= 'red';
                    document.querySelector('#alert').innerHTML = `Can't find a Face, Please Adjust <br>  Captured ${count} out of 3`
                }

            }


        }, 1000)
    });   
} 

The provided code defines a function called ScanFace which sets up face detection and tracking on a video feed using the face-api.js library. Here's a step-by-step explanation of what the code does:

  1. Initialization:

    • const video = document.querySelector('#video'): Selects the video element from the DOM where the video feed will be displayed.
    • let recog: Declares a variable recog which will be used to store the interval ID for the face detection loop.
  2. Event Listener for Video Play:

    • video.addEventListener('play', () => { ... }): Sets up an event listener that triggers when the video starts playing.
  3. Canvas Setup:

    • const canvas = faceapi.createCanvasFromMedia(video): Creates a canvas element from the video feed using the face-api.js library.
    • document.body.append(canvas): Appends the canvas to the body of the document.
    • const displaySize = { width: video.width, height: video.height }: Sets the display size of the canvas to match the video element.
    • faceapi.matchDimensions(canvas, displaySize): Matches the canvas dimensions to the display size.
  4. Face Detection Loop:

    • recog = setInterval(async () => { ... }, 1000): Sets up an interval that runs every second to perform face detection.
  5. Face Detection Logic:

    • const count = globalState.capturedFaces.value.length: Gets the current count of captured faces from the global state.
    • try { ... } catch { ... }: A try-catch block to handle face detection and update the UI based on the result.
  6. Face Detection Success:

    • const detections = await faceapi.detectSingleFace(video, new faceapi.TinyFaceDetectorOptions()).withFaceLandmarks().withFaceExpressions(): Detects a single face in the video feed along with its landmarks and expressions.
    • const resizedDetections = faceapi.resizeResults(detections, displaySize): Resizes the detection results to match the display size.
    • canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height): Clears the canvas.
    • faceapi.draw.drawDetections(canvas, resizedDetections): Draws the face detections on the canvas.
    • faceapi.draw.drawFaceLandmarks(canvas, resizedDetections): Draws the face landmarks on the canvas.
    • if (resizedDetections) { ... }: If a face is detected, performs the following actions:
      • If the count of captured faces is less than 3, calls saveFace(resizedDetections) to save the detected face.
      • Updates the alert message to indicate a face has been found and how many faces have been captured.
      • If the count of captured faces is 3 or more, stops the interval and clears the canvas, then increments the registerState value in the global state.
  7. Face Detection Failure:

    • If no face is detected, updates the alert message to indicate that a face cannot be found and suggests adjusting the position. It also shows how many faces have been captured so far.

In summary, this code continuously checks the video feed for a face, draws detection results on a canvas, and saves up to three detected faces. It provides user feedback through alert messages and stops once three faces have been successfully captured.
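
One helper the loop calls but doesn't define is saveFace, which lives in the useState composable in the repo. Here is a minimal sketch of what it might do; the field names (img, detection, mood) mirror what the SavePerson template reads below, but the implementation itself is an assumption:

// composibles/useState.js (sketch, not the exact file from the repo)
export const saveFace = (detection) => {
    // Webcam.snap comes from the webcam.js library (the same call SnapFace uses later)
    // and hands back a base64 data URI of the current video frame
    Webcam.snap((dataUri) => {
        globalState.capturedFaces.value.push({
            img: dataUri,                                    // snapshot shown on the SavePerson screen
            detection: detection.detection.score.toFixed(2), // detection confidence
            mood: Object.keys(detection.expressions)         // most likely facial expression
                .reduce((a, b) => (detection.expressions[a] > detection.expressions[b] ? a : b)),
        })
    })
}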

After three faces of the same user have been captured, we show the SavePerson component and collect more information about the current user, like their name.

<template>
    <form class="grid place-items-center place-content-center mt-24 w-full"     @submit.prevent="save">
        <img :src="face.img" alt="userFace"  class="rounded-full w-72 h-72 border border-indigo-600 object-cover">
        <div class="border border-indigo-600 w-full p-5 max-w-[40rem] h-auto overflow-hidden md:text-lg rounded-md shadow-xl  flex flex-col justify-center items-center mt-5">
            Name: <input type="text" v-model="globalState.CapturedUserName.value" placeholder="Input your name" class="p-2 py-1 border-2 border-indigo-600 rounded bg-transparent z-50" required>
        </div>
        <div class="border border-indigo-600 w-full p-5 max-w-[40rem] h-auto overflow-hidden md:text-lg rounded-md shadow-xl  flex flex-col justify-center items-center mt-5">
            Face Certainty : {{face.detection}}
        </div>
        <div class="border border-indigo-600 w-full p-5 max-w-[40rem] h-auto overflow-hidden md:text-lg rounded-md shadow-xl  flex flex-col justify-center items-center mt-5">
            User's Mood : {{face.mood}}
        </div>
        <button
            type="submit"

            class="text-center w-full px-6 py-3 mt-3 text-lg text-white bg-indigo-600 rounded-md sm:mb-0 hover:bg-indigo-700 "
        >
            Register
        </button>
    </form>
</template>

<script setup>
import { highestDetection, globalState, saveCapturedUser } from '../composibles/useState';
import { useRouter } from 'vue-router';

const router = useRouter()
const face = highestDetection()
const save = async()=>{
    await saveCapturedUser()
    router.push('/')
}
</script>
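The save handler in that component calls saveCapturedUser before redirecting home. It isn't shown in the post either, so here is a minimal sketch, assuming registered users are kept in a savedUsers array persisted to localStorage; the key name and object shape are assumptions, although name and date are the fields the attendance code reads later:

// composibles/useState.js (sketch, not the exact file from the repo)
export const saveCapturedUser = async () => {
    const savedUsers = JSON.parse(localStorage.getItem('savedUsers') || '[]')
    savedUsers.push({
        name: globalState.CapturedUserName.value, // name typed into the form above
        faces: globalState.capturedFaces.value,   // the three captured face snapshots
        date: [],                                 // attendance timestamps get pushed here later
    })
    localStorage.setItem('savedUsers', JSON.stringify(savedUsers))
    // reset the capture state so the next registration starts clean
    globalState.capturedFaces.value = []
}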

Taking attendance of registered users / faces

Now that we have successfully captured and saved a user, all that is left is to recognize that face and record a timestamp the next time it is shown to the system. Below is the code for the take attendance page link to the page

<template>
    <DefaultLayout>
        <div>
            <h4 id="alert" class="shadow-2xl w-auto mx-auto outline"></h4>
            <div class="web-cam flex justify-center items-center">
                <h2 class="mt-20 text-3xl absolute mx-auto">Loading...</h2>
                <video
                    id="video"
                    ref="video"
                    width="500"
                    height="500"
                    autoplay
                    muted
                    class="z-50"
                ></video>
            </div>
            <div
                id="camera"
                ref="camera"
                class="h-auto w-auto text-center hidden"
            ></div>
            <br />
            <p id="snapShot"></p>
        </div>
    </DefaultLayout>
</template>

<script setup>
import DefaultLayout from '../layout/defaultLayout.vue';
import { onMounted, onUpdated } from 'vue';
import { mountWebcam } from '../composibles/useWebcam';
import { loadModels } from '../composibles/useFaceapi';
import { SnapFace } from '../composibles/useAttendance';
import { globalState } from '../composibles/useState';

onMounted(() => {
    globalState.registerState.value = 0;
});
onMounted(mountWebcam);
onMounted(loadModels);
onMounted(SnapFace);
</script>

If you noticed, the above code is very similar to the original register face / user code, the only difference being that instead of the "ScanFace" function we had in register, we have the "SnapFace" function here link to file

export const SnapFace = () => {
    const router = useRouter()
    const video = document.querySelector('#video')
    let recog
    video.addEventListener('play', () => {
        const canvas = faceapi.createCanvasFromMedia(video)
        document.body.append(canvas)
        const displaySize = { width: video.width, height: video.height }
        faceapi.matchDimensions(canvas, displaySize)
        recog = setInterval(async () => {
            try {
                const detections = await faceapi.detectSingleFace(video, new faceapi.TinyFaceDetectorOptions()).withFaceLandmarks().withFaceExpressions()
                const resizedDetections = faceapi.resizeResults(detections, displaySize)
                canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)
                faceapi.draw.drawDetections(canvas, resizedDetections)
                faceapi.draw.drawFaceLandmarks(canvas, resizedDetections)

                if (resizedDetections) {
                    if (resizedDetections.detection.score.toFixed(2) > 0.7) {
                        await clearInterval(recog)
                        canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)
                        document.querySelector('#alert').style.color = 'green';
                        document.querySelector('#alert').innerHTML = 'Face found'
                        Webcam.snap(function (data_uri) {
                            snappedFace = data_uri
                        });
                        await scanImg(router)

                    } else {
                        document.querySelector('#alert').style.color= 'red';
                        document.querySelector('#alert').innerHTML = 'Can\'t find a Face, Please Adjust'

                    }
                }

            } catch (err) {
                console.log('something went wrong ====', err )
            }
        }, 1000)
    });   
}

The SnapFace function uses the face-api.js library to detect faces in a video feed, capture an image of the detected face, and then trigger further actions such as scanning the image. Here's a detailed explanation of what the code does:

  1. Initialization:

    • const router = useRouter(): Initializes the router, likely for navigating to different routes in the application.
    • const video = document.querySelector('#video'): Selects the video element from the DOM where the video feed will be displayed.
    • let recog: Declares a variable recog which will be used to store the interval ID for the face detection loop.
  2. Event Listener for Video Play:

    • video.addEventListener('play', () => { ... }): Sets up an event listener that triggers when the video starts playing.
  3. Canvas Setup:

    • const canvas = faceapi.createCanvasFromMedia(video): Creates a canvas element from the video feed using the face-api.js library.
    • document.body.append(canvas): Appends the canvas to the body of the document.
    • const displaySize = { width: video.width, height: video.height }: Sets the display size of the canvas to match the video element.
    • faceapi.matchDimensions(canvas, displaySize): Matches the canvas dimensions to the display size.
  4. Face Detection Loop:

    • recog = setInterval(async () => { ... }, 1000): Sets up an interval that runs every second to perform face detection.
  5. Face Detection Logic:

    • try { ... } catch (err) { ... }: A try-catch block to handle face detection and log any errors.
  6. Face Detection Success:

    • const detections = await faceapi.detectSingleFace(video, new faceapi.TinyFaceDetectorOptions()).withFaceLandmarks().withFaceExpressions(): Detects a single face in the video feed along with its landmarks and expressions.
    • const resizedDetections = faceapi.resizeResults(detections, displaySize): Resizes the detection results to match the display size.
    • canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height): Clears the canvas.
    • faceapi.draw.drawDetections(canvas, resizedDetections): Draws the face detections on the canvas.
    • faceapi.draw.drawFaceLandmarks(canvas, resizedDetections): Draws the face landmarks on the canvas.
    • if (resizedDetections) { ... }: If a face is detected, performs the following actions:
      • if (resizedDetections.detection.score.toFixed(2) > 0.7): Checks if the detection score is greater than 0.7 (i.e., the detection confidence is high enough).
      • If the detection confidence is high:
      • Stops the interval and clears the canvas.
      • Updates the alert message to indicate a face has been found.
      • Captures an image of the detected face using the Webcam.snap function.
      • Calls the scanImg(router) function, which likely processes the captured image and triggers further actions.
  7. Face Detection Failure:

    • If the detection confidence is not high enough, updates the alert message to indicate that a face cannot be found and suggests adjusting the position.
  8. Error Handling:

    • Logs any errors that occur during the face detection process.

In summary, this code continuously checks the video feed for a face, draws detection results on a canvas, and captures an image of the face if the detection confidence is high. It provides user feedback through alert messages and triggers further processing of the captured image, which in this case means calling "scanImg".

Saving to localStorage or a database

const scanImg = async (router) => {

    const image = new Image()
    image.src = snappedFace

    // const image = snappedFace
    const container = document.createElement('div')
    container.style.position = 'relative'
    document.body.append(container)
    const labeledFaceDescriptors = await loadLabeledImages(router)

    const faceMatcher = new faceapi.FaceMatcher(labeledFaceDescriptors, 0.4)

    const canvas = await faceapi.createCanvasFromMedia(image)
    document.body.append(canvas)
    const displaySize = { width: 350, height: 265 }

    faceapi.matchDimensions(canvas, displaySize)
    const detection = await faceapi.detectAllFaces(image).withFaceLandmarks().withFaceDescriptors()
    if (detection.length < 1) {
        location.reload()
    } else { 
        const resizedDetections = faceapi.resizeResults(detection, displaySize)
        const results = resizedDetections.map((d) => faceMatcher.findBestMatch(d.descriptor))
        results.forEach((result, i) => {
            const box = resizedDetections[i].detection.box
            const drawBox = new faceapi.draw.DrawBox(box, { label: result.toString() })
            drawBox.draw(canvas)
            savedUsers.map(async (label, index) => { 
                if (label.name === result._label) {
                    savedUsers[index].date.push(`${new Date().toLocaleTimeString()} of ${new Date().toLocaleDateString()}`)
                    canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)
                    useAlert().openAlert(`Attendance has been taken for ${result._label}`)
                    router.push('/attendanceSheet')
                } 
            })
            // only flag an unknown user when the matcher couldn't put a name to the face
            if (result._label === 'unknown') {
                canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)
                useAlert().openAlert('Unknown User Found, go and register')
            }
        })
    }
}

The scanImg function is designed to process the captured image, detect faces, match them against known faces, and take appropriate actions based on the results. Here's a detailed explanation of what the code does:

  1. Image Initialization:

    • const image = new Image(): Creates a new image object.
    • image.src = snappedFace: Sets the source of the image to the captured face image stored in snappedFace.
  2. Container Setup:

    • const container = document.createElement('div'): Creates a new div element.
    • container.style.position = 'relative': Sets the position style of the container to relative.
    • document.body.append(container): Appends the container to the body of the document.
  3. Load Labeled Images:

    • const labeledFaceDescriptors = await loadLabeledImages(router): Loads labeled face descriptors using the loadLabeledImages function, which presumably fetches known faces for matching.
  4. Face Matcher Initialization:

    • const faceMatcher = new faceapi.FaceMatcher(labeledFaceDescriptors, 0.4): Initializes a face matcher with the labeled face descriptors and a threshold of 0.4.
  5. Canvas Setup:

    • const canvas = await faceapi.createCanvasFromMedia(image): Creates a canvas from the image using face-api.js.
    • document.body.append(canvas): Appends the canvas to the body of the document.
    • const displaySize = { width: 350, height: 265 }: Sets the display size for the canvas.
    • faceapi.matchDimensions(canvas, displaySize): Matches the canvas dimensions to the display size.
  6. Face Detection:

    • const detection = await faceapi.detectAllFaces(image).withFaceLandmarks().withFaceDescriptors(): Detects all faces in the image along with their landmarks and descriptors.
  7. No Face Detected:

    • if (detection.length < 1) { location.reload() }: If no face is detected, reloads the page.
  8. Face Detected:

    • const resizedDetections = faceapi.resizeResults(detection, displaySize): Resizes the detection results to match the display size.
    • const results = resizedDetections.map((d) => faceMatcher.findBestMatch(d.descriptor)): Maps each resized detection to the best matching known face using the face matcher.
    • results.forEach((result, i) => { ... }): Iterates over the results to take appropriate actions based on the match results.
  9. Action Based on Match Results:

    • For each result:
      • const box = resizedDetections[i].detection.box: Gets the bounding box of the detected face.
      • const drawBox = new faceapi.draw.DrawBox(box, { label: result.toString() }): Creates a box with a label for drawing.
      • drawBox.draw(canvas): Draws the box on the canvas.
      • savedUsers.map(async (label, index) => { ... }): Maps over the saved users to check if the detected face matches any known user.
      • If a match is found:
        • Updates the attendance record for the matched user.
        • Clears the canvas.
        • Opens an alert indicating that attendance has been taken for the user.
        • Redirects to the attendance sheet page using the router.
      • If no match is found:
        • Clears the canvas.
        • Opens an alert indicating that an unknown user was found and suggests registration.

In summary, the scanImg function processes the captured image, detects faces, matches them against known faces, and updates the attendance records if a match is found. It also provides user feedback and handles navigation based on the detection results.
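
The loadLabeledImages helper from step 3 is the other piece the post never shows. Assuming registered users are read back out of localStorage in the shape used in the saveCapturedUser sketch earlier, it might look roughly like this (the redirect target when nobody is registered yet is a guess):

// composibles/useAttendance.js (sketch, not the exact file from the repo)
const loadLabeledImages = async (router) => {
    const savedUsers = JSON.parse(localStorage.getItem('savedUsers') || '[]')
    if (!savedUsers.length) router.push('/register') // nothing to match against yet

    return Promise.all(
        savedUsers.map(async (user) => {
            const descriptors = []
            for (const face of user.faces) {
                // turn each stored data URI back into an image and compute its face descriptor
                const img = await faceapi.fetchImage(face.img)
                const detection = await faceapi.detectSingleFace(img).withFaceLandmarks().withFaceDescriptor()
                if (detection) descriptors.push(detection.descriptor)
            }
            // one LabeledFaceDescriptors per registered user, keyed by name,
            // which is exactly what faceapi.FaceMatcher expects
            return new faceapi.LabeledFaceDescriptors(user.name, descriptors)
        })
    )
}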

Viewing Attendance

Once scanImg is done running, it takes you to the view attendance page. You can also navigate to it from the homepage or by entering /attendanceSheet in the URL path.
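
For context, the navigation in these snippets assumes a standard Vue Router setup. Only / and /attendanceSheet actually appear in the code above, so the other paths and component names below are placeholders:

// router/index.js (sketch, not the exact file from the repo)
import { createRouter, createWebHistory } from 'vue-router'

export default createRouter({
    history: createWebHistory(),
    routes: [
        { path: '/', component: () => import('../pages/Home.vue') },                           // homepage
        { path: '/register', component: () => import('../pages/Register.vue') },               // register a face
        { path: '/attendance', component: () => import('../pages/TakeAttendance.vue') },       // take attendance
        { path: '/attendanceSheet', component: () => import('../pages/AttendanceSheet.vue') }, // view attendance
    ],
})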


Conclusion

This article became way longer than I wanted it to be, but I had fun writing it. If you have any specific questions or are confused about any part of the above article, feel free to leave a comment below, and I will try my absolute best to help you out. Bye for now, till next time!

Links and Repo
