Daniel Azevedo

Implementing a Perceptron from Scratch in Python

Hi devs,

The Perceptron is one of the simplest and most fundamental concepts in machine learning. It’s a binary linear classifier that forms the basis of neural networks. In this post, I'll walk through the steps to understand and implement a Perceptron from scratch in Python.

Let's dive in!


What is a Perceptron?

A Perceptron is a basic algorithm for supervised learning of binary classifiers. Given input features, the Perceptron learns weights that help separate classes based on a simple threshold function. Here’s how it works in simple terms:

  1. Input: A vector of features (e.g., [x1, x2]).
  2. Weights: Each input feature has a weight, which the model adjusts during training whenever it makes a wrong prediction.
  3. Activation Function: Computes the weighted sum of the input features and applies a threshold to decide if the result belongs to one class or the other.

Mathematically, it looks like this:

f(x) = w1*x1 + w2*x2 + ... + wn*xn + b

Where:

  • f(x) is the weighted sum of the inputs (before the threshold is applied),
  • w represents weights,
  • x represents input features, and
  • b is the bias term.

If f(x) is greater than or equal to a threshold, the output is class 1; otherwise, it’s class 0.
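
To make this concrete, here's a quick sanity check with hand-picked weights (illustrative values, not learned) that happen to implement an AND gate:

# Hand-picked weights and bias, for illustration only (not learned)
w = [0.5, 0.5]
b = -0.7

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    fx = w[0] * x1 + w[1] * x2 + b
    print((x1, x2), "->", 1 if fx >= 0 else 0)

# Prints 0 for the first three inputs and 1 for (1, 1)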


Step 1: Import Libraries

We’ll use only NumPy here for matrix operations to keep things lightweight.

import numpy as np

Step 2: Define the Perceptron Class

We’ll build the Perceptron as a class to keep everything organized. The class will include methods for training and prediction.

class Perceptron:
    def __init__(self, learning_rate=0.01, epochs=1000):
        self.learning_rate = learning_rate
        self.epochs = epochs
        self.weights = None
        self.bias = None

    def fit(self, X, y):
        # Number of samples and features
        n_samples, n_features = X.shape

        # Initialize weights and bias
        self.weights = np.zeros(n_features)
        self.bias = 0

        # Training
        for _ in range(self.epochs):
            for idx, x_i in enumerate(X):
                # Calculate linear output
                linear_output = np.dot(x_i, self.weights) + self.bias
                # Apply step function
                y_predicted = self._step_function(linear_output)

                # Update weights and bias if there is a misclassification
                if y[idx] != y_predicted:
                    update = self.learning_rate * (y[idx] - y_predicted)
                    self.weights += update * x_i
                    self.bias += update

    def predict(self, X):
        # Calculate linear output and apply step function
        linear_output = np.dot(X, self.weights) + self.bias
        y_predicted = self._step_function(linear_output)
        return y_predicted

    def _step_function(self, x):
        return np.where(x >= 0, 1, 0)

In the code above:

  • fit: This method trains the model by adjusting weights and bias whenever it misclassifies a point.
  • predict: This method computes predictions on new data.
  • _step_function: This function applies a threshold to determine the output class.

Step 3: Prepare a Simple Dataset

We’ll use a small dataset to make it easy to visualize the output. Here’s a simple AND gate dataset:

# AND gate dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # Labels for AND gate

Step 4: Train and Test the Perceptron

Now, let’s train the Perceptron and test its predictions.

# Initialize Perceptron
p = Perceptron(learning_rate=0.1, epochs=10)

# Train the model
p.fit(X, y)

# Test the model
print("Predictions:", p.predict(X))

Expected output for AND gate:

Predictions: [0 0 0 1]
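
You can also inspect the learned parameters. The exact values depend on the learning rate and number of epochs, but they define a line that separates (1, 1) from the other three points:

print("Weights:", p.weights)
print("Bias:", p.bias)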

Explanation of the Perceptron Learning Process

  1. Initialize Weights and Bias: At the start, weights and bias are set to zero, a simple neutral starting point (small random values would also work).
  2. Calculate Linear Output: For each data point, the Perceptron computes the weighted sum of the inputs plus the bias.
  3. Activation (Step Function): If the linear output is greater than or equal to zero, it assigns class 1; otherwise, it assigns class 0.
  4. Update Rule: If the prediction is incorrect, the model adjusts weights and bias in the direction that reduces the error: weights += learning_rate * (y_true - y_pred) * x, and likewise bias += learning_rate * (y_true - y_pred). A single update is traced below.

This makes the Perceptron update only for misclassified points, gradually pushing the model closer to the correct decision boundary.
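
To see the update rule in action, here's one hand-traced step on the first AND sample, assuming learning_rate=0.1 and zero-initialized parameters:

import numpy as np

x_i, y_true = np.array([0, 0]), 0
w, b = np.zeros(2), 0.0

y_pred = 1 if np.dot(x_i, w) + b >= 0 else 0  # step(0) -> 1: a misclassification
update = 0.1 * (y_true - y_pred)              # 0.1 * (0 - 1) = -0.1
w = w + update * x_i                          # unchanged, since x_i is all zeros
b = b + update                                # bias becomes -0.1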


Visualizing Decision Boundaries

A good way to check what the model has learned is to visualize the decision boundary after training. This is especially helpful when you're working with more complex datasets; for now, we'll keep things simple with the AND gate.
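
Here's a minimal sketch of how that might look with matplotlib (an extra dependency, not used elsewhere in this post), assuming the trained p, X, and y from the previous steps:

import matplotlib.pyplot as plt
import numpy as np

# Evaluate the trained model over a grid covering the input space
xx, yy = np.meshgrid(np.linspace(-0.5, 1.5, 200), np.linspace(-0.5, 1.5, 200))
grid = np.c_[xx.ravel(), yy.ravel()]
zz = p.predict(grid).reshape(xx.shape)

plt.contourf(xx, yy, zz, alpha=0.3)                 # shade the two predicted regions
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors="k")  # the four AND points
plt.xlabel("x1")
plt.ylabel("x2")
plt.title("Perceptron decision boundary (AND gate)")
plt.show()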


Extending to Multi-Layer Perceptrons (MLPs)

While the Perceptron is limited to linearly separable problems, it’s the foundation of more complex neural networks like Multi-Layer Perceptrons (MLPs). With MLPs, we add hidden layers and activation functions (like ReLU or Sigmoid) to solve non-linear problems.
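
As a small taste of what that buys us, here's a hand-wired two-layer network (weights picked by hand for illustration, not learned) that computes XOR, the classic function a single Perceptron cannot represent:

import numpy as np

def step(z):
    return np.where(z >= 0, 1, 0)

# Hidden layer: one unit computes OR, the other NAND
W1 = np.array([[1, 1], [-1, -1]])
b1 = np.array([-0.5, 1.5])
# Output layer: AND of the two hidden units
W2 = np.array([1, 1])
b2 = -1.5

for x in np.array([[0, 0], [0, 1], [1, 0], [1, 1]]):
    h = step(W1 @ x + b1)     # hidden activations
    out = step(W2 @ h + b2)   # network output: XOR(x1, x2)
    print(x, "->", out)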


Summary

The Perceptron is a straightforward but foundational machine learning algorithm. By understanding how it works and implementing it from scratch, we gain insights into the basics of machine learning and neural networks. The beauty of the Perceptron lies in its simplicity, making it a perfect starting point for anyone interested in AI.
