Sandeep Balachandran


Machine Learning - Dense Layer

Hey there,
This is most likely the fourth one in the series. How is your weekend going so far?

Let's start off with an interesting story.

Let's say there is this boy named Isaac Newton walking by an elephant camp. This is way before that apple tree incident, by the way. He noticed that none of the elephants were being kept in cages or held by chains. All that was holding them back from escaping the camp was a small piece of rope tied to one of their legs. Being curious, he approached the trainer and asked why the elephants were just standing there and never tried to escape. The trainer replied,

"when they are very young and much smaller we use the same size rope to tie them and, at that age, it’s enough to hold them. As they grow up, they are conditioned to believe they cannot break away. They believe the rope can still hold them, so they never try to break free"

No matter how much the world tries to hold you back, always continue with the belief that what you want to achieve is possible. Believing you can become successful is the most important step in actually achieving it.

Get an elephant first.

Main Content From Here

We already created a model to convert Celsius to Fahrenheit, where we used a simple neural network to find the relationship between degrees Celsius and degrees Fahrenheit. That network has a single dense layer.

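As a refresher, here is a minimal sketch of that model, assuming TensorFlow's Keras API and a handful of known temperature pairs as training data:

```python
import numpy as np
import tensorflow as tf

# A few Celsius values and their known Fahrenheit equivalents
celsius    = np.array([-40, -10,  0,  8, 15, 22, 38], dtype=float)
fahrenheit = np.array([-40,  14, 32, 46, 59, 72, 100], dtype=float)

# A single dense layer with one neuron and a single input value
l0 = tf.keras.layers.Dense(units=1, input_shape=[1])
model = tf.keras.Sequential([l0])

model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(0.1))
model.fit(celsius, fahrenheit, epochs=500, verbose=False)

print(model.predict(np.array([100.0])))  # should print something close to 212
```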

But what exactly is a dense layer? To understand, let's create a slightly more complicated neural network that has:

  • 3 inputs
  • 1 hidden layer with 2 units
  • An output layer with only a single unit.


Recall that you can think of a neural network as a stack of layers, where each layer is made up of units, also called neurons. The neurons in each layer can be connected to neurons in the following layer.
For example,
For example,


Each neuron in the hidden layer receives the data from all of the inputs. And we can do the same with all the other neurons, so that every neuron receives data from every neuron in the previous layer.


These types of layers are called fully connected, or dense, layers. So when we use a dense layer in Keras, we're simply stating that the neurons in that layer are fully connected to the neurons in the previous layer.

For example, to create both of these neural networks in Keras, we simply use the following statements.

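A sketch of what those statements might look like (the variable names are mine):

```python
import tensorflow as tf

# The Celsius model again: a single dense layer with one neuron
model_simple = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])

# The more complicated network: 3 inputs, a hidden layer with 2 units,
# and an output layer with a single unit
hidden = tf.keras.layers.Dense(units=2, input_shape=[3])
output = tf.keras.layers.Dense(units=1)
model_complex = tf.keras.Sequential([hidden, output])
```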

But, what is the dense layer actually doing?
To understand what's going on, we have to take a quick look at the math.


Let's say we have three inputs to a model, x1, x2, and x3, and that a1 and a2 are the neurons in our hidden layer, while a3 is the neuron in our output layer. Since it's in the last layer, a3 is also the result of the model. Remember, a layer has math that's applied to the internal variables in it. The w's and the b's that you see here within the neurons are those internal variables, also known as weights and biases. It's their values that get adjusted during the training process to enable the model to best match the inputs to the outputs. So the w's and the b's are the weights and biases of our model.

Here is the math going on inside a dense layer.
For example, the output value of neuron a1 is calculated by multiplying input x1 by weight w11, then adding input x2 multiplied by weight w12, then adding input x3 multiplied by weight w13, and finally adding the bias b1. Similarly, to calculate the output value a3, we multiply the result of a1 by weight w31, add the result of a2 multiplied by w32, and then add the bias b3.
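Written out as equations (the indices on a2's weights are my assumption, following the same pattern as a1):

```
a1 = x1*w11 + x2*w12 + x3*w13 + b1
a2 = x1*w21 + x2*w22 + x3*w23 + b2
a3 = a1*w31 + a2*w32 + b3
```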

As I said before, what happens during the training process is that these weights and biases are tuned to the best possible values so that the model can match the inputs to the outputs. What's important to note, however, is that the math never changes. In other words, training only changes the w and b variables to match the input to the output. When you start with machine learning, it may seem strange that this actually works, but that's conceptually how machine learning works. Let's now get back to our example of converting Celsius to Fahrenheit.


With a single neuron, we only have one weight, w11, and one bias, b1, available to tune.

But you know what?
That's exactly what we need to solve the Celsius-to-Fahrenheit problem, since the conversion formula is the linear equation F = C * 1.8 + 32.

If weight w11 is set to 1.8, and b1 to 32, then we would have exactly the formula to solve this conversion problem.
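Just to illustrate the point, we could even set those values by hand on the layer from the sketch above (not something you would do in practice):

```python
import numpy as np

# Manually set the weight to 1.8 and the bias to 32
l0.set_weights([np.array([[1.8]]), np.array([32.0])])
print(model.predict(np.array([100.0])))  # exactly 212.0
```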

In fact, when we look at the printout from the Colab, we can see that our weight and bias get tuned to the correct values to solve the Celsius-to-Fahrenheit conversion.
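That printout comes from code along these lines (a sketch; the exact numbers vary a little from run to run):

```python
# Print the learned weight and bias of the single dense layer
print("These are the layer variables: {}".format(l0.get_weights()))
# expect a weight close to 1.8 and a bias close to 32
```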

Namely, the weight is tuned to a value close to 1.8, and the bias to a value close to 32.
When doing practical machine learning, though, we can never match the variables against the target algorithm like this.
How could we?

We don't even know the target algorithm.

That's why we use machine learning in the first place. Duhhh!!!
Otherwise, we would just hard code the algorithm as a function.
So let's look at how we would approach this problem in real life.


Without knowing the target algorithm, we just give the model a number of layers and weights to tune. Then we hope the model will be able to figure out how to tune its internal variables to match the input to the output.

As you can see in this example, it successfully did so with a model that has three layers and many more neurons. Looking at the weights now, you can see that there's no direct mapping between them and the conversion formula.
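As a sketch, the larger model might be built like this, reusing the training data from above; the choice of 4 units per hidden layer is my assumption, not a magic number:

```python
# Three dense layers instead of one
l0 = tf.keras.layers.Dense(units=4, input_shape=[1])
l1 = tf.keras.layers.Dense(units=4)
l2 = tf.keras.layers.Dense(units=1)
model = tf.keras.Sequential([l0, l1, l2])

model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(0.1))
model.fit(celsius, fahrenheit, epochs=500, verbose=False)

# No single weight maps onto 1.8 or 32 anymore
for layer in [l0, l1, l2]:
    print(layer.get_weights())
```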

In general, when we do machine learning, we typically just try different neural networks with different numbers of layers and neurons in a trial-and-error way, and see whether the model is able to solve the problem during the training phase.

Let's save the interesting part for the next post.
