The idea behind this new series of posts is to make the understanding of neural networks and artificial intelligence as accessible to everyone as possible.
So, before discussing the structure of networks and all the "ugly" mathematics behind them, let's start by talking about the activation of a PERCEPTRON.
I shared a link from Stanford University that explains the perceptron, which, by the way, looks like a page from the 90s. When I did my middle school thesis, I made a much nicer hypertext. ;P
First, a preamble on how we arrived at simulating a neuron digitally. Most of the time, scientific discoveries and engineering feats are inspired by things that already existed in nature long before we thought of them: the dream of flying came from watching birds, submarines were modeled on whales, camouflage suits on chameleons, and so on. We cannot deny that nature has been, and always will be, a major source of inspiration for the scientific discoveries we have made and those we will make in the future.
So, what do we mean by activation?
If we think about activation, we can picture it as a switch, like the one for our living room lamp: a button that turns the light bulb on and off. Going back to the example of nature, just think of how light activates our neurons and wakes us from sleep. There is no simpler example than that.
The activation of a neuron, or perceptron, is essentially the mechanism that triggers the output signal. This applies to a single perceptron as well as to any layer of a neural network, a topic that will be addressed extensively in future posts.
An activation, as we mentioned, can be thought of as a simple switch that determines the output; in our example, whether the lamp's bulb turns on.
source: https://www.freepik.com/free-photos-vectors/light-switch
Naturally, the mechanism of a lamp switch is simpler than that of a perceptron, though not by as much as you might think.
Instead of a single switch, imagine many small switches that together determine whether the light bulb turns on. The switches act as inputs, and the lamp's light acts as the output.
Let's say that if enough switches are pushed, the light bulb turns on; otherwise, it stays off.
In other words, there is a threshold that must be reached to trigger the activation: the contributions of the switches add up, and if the sum is high enough, the light turns on.
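To make the idea concrete, here is a minimal sketch in Python. The `bulb_is_on` function and its numbers are just illustrative choices of mine, not from any library:

```python
# Minimal sketch of the "many switches, one bulb" idea.
# Each switch contributes 1 when pushed (True) and 0 when not;
# the bulb turns on only if the sum reaches the threshold.

def bulb_is_on(switches, threshold):
    """Return True if enough switches are pushed to reach the threshold."""
    total = sum(1 for s in switches if s)  # add up the active switches
    return total >= threshold

# Three of five switches pushed with a threshold of three: the light turns on.
print(bulb_is_on([True, True, False, True, False], threshold=3))   # True
print(bulb_is_on([True, False, False, False, False], threshold=3)) # False
```

Of course, a real perceptron weighs its inputs instead of counting them all equally, as we will see a bit further down.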
source: https://en.wikipedia.org/wiki/File:Components_of_neuron.jpg
Observing the image of the neuron, we can think of each dendrite as one of our switches: when activated, it contributes its part of the signal needed to trigger the activation.
Now, I really want to emphasize this concept and try to provide another example, even at the risk of being banal and repetitive, because I want to be absolutely sure that the message has been conveyed.
Let's now try to imagine building a system of interconnected vessels using the same structure we used for the lamp switches.
That is, we have many glasses that can be filled with water, each connected by a tube to another glass. When a glass fills up enough for the water to reach its tube, it starts pouring into the glass below.
If the final glass also fills up enough, it starts pouring water out. This can be seen as the activation of a neural network, where distinct, combined input actions (in our example, adding water to the different containers) trigger the water output, as sketched below.
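Here is a rough Python sketch of the glasses, under the simplifying assumption that each glass passes along only the water that exceeds its capacity. The `pour_through_glasses` name and the numbers are made up for illustration:

```python
# Rough sketch of the cascading glasses: each glass holds water up to
# its capacity, and only the overflow continues to the next glass.

def pour_through_glasses(water_in, capacities):
    """Pour water into the first glass; return how much leaves the last one."""
    flow = water_in
    for capacity in capacities:
        flow = max(0.0, flow - capacity)  # only the overflow moves onward
    return flow

# With glasses holding 2 and 3 units, pouring 7 units lets 2 units out the end.
print(pour_through_glasses(7.0, [2.0, 3.0]))  # 2.0
print(pour_through_glasses(4.0, [2.0, 3.0]))  # 0.0 -- nothing reaches the end
```

Readers who already know some theory may notice that this "keep only the overflow" rule resembles how some activation functions pass along only the part of a signal above a cutoff.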
So, I brought out all my best Paint skills to provide a representation of what I just described.
Summing up and trying to keep it as simple as possible, this is what happens in the activation of a perceptron and a neural network.
Looking at the standard representation of a perceptron, we can use our imagination and map our glasses example onto its elements: input layers, weights, activation functions, and output layers. These are all topics that will be covered in upcoming posts of the series, but it is already interesting to see how they fit together.
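As a sketch of how those elements combine, here is a minimal single perceptron in Python; the `perceptron` function, weights, and bias values are invented for illustration and are not taken from any particular source:

```python
# Minimal sketch of a single perceptron: weighted inputs plus a bias,
# passed through a step activation that decides if the output "fires".

def step(x):
    """Step activation: output 1 if the weighted sum is non-negative, else 0."""
    return 1 if x >= 0 else 0

def perceptron(inputs, weights, bias):
    """Compute the weighted sum of the inputs plus the bias, then apply step."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return step(weighted_sum)

# Made-up example: two inputs, weights 0.6 and 0.4, bias -0.5.
print(perceptron([1, 1], [0.6, 0.4], bias=-0.5))  # 1: 0.6 + 0.4 - 0.5 = 0.5
print(perceptron([1, 0], [0.6, 0.4], bias=-0.5))  # 1: 0.6 - 0.5 = 0.1
print(perceptron([0, 0], [0.6, 0.4], bias=-0.5))  # 0: -0.5 stays below zero
```

The weights play the role of the tube sizes in our glasses, and the bias plays the role of the threshold: together they decide how much input it takes for the output to fire.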
This represents the initial part of how a neural network works. Of course, it is still difficult to relate it to everyday applications such as computer vision or text processing like ChatGPT. However, it is a tangible first step toward understanding, without too much confusion or speculation, how artificial intelligence really works.