*Memos:
- My post explains Overfitting and Underfitting.
- My post explains layers in PyTorch.
- My post explains activation functions in PyTorch.
- My post explains loss functions in PyTorch.
- My post explains optimizers in PyTorch.
Vanishing Gradient Problem:
- is when, during backpropagation, a gradient gets smaller and smaller or becomes zero because small gradients are multiplied together many times on the way from the output layer to the input layer, so the model cannot be trained effectively.
- occurs more easily in models with more layers.
- is easily caused by Sigmoid activation function which is Sigmoid() in PyTorch because it squashes values into the range 0 < y < 1 and its derivative is at most 0.25, so these small values are multiplied many times during backpropagation, making the gradient smaller and smaller on the way from the output layer to the input layer. *A minimal demo and a mitigated version are sketched after this list.
- occurs in:
- CNN(Convolutional Neural Network).
- RNN(Recurrent Neural Network) which is RNN() in PyTorch.
- doesn't easily occur in:
- LSTM(Long Short-Term Memory) which is LSTM() in PyTorch.
- GRU(Gated Recurrent Unit) which is GRU() in PyTorch.
- ResNet(Residual Neural Network) which is resnet18(), resnet50(), etc. in torchvision.
- Transformer which is Transformer() in PyTorch.
- etc.
- can be detected if:
- parameters significantly change at the layers near the output layer whereas parameters change only slightly or stay unchanged at the layers near the input layer.
- The weights of the layers near the input layer are close to 0 or become 0.
- convergence is slow or stops.
- can be mitigated by:
- Batch Normalization layer which is BatchNorm1d(), BatchNorm2d() or BatchNorm3d() in PyTorch.
- Leaky ReLU activation function which is LeakyReLU() in PyTorch. *You can also use ReLU activation function which is ReLU() in PyTorch but it sometimes causes Dying ReLU Problem which I explain later.
- PReLU activation function which is PReLU() in PyTorch.
- ELU activation function which is ELU() in PyTorch.
- Gradient Clipping which is clip_grad_norm_() or clip_grad_value_() in PyTorch. *Gradient Clipping is a method to keep gradients within a specified range; I explain it more under Exploding Gradient Problem below.
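Below is a minimal sketch of the demo mentioned above (the 8-layer depth, layer sizes and dummy data are arbitrary choices of mine, not from any PyTorch documentation). It detects vanishing gradients by printing per-layer gradient norms in a deep MLP built with Sigmoid():

```python
import torch
from torch import nn

torch.manual_seed(42)

# A deliberately deep MLP with Sigmoid() to provoke vanishing gradients.
layers = []
for _ in range(8):
    layers += [nn.Linear(10, 10), nn.Sigmoid()]
layers += [nn.Linear(10, 1)]
model = nn.Sequential(*layers)

x = torch.randn(32, 10)  # Dummy batch: 32 samples, 10 features.
y = torch.randn(32, 1)   # Dummy targets.

loss = nn.MSELoss()(model(x), y)
loss.backward()

# Gradient norms shrink toward the input layer when gradients vanish.
for name, param in model.named_parameters():
    if 'weight' in name:
        print(f'{name}: grad norm = {param.grad.norm().item():.2e}')
```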
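And a minimal sketch of a mitigated version of the same model (again, the sizes are arbitrary), replacing Sigmoid() with LeakyReLU() and adding BatchNorm1d(); running the same gradient-norm check on it should show noticeably larger gradients near the input layer:

```python
import torch
from torch import nn

# Same depth, but with BatchNorm1d() and LeakyReLU() instead of Sigmoid().
layers = []
for _ in range(8):
    layers += [nn.Linear(10, 10), nn.BatchNorm1d(10), nn.LeakyReLU()]
layers += [nn.Linear(10, 1)]
mitigated_model = nn.Sequential(*layers)
```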
Exploding Gradient Problem:
- is when, during backpropagation, a gradient gets bigger and bigger because large gradients are multiplied together many times on the way from the output layer to the input layer, so convergence becomes impossible.
- occurs more easily in models with more layers.
- occurs in:
- CNN.
- RNN.
- LSTM.
- GRU.
- doesn't easily occur in:
- ResNet.
- Transformer.
- etc.
- can be detected if:
- The weights of a model significantly increase.
- The weights of a model, while increasing significantly, finally become NaN.
- convergence fluctuates without finishing.
- can be mitigated by:
- Batch Normalization layer.
- Gradient Clipping. *A minimal usage sketch is shown after this list.
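Below is a minimal sketch of one training step (the model, optimizer, dummy data and max_norm=1.0 are placeholder choices of mine). clip_grad_norm_() is called between backward() and step(), and its return value, the total gradient norm before clipping, can also be monitored to detect exploding gradients:

```python
import torch
from torch import nn

torch.manual_seed(42)

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(32, 10)  # Dummy batch.
y = torch.randn(32, 1)   # Dummy targets.

optimizer.zero_grad()
loss = nn.MSELoss()(model(x), y)
loss.backward()

# Rescale gradients so that their total norm is at most 1.0.
# The returned value is the total norm *before* clipping.
total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
print(f'Gradient norm before clipping: {float(total_norm):.2e}')

optimizer.step()
```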
Dying ReLU Problem:
- is when the nodes(neurons) with ReLU activation function receive only zero or negative input values, so they output zero and their gradient is zero during backpropagation; their weights stop being updated, they can never recover to output anything except zero, and the model cannot be trained effectively. *A minimal sketch for detecting and mitigating this is shown after the lists below.
- is also called Dead ReLU problem.
- occurs more easily with:
- higher learning rates.
- a large negative bias.
- can be detected if:
- convergence is slow or stops.
- a loss function returns nan.
- can be mitigated by:
- a lower learning rate.
- a positive bias.
- Leaky ReLU activation function.
- PReLU activation function.
- ELU activation function.
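Below is a minimal sketch (the model, hook and dummy data are placeholder choices of mine) that estimates how many ReLU outputs are exactly zero with a forward hook, which hints at dead nodes, and then builds a mitigated version with LeakyReLU() and a lower learning rate:

```python
import torch
from torch import nn

torch.manual_seed(42)

model = nn.Sequential(nn.Linear(10, 100), nn.ReLU(), nn.Linear(100, 1))

# Record the fraction of exactly-zero ReLU outputs; a node that outputs
# zero for every sample in the batch is a candidate dead node.
zero_fractions = []

def count_zeros(module, inputs, output):
    zero_fractions.append((output == 0).float().mean().item())

model[1].register_forward_hook(count_zeros)

x = torch.randn(256, 10)  # Dummy batch.
model(x)
print(f'Fraction of zero ReLU outputs: {zero_fractions[-1]:.2%}')

# Mitigation: LeakyReLU() keeps a small gradient for negative inputs, and a
# lower learning rate makes harmful large weight updates less likely.
mitigated_model = nn.Sequential(
    nn.Linear(10, 100), nn.LeakyReLU(negative_slope=0.01), nn.Linear(100, 1)
)
optimizer = torch.optim.SGD(mitigated_model.parameters(), lr=0.001)
```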