*Memos:
- My post explains Tanh, Softsign, Sigmoid and Softmax.
- My post explains heaviside() and Identity().
- My post explains ReLU() and LeakyReLU().
- My post explains PReLU() and ELU().
- My post explains SELU() and CELU().
- My post explains GELU() and Mish().
- My post explains SiLU() and Softplus().
- My post explains Tanh() and Softsign().
Sigmoid() can take the 0D or more D tensor of zero or more elements and return the 0D or more D tensor of the zero or more values computed by the Sigmoid function, as shown below:
*Memos:
- The 1st argument is input (Required-Type: tensor of int, float, complex or bool). *A float tensor is returned except for a complex input tensor.
- You can also use torch.sigmoid(), which is the alias of torch.special.expit().
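The Sigmoid function itself is Sigmoid(x) = 1 / (1 + exp(-x)); a minimal sketch checking the formula directly against torch.sigmoid():

import torch

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])
1. / (1. + torch.exp(-my_tensor)) # Sigmoid formula applied manually
# tensor([0.9997, 0.0474, 0.5000, 0.7311, 0.9933, 0.1192, 0.2689, 0.9820])
torch.sigmoid(my_tensor) # Alias of torch.special.expit()
# tensor([0.9997, 0.0474, 0.5000, 0.7311, 0.9933, 0.1192, 0.2689, 0.9820])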
import torch
from torch import nn
my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])
sigmoid = nn.Sigmoid()
sigmoid(input=my_tensor)
# tensor([0.9997, 0.0474, 0.5000, 0.7311, 0.9933, 0.1192, 0.2689, 0.9820])
sigmoid
# Sigmoid()
my_tensor = torch.tensor([[8., -3., 0., 1.],
[5., -2., -1., 4.]])
sigmoid = nn.Sigmoid()
sigmoid(input=my_tensor)
# tensor([[0.9997, 0.0474, 0.5000, 0.7311],
# [0.9933, 0.1192, 0.2689, 0.9820]])
my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
[[5., -2.], [-1., 4.]]])
sigmoid = nn.Sigmoid()
sigmoid(input=my_tensor)
# tensor([[[0.9997, 0.0474], [0.5000, 0.7311]],
# [[0.9933, 0.1192], [0.2689, 0.9820]]])
my_tensor = torch.tensor([[[8, -3], [0, 1]],
[[5, -2], [-1, 4]]])
sigmoid = nn.Sigmoid()
sigmoid(input=my_tensor)
# tensor([[[0.9997, 0.0474], [0.5000, 0.7311]],
# [[0.9933, 0.1192], [0.2689, 0.9820]]])
my_tensor = torch.tensor([[[8.+0.j, -3.+0.j], [0.+0.j, 1.+0.j]],
[[5.+0.j, -2.+0.j], [-1.+0.j, 4.+0.j]]])
sigmoid = nn.Sigmoid()
sigmoid(input=my_tensor)
# tensor([[[0.9997-0.j, 0.0474-0.j], [0.5000-0.j, 0.7311-0.j]],
# [[0.9933-0.j, 0.1192-0.j], [0.2689-0.j, 0.9820-0.j]]])
my_tensor = torch.tensor([[[True, False], [True, False]],
[[False, True], [False, True]]])
sigmoid = nn.Sigmoid()
sigmoid(input=my_tensor)
# tensor([[[0.7311, 0.5000], [0.7311, 0.5000]],
# [[0.5000, 0.7311], [0.5000, 0.7311]]])
Softmax() can take the 0D or more D tensor of zero or more elements and return the 0D or more D tensor of the zero or more values computed by the Softmax function, as shown below:
*Memos:
- The 1st argument for initialization is dim (Required-Type: int). *Unsetting it works, but it causes a warning and that way is deprecated.
- The 1st argument is input (Required-Type: tensor of float).
- You can also use torch.nn.functional.softmax() with a tensor.
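The Softmax function is Softmax(x_i) = exp(x_i) / Σ_j exp(x_j) over the chosen dim; a minimal sketch checking the formula against torch.nn.functional.softmax():

import torch
import torch.nn.functional as F

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])
torch.exp(my_tensor) / torch.exp(my_tensor).sum(dim=0) # Softmax formula applied manually
# tensor([9.3499e-01, 1.5616e-05, 3.1365e-04, 8.5260e-04,
#         4.6550e-02, 4.2448e-05, 1.1539e-04, 1.7125e-02])
F.softmax(my_tensor, dim=0) # Functional form
# tensor([9.3499e-01, 1.5616e-05, 3.1365e-04, 8.5260e-04,
#         4.6550e-02, 4.2448e-05, 1.1539e-04, 1.7125e-02])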
import torch
from torch import nn
my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])
softmax = nn.Softmax(dim=0)
softmax(input=my_tensor) # Same output as my_tensor.softmax(dim=0) below
my_tensor.softmax(dim=0)
# tensor([9.3499e-01, 1.5616e-05, 3.1365e-04, 8.5260e-04,
#         4.6550e-02, 4.2448e-05, 1.1539e-04, 1.7125e-02])
softmax
# Softmax(dim=0)
softmax.dim
# 0
softmax = nn.Softmax(dim=-1)
softmax(input=my_tensor)
# tensor([9.3499e-01, 1.5616e-05, 3.1365e-04, 8.5260e-04,
# 4.6550e-02, 4.2448e-05, 1.1539e-04, 1.7125e-02])
my_tensor = torch.tensor([[8., -3., 0., 1.],
[5., -2., -1., 4.]])
softmax = nn.Softmax(dim=0)
softmax(input=my_tensor)
# tensor([[0.9526, 0.2689, 0.7311, 0.0474],
# [0.0474, 0.7311, 0.2689, 0.9526]])
softmax = nn.Softmax(dim=-2)
softmax(input=my_tensor)
# tensor([[0.9526, 0.2689, 0.7311, 0.0474],
# [0.0474, 0.7311, 0.2689, 0.9526]])
softmax = nn.Softmax(dim=1)
softmax(input=my_tensor)
# tensor([[9.9874e-01, 1.6681e-05, 3.3504e-04, 9.1073e-04],
# [7.2925e-01, 6.6499e-04, 1.8076e-03, 2.6828e-01]])
softmax = nn.Softmax(dim=-1)
softmax(input=my_tensor)
# tensor([[9.9874e-01, 1.6681e-05, 3.3504e-04, 9.1073e-04],
# [7.2925e-01, 6.6499e-04, 1.8076e-03, 2.6828e-01]])
my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
[[5., -2.], [-1., 4.]]])
softmax = nn.Softmax(dim=0)
softmax(input=my_tensor)
# tensor([[[0.9526, 0.2689], [0.7311, 0.0474]],
# [[0.0474, 0.7311], [0.2689, 0.9526]]])
softmax = nn.Softmax(dim=-3)
softmax(input=my_tensor)
# tensor([[[0.9526, 0.2689], [0.7311, 0.0474]],
# [[0.0474, 0.7311], [0.2689, 0.9526]]])
softmax = nn.Softmax(dim=1)
softmax(input=my_tensor)
# tensor([[[9.9966e-01, 1.7986e-02], [3.3535e-04, 9.8201e-01]],
# [[9.9753e-01, 2.4726e-03], [2.4726e-03, 9.9753e-01]]])
softmax = nn.Softmax(dim=-2)
softmax(input=my_tensor)
# tensor([[[9.9966e-01, 1.7986e-02], [3.3535e-04, 9.8201e-01]],
# [[9.9753e-01, 2.4726e-03], [2.4726e-03, 9.9753e-01]]])
softmax = nn.Softmax(dim=2)
softmax(input=my_tensor)
# tensor([[[9.9998e-01, 1.6701e-05], [2.6894e-01, 7.3106e-01]],
# [[9.9909e-01, 9.1105e-04], [6.6929e-03, 9.9331e-01]]])
softmax = nn.Softmax(dim=-1)
softmax(input=my_tensor)
# tensor([[[9.9998e-01, 1.6701e-05], [2.6894e-01, 7.3106e-01]],
# [[9.9909e-01, 9.1105e-04], [6.6929e-03, 9.9331e-01]]])
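Whichever dim is chosen, the output sums to 1 along that dimension, which is a quick way to confirm which axis was normalized; a minimal check (the printed values may show 1.0000 due to rounding):

import torch
from torch import nn

my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
                          [[5., -2.], [-1., 4.]]])
nn.Softmax(dim=0)(input=my_tensor).sum(dim=0) # Each sum over dim=0 is 1
# tensor([[1., 1.], [1., 1.]])
nn.Softmax(dim=2)(input=my_tensor).sum(dim=2) # Each sum over dim=2 is 1
# tensor([[1., 1.], [1., 1.]])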