*Memos:
- My post explains Tanh, Softsign, Sigmoid and Softmax.
- My post explains heaviside() and Identity().
- My post explains ReLU() and LeakyReLU().
- My post explains PReLU() and ELU().
- My post explains SELU() and CELU().
- My post explains GELU() and Mish().
- My post explains SiLU() and Softplus().
- My post explains Sigmoid() and Softmax().
Tanh() takes a 0D or more D tensor of zero or more elements and returns a 0D or more D tensor of the values computed by the Tanh function, as shown below:
*Memos:
- The 1st argument is input (Required-Type: tensor of int, float, complex or bool). *A float tensor is returned except for a complex input tensor.
- You can also use torch.tanh() with a tensor.
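For reference, Tanh is defined as tanh(x) = (e^x - e^-x) / (e^x + e^-x). The minimal sketch below is my own check (not from the PyTorch docs): it recomputes the values with that formula and confirms they match nn.Tanh():
import torch
from torch import nn

x = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

# Manual Tanh: (e^x - e^-x) / (e^x + e^-x).
manual = (torch.exp(x) - torch.exp(-x)) / (torch.exp(x) + torch.exp(-x))

torch.allclose(nn.Tanh()(x), manual)
# True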
import torch
from torch import nn
my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])
tanh = nn.Tanh()
tanh(input=my_tensor)
my_tensor.tanh()
# tensor([1.0000, -0.9951, 0.0000, 0.7616, 0.9999, -0.9640, -0.7616, 0.9993])
tanh
# Tanh()
my_tensor = torch.tensor([[8., -3., 0., 1.],
[5., -2., -1., 4.]])
tanh = nn.Tanh()
tanh(input=my_tensor)
# tensor([[1.0000, -0.9951, 0.0000, 0.7616],
# [0.9999, -0.9640, -0.7616, 0.9993]])
my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
[[5., -2.], [-1., 4.]]])
tanh = nn.Tanh()
tanh(input=my_tensor)
# tensor([[[1.0000, -0.9951], [0.0000, 0.7616]],
# [[0.9999, -0.9640], [-0.7616, 0.9993]]])
my_tensor = torch.tensor([[[8, -3], [0, 1]],
[[5, -2], [-1, 4]]])
tanh = nn.Tanh()
tanh(input=my_tensor)
# tensor([[[1.0000, -0.9951], [0.0000, 0.7616]],
# [[0.9999, -0.9640], [-0.7616, 0.9993]]])
my_tensor = torch.tensor([[[8.+0.j, -3.+0.j], [0.+0.j, 1.+0.j]],
[[5.+0.j, -2.+0.j], [-1.+0.j, 4.+0.j]]])
tanh = nn.Tanh()
tanh(input=my_tensor)
# tensor([[[1.0000+0.j, -0.9951+0.j], [0.0000+0.j, 0.7616+0.j]],
# [[0.9999+0.j, -0.9640+0.j], [-0.7616+0.j, 0.9993+0.j]]])
my_tensor = torch.tensor([[[True, False], [True, False]],
[[False, True], [False, True]]])
tanh = nn.Tanh()
tanh(input=my_tensor)
# tensor([[[0.7616, 0.0000], [0.7616, 0.0000]],
# [[0.0000, 0.7616], [0.0000, 0.7616]]])
Softsign() takes a 0D or more D tensor of zero or more elements and returns a 0D or more D tensor of the values computed by the Softsign function, as shown below:
*Memos:
- The 1st argument is input (Required-Type: tensor of int, float or complex). *A float tensor is returned except for a complex input tensor.
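For reference, Softsign is defined as softsign(x) = x / (1 + |x|). The minimal sketch below is my own check (not from the PyTorch docs): it recomputes the values with that formula and confirms they match nn.Softsign():
import torch
from torch import nn

x = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

# Manual Softsign: x / (1 + |x|).
manual = x / (1 + torch.abs(x))

torch.allclose(nn.Softsign()(x), manual)
# True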
import torch
from torch import nn
my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])
softsign = nn.Softsign()
softsign(input=my_tensor)
# tensor([0.8889, -0.7500, 0.0000, 0.5000, 0.8333, -0.6667, -0.5000, 0.8000])
softsign
# Softsign()
my_tensor = torch.tensor([[8., -3., 0., 1.],
[5., -2., -1., 4.]])
softsign = nn.Softsign()
softsign(input=my_tensor)
# tensor([[0.8889, -0.7500, 0.0000, 0.5000],
# [0.8333, -0.6667, -0.5000, 0.8000]])
my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
[[5., -2.], [-1., 4.]]])
softsign = nn.Softsign()
softsign(input=my_tensor)
# tensor([[[0.8889, -0.7500], [0.0000, 0.5000]],
# [[0.8333, -0.6667], [-0.5000, 0.8000]]])
my_tensor = torch.tensor([[[8, -3], [0, 1]],
[[5, -2], [-1, 4]]])
softsign = nn.Softsign()
softsign(input=my_tensor)
# tensor([[[0.8889, -0.7500], [0.0000, 0.5000]],
# [[0.8333, -0.6667], [-0.5000, 0.8000]]])
my_tensor = torch.tensor([[[8.+0.j, -3.+0.j], [0.+0.j, 1.+0.j]],
[[5.+0.j, -2.+0.j], [-1.+0.j, 4.+0.j]]])
softsign = nn.Softsign()
softsign(input=my_tensor)
# tensor([[[0.8889+0.j, -0.7500+0.j], [0.0000+0.j, 0.5000+0.j]],
# [[0.8333+0.j, -0.6667+0.j], [-0.5000+0.j, 0.8000+0.j]]])