*Memos:
- My post explains Batch Normalization Layer.
- My post explains BatchNorm2d().
- My post explains BatchNorm3d().
- My post explains LayerNorm().
- My post explains requires_grad.
BatchNorm1d() can get the 2D or 3D tensor of zero or more elements computed by 1D Batch Normalization from the 2D or 3D tensor of zero or more elements as shown below:
*Memos:
- The 1st argument for initialization is num_features (Required-Type: int). *It must be 1 <= x.
- The 2nd argument for initialization is eps (Optional-Default: 1e-05-Type: float).
- The 3rd argument for initialization is momentum (Optional-Default: 0.1-Type: float).
- The 4th argument for initialization is affine (Optional-Default: True-Type: bool).
- The 5th argument for initialization is track_running_stats (Optional-Default: True-Type: bool).
- The 6th argument for initialization is device (Optional-Default: None-Type: str, int or device()). *Memos:
  - If it's None, get_default_device() is used. *My post explains get_default_device() and set_default_device().
  - device= can be omitted.
  - My post explains device argument.
- The 7th argument for initialization is dtype (Optional-Default: None-Type: dtype). *Memos:
  - If it's None, get_default_dtype() is used. *My post explains get_default_dtype() and set_default_dtype().
  - dtype= can be omitted.
  - My post explains dtype argument.
- The 1st argument is input (Required-Type: tensor of float). *Memos:
  - It must be the 2D or 3D tensor of zero or more elements.
  - The number of the elements of the 2nd shallowest dimension must be the same as num_features (see the error example at the end).
  - Its device and dtype must be the same as BatchNorm1d()'s.
  - The tensor's requires_grad which is False by default is set to True by BatchNorm1d().
- batchnorm1d1.device and batchnorm1d1.dtype don't work.
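In training mode, 1D Batch Normalization normalizes each feature x as y = (x - E[x]) / sqrt(Var[x] + eps) * γ + β, where E[x] and Var[x] are the per-feature batch mean and biased variance, and γ (weight, initialized to 1) and β (bias, initialized to 0) are learnable when affine=True. A minimal sketch reproducing the first example's output by hand, assuming the default γ=1 and β=0:

import torch

tensor1 = torch.tensor([[8., -3., 0.],
                        [1., 5., -2.]])
mean = tensor1.mean(dim=0)               # Per-feature (per-column) batch mean.
var = tensor1.var(dim=0, unbiased=False) # Biased batch variance.
(tensor1 - mean) / torch.sqrt(var + 1e-05)
# tensor([[ 1.0000, -1.0000,  1.0000],
#         [-1.0000,  1.0000, -1.0000]])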
import torch
from torch import nn
tensor1 = torch.tensor([[8., -3., 0.],
[1., 5., -2.]])
tensor1.requires_grad
# False
batchnorm1d1 = nn.BatchNorm1d(num_features=3)
tensor2 = batchnorm1d1(input=tensor1)
tensor2
# tensor([[1.0000, -1.0000, 1.0000],
# [-1.0000, 1.0000, -1.0000]],
# grad_fn=<NativeBatchNormBackward0>)
tensor2.requires_grad
# True
batchnorm1d1
# BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True,
# track_running_stats=True)
batchnorm1d1.num_features
# 3
batchnorm1d1.eps
# 1e-05
batchnorm1d1.momentum
# 0.1
batchnorm1d1.affine
# True
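# With affine=True, BatchNorm1d() has the learnable per-feature parameters
# weight (initialized to ones) and bias (initialized to zeros):
batchnorm1d1.weight
# Parameter containing:
# tensor([1., 1., 1.], requires_grad=True)
batchnorm1d1.bias
# Parameter containing:
# tensor([0., 0., 0.], requires_grad=True)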
batchnorm1d1.track_running_stats
# True
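# With track_running_stats=True, running_mean (initialized to zeros) and
# running_var (initialized to ones) are updated with momentum at each call
# in training mode. A sketch of the expected values after the single call
# above (running_var is updated with the unbiased batch variance):
batchnorm1d1.running_mean
# tensor([ 0.4500,  0.1000, -0.1000])
batchnorm1d1.running_var
# tensor([3.3500, 4.1000, 1.1000])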
batchnorm1d2 = nn.BatchNorm1d(num_features=3)
batchnorm1d2(input=tensor2)
# tensor([[1.0000, -1.0000, 1.0000],
# [-1.0000, 1.0000, -1.0000]],
# grad_fn=<NativeBatchNormBackward0>)
batchnorm1d = nn.BatchNorm1d(num_features=3, eps=1e-05, momentum=0.1,
affine=True, track_running_stats=True,
device=None, dtype=None)
batchnorm1d(input=tensor1)
# tensor([[1.0000, -1.0000, 1.0000],
# [-1.0000, 1.0000, -1.0000]],
# grad_fn=<NativeBatchNormBackward0>)
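# device= and dtype= can also be set explicitly. A minimal sketch with
# dtype=torch.float64; the input's dtype must be the same as BatchNorm1d()'s:
batchnorm1d64 = nn.BatchNorm1d(num_features=3, dtype=torch.float64)
batchnorm1d64(input=tensor1.to(dtype=torch.float64))
# tensor([[ 1.0000, -1.0000,  1.0000],
#         [-1.0000,  1.0000, -1.0000]], dtype=torch.float64,
#        grad_fn=<NativeBatchNormBackward0>)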
my_tensor = torch.tensor([[8.], [-3.], [0.], [1.], [5.], [-2.]])
batchnorm1d = nn.BatchNorm1d(num_features=1)
batchnorm1d(input=my_tensor)
# tensor([[1.6830], [-1.1651], [-0.3884], [-0.1295], [0.9062], [-0.9062]],
# grad_fn=<NativeBatchNormBackward0>)
my_tensor = torch.tensor([[[8.], [-3.], [0.]],
[[1.], [5.], [-2.]]])
batchnorm1d = nn.BatchNorm1d(num_features=3)
batchnorm1d(input=my_tensor)
# tensor([[[1.0000], [-1.0000], [1.0000]],
# [[-1.0000], [1.0000], [-1.0000]]],
# grad_fn=<NativeBatchNormBackward0>)
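If the number of the elements of the 2nd shallowest dimension doesn't match num_features, an error occurs. A sketch (the exact message may vary by PyTorch version):

my_tensor = torch.tensor([[8., -3., 0.],
                          [1., 5., -2.]])
batchnorm1d = nn.BatchNorm1d(num_features=4)
batchnorm1d(input=my_tensor)
# RuntimeError: running_mean should contain 3 elements not 4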