*Memos:
- My post explains Batch Normalization Layer.
- My post explains BatchNorm1d().
- My post explains BatchNorm2d().
- My post explains LayerNorm().
- My post explains requires_grad.
BatchNorm3d() can get the 5D tensor of zero or more elements computed by 3D Batch Normalization from the 5D tensor of zero or more elements as shown below:
*Memos:
- The 1st argument for initialization is num_features (Required-Type: int). *It must be 1 <= x.
- The 2nd argument for initialization is eps (Optional-Default: 1e-05-Type: float).
- The 3rd argument for initialization is momentum (Optional-Default: 0.1-Type: float).
- The 4th argument for initialization is affine (Optional-Default: True-Type: bool).
- The 5th argument for initialization is track_running_stats (Optional-Default: True-Type: bool).
- The 6th argument for initialization is device (Optional-Default: None-Type: str, int or device()). *Memos:
  - If it's None, get_default_device() is used. *My post explains get_default_device() and set_default_device().
  - device= can be omitted.
  - My post explains the device argument.
- The 7th argument for initialization is dtype (Optional-Default: None-Type: dtype). *Memos:
  - If it's None, get_default_dtype() is used. *My post explains get_default_dtype() and set_default_dtype().
  - dtype= can be omitted.
  - My post explains the dtype argument.
- The 1st argument is input (Required-Type: tensor of float). *Memos:
  - It must be the 5D tensor of zero or more elements.
  - The number of the elements of the 2nd shallowest dimension must be the same as num_features.
  - Its device and dtype must be the same as BatchNorm3d()'s.
  - The returned tensor's requires_grad is True even though the input tensor's requires_grad is False by default.
- batchnorm3d1.device and batchnorm3d1.dtype don't work.
import torch
from torch import nn
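# A 5D input of shape (N, C, D, H, W) = (1, 1, 1, 1, 6);
# C must match num_features=1.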
tensor1 = torch.tensor([[[[[8., -3., 0., 1., 5., -2.]]]]])
tensor1.requires_grad
# False
batchnorm3d1 = nn.BatchNorm3d(num_features=1)
tensor2 = batchnorm3d1(input=tensor1)
tensor2
# tensor([[[[[1.6830, -1.1651, -0.3884, -0.1295, 0.9062, -0.9062]]]]],
# grad_fn=<NativeBatchNormBackward0>)
tensor2.requires_grad
# True
batchnorm3d1
# BatchNorm3d(1, eps=1e-05, momentum=0.1, affine=True,
# track_running_stats=True)
batchnorm3d1.num_features
# 1
batchnorm3d1.eps
# 1e-05
batchnorm3d1.momentum
# 0.1
batchnorm3d1.affine
# True
batchnorm3d1.track_running_stats
# True
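# With track_running_stats=True, the call above also updated the running
# statistics (a hedged illustration: with momentum=0.1, running_mean becomes
# 0.9*0 + 0.1*1.5 and running_var becomes 0.9*1 + 0.1*17.9, using the
# unbiased batch variance):
batchnorm3d1.running_mean
# tensor([0.1500])
batchnorm3d1.running_var
# tensor([2.6900])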
batchnorm3d2 = nn.BatchNorm3d(num_features=1)
batchnorm3d2(input=tensor2)
# tensor([[[[[1.6830, -1.1651, -0.3884, -0.1295, 0.9062, -0.9062]]]]],
# grad_fn=<NativeBatchNormBackward0>)
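# tensor2 already has zero mean and unit variance, so normalizing it again
# returns approximately the same values.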
batchnorm3d = nn.BatchNorm3d(num_features=1, eps=1e-05, momentum=0.1,
affine=True, track_running_stats=True,
device=None, dtype=None)
batchnorm3d(input=tensor1)
# tensor([[[[[1.6830, -1.1651, -0.3884, -0.1295, 0.9062, -0.9062]]]]],
# grad_fn=<NativeBatchNormBackward0>)
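# A hedged sketch with device and dtype set explicitly (assuming a CPU-only
# setup; the input tensor's dtype must match the layer's):
batchnorm3d_f64 = nn.BatchNorm3d(num_features=1, device='cpu',
                                 dtype=torch.float64)
batchnorm3d_f64(input=tensor1.double())
# ≈ tensor([[[[[1.6830, -1.1651, -0.3884, -0.1295, 0.9062, -0.9062]]]]],
#          dtype=torch.float64, grad_fn=<NativeBatchNormBackward0>)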
my_tensor = torch.tensor([[[[[8., -3., 0.],
[1., 5., -2.]]]]])
batchnorm3d = nn.BatchNorm3d(num_features=1)
batchnorm3d(input=my_tensor)
# tensor([[[[[1.6830, -1.1651, -0.3884],
# [-0.1295, 0.9062, -0.9062]]]]],
# grad_fn=<NativeBatchNormBackward0>)
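# The same six values in shape (1, 1, 1, 2, 3): normalization is computed per
# channel over all the other dimensions, so the results match the 1st example.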
my_tensor = torch.tensor([[[[[8.], [-3.], [0.], [1.], [5.], [-2.]]]]])
batchnorm3d = nn.BatchNorm3d(num_features=1)
batchnorm3d(input=my_tensor)
# tensor([[[[[1.6830], [-1.1651], [-0.3884], [-0.1295], [0.9062], [-0.9062]]]]],
# grad_fn=<NativeBatchNormBackward0>)
my_tensor = torch.tensor([[[[[8.], [-3.], [0.]],
[[1.], [5.], [-2.]]]]])
batchnorm3d = nn.BatchNorm3d(num_features=1)
batchnorm3d(input=my_tensor)
# tensor([[[[[1.6830], [-1.1651], [-0.3884]],
# [[-0.1295], [0.9062], [-0.9062]]]]],
# grad_fn=<NativeBatchNormBackward0>)
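Finally, a minimal sketch of what happens when the constraints above are violated (a hedged illustration; the exact error messages may vary by PyTorch version):
import torch
from torch import nn

batchnorm3d = nn.BatchNorm3d(num_features=2)

batchnorm3d(input=torch.randn(1, 2, 3, 4)) # 4D input, not 5D
# ValueError: expected 5D input (got 4D input)

batchnorm3d(input=torch.randn(1, 1, 1, 1, 6)) # C=1 doesn't match num_features=2
# RuntimeError: running_mean should contain 1 elements not 2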