*Memos:
- My post explains Convolutional Layer.
- My post explains `Conv1d()`.
- My post explains `Conv3d()`.
- My post explains `manual_seed()`.
- My post explains `requires_grad`.
`Conv2d()` can get a 3D or 4D tensor of zero or more elements, computed by 2D convolution, from a 3D or 4D tensor of zero or more elements, as shown below:
*Memos:
- The 1st argument for initialization is `in_channels` (Required-Type: `int`). *It must be `0 <= x`.
- The 2nd argument for initialization is `out_channels` (Required-Type: `int`). *It must be `1 <= x`.
- The 3rd argument for initialization is `kernel_size` (Required-Type: `int` or `tuple` or `list` of `int`). *It must be `1 <= x`.
- The 4th argument for initialization is `stride` (Optional-Default: `1`-Type: `int` or `tuple` or `list` of `int`). *It must be `1 <= x`.
- The 5th argument for initialization is `padding` (Optional-Default: `0`-Type: `int`, `str` or `tuple` or `list` of `int`): *Memos:
  - It must be `0 <= x` if not `str`.
  - It must be either `'valid'` or `'same'` for `str`.
- The 6th argument for initialization is `dilation` (Optional-Default: `1`-Type: `int`, `tuple` or `list` of `int`). *It must be `1 <= x`.
- The 7th argument for initialization is `groups` (Optional-Default: `1`-Type: `int`). *It must be `1 <= x`.
- The 8th argument for initialization is `bias` (Optional-Default: `True`-Type: `bool`). *My post explains `bias` argument.
- The 9th argument for initialization is `padding_mode` (Optional-Default: `'zeros'`-Type: `str`). *`'zeros'`, `'reflect'`, `'replicate'` or `'circular'` can be selected.
- The 10th argument for initialization is `device` (Optional-Default: `None`-Type: `str`, `int` or `device()`): *Memos:
  - If it's `None`, `get_default_device()` is used. *My post explains `get_default_device()` and `set_default_device()`.
  - `device=` can be omitted.
  - My post explains `device` argument.
- The 11th argument for initialization is `dtype` (Optional-Default: `None`-Type: `dtype`): *Memos:
  - If it's `None`, `get_default_dtype()` is used. *My post explains `get_default_dtype()` and `set_default_dtype()`.
  - `dtype=` can be omitted.
  - My post explains `dtype` argument.
- The 1st argument is `input` (Required-Type: `tensor` of `float` or `complex`): *Memos:
  - It must be the 3D or 4D tensor of zero or more elements.
  - The number of the elements of the 3rd deepest dimension must be the same as `in_channels`.
  - Its `device` and `dtype` must be the same as `Conv2d()`'s.
  - `complex` must be set to `dtype` of `Conv2d()` to use a `complex` tensor.
  - The output tensor's `requires_grad`, which is `False` by default for the input tensor, is set to `True` by `Conv2d()`.
- `conv2d1.device` and `conv2d1.dtype` don't work.
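As a quick shape check for the `input` memos above, an unbatched 3D tensor is treated as `(C, H, W)` and a batched 4D tensor as `(N, C, H, W)`. The sizes below are my own example, not from the post:

```python
import torch
from torch import nn

# Conv2d accepts an unbatched (C, H, W) or a batched (N, C, H, W) tensor.
conv = nn.Conv2d(in_channels=2, out_channels=4, kernel_size=3)

t3d = torch.randn(2, 8, 8)     # (C=2, H=8, W=8) - C must equal in_channels
t4d = torch.randn(5, 2, 8, 8)  # (N=5, C=2, H=8, W=8)

print(conv(t3d).shape)  # torch.Size([4, 6, 6])
print(conv(t4d).shape)  # torch.Size([5, 4, 6, 6])
```

With `kernel_size=3` and no padding, each spatial dimension shrinks from 8 to 6, and the channel dimension becomes `out_channels`.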
```python
import torch
from torch import nn

tensor1 = torch.tensor([[[8., -3., 0., 1., 5., -2.]]])
tensor1.requires_grad
# False

torch.manual_seed(42)
conv2d1 = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=1)
tensor2 = conv2d1(input=tensor1)
tensor2
# tensor([[[7.0349, -1.3750, 0.9186, 1.6831, 4.7413, -0.6105]],
#         [[6.4210, -2.7091, -0.2191, 0.6109, 3.9309, -1.8791]],
#         [[-1.6724, 0.9046, 0.2018, -0.0325, -0.9696, 0.6703]]],
#        grad_fn=<SqueezeBackward1>)
tensor2.requires_grad
# True
conv2d1
# Conv2d(1, 3, kernel_size=(1, 1), stride=(1, 1))
conv2d1.in_channels
# 1
conv2d1.out_channels
# 3
conv2d1.kernel_size
# (1, 1)
conv2d1.stride
# (1, 1)
conv2d1.padding
# (0, 0)
conv2d1.dilation
# (1, 1)
conv2d1.groups
# 1
conv2d1.bias
# Parameter containing:
# tensor([0.9186, -0.2191, 0.2018], requires_grad=True)
conv2d1.padding_mode
# 'zeros'
conv2d1.weight
# Parameter containing:
# tensor([[[[0.7645]]], [[[0.8300]]], [[[-0.2343]]]], requires_grad=True)

torch.manual_seed(42)
conv2d2 = nn.Conv2d(in_channels=3, out_channels=3, kernel_size=1)
conv2d2(input=tensor2)
# tensor([[[5.9849, -2.4511, -0.1504, 0.6165, 3.6841, -1.6842]],
#         [[3.2258, 0.2207, 1.0403, 1.3134, 2.4062, 0.4939]],
#         [[-0.5434, 0.0364, -0.1217, -0.1744, -0.3853, -0.0163]]],
#        grad_fn=<SqueezeBackward1>)

torch.manual_seed(42)
conv2d = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=1, stride=1,
                   padding=0, dilation=1, groups=1, bias=True,
                   padding_mode='zeros', device=None, dtype=None)
conv2d(input=tensor1)
# tensor([[[7.0349, -1.3750, 0.9186, 1.6831, 4.7413, -0.6105]],
#         [[6.4210, -2.7091, -0.2191, 0.6109, 3.9309, -1.8791]],
#         [[-1.6724, 0.9046, 0.2018, -0.0325, -0.9696, 0.6703]]],
#        grad_fn=<SqueezeBackward1>)

my_tensor = torch.tensor([[[8., -3., 0.],
                           [1., 5., -2.]]])
torch.manual_seed(42)
conv2d = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=1)
conv2d(input=my_tensor)
# tensor([[[7.0349, -1.3750, 0.9186], [1.6831, 4.7413, -0.6105]],
#         [[6.4210, -2.7091, -0.2191], [0.6109, 3.9309, -1.8791]],
#         [[-1.6724, 0.9046, 0.2018], [-0.0325, -0.9696, 0.6703]]],
#        grad_fn=<SqueezeBackward1>)

my_tensor = torch.tensor([[[8.], [-3.], [0.],
                           [1.], [5.], [-2.]]])
torch.manual_seed(42)
conv2d = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=1)
conv2d(input=my_tensor)
# tensor([[[7.0349], [-1.3750], [0.9186], [1.6831], [4.7413], [-0.6105]],
#         [[6.4210], [-2.7091], [-0.2191], [0.6109], [3.9309], [-1.8791]],
#         [[-1.6724], [0.9046], [0.2018], [-0.0325], [-0.9696], [0.6703]]],
#        grad_fn=<SqueezeBackward1>)

my_tensor = torch.tensor([[[[8.], [-3.], [0.]],
                           [[1.], [5.], [-2.]]]])
torch.manual_seed(42)
conv2d = nn.Conv2d(in_channels=2, out_channels=3, kernel_size=1)
conv2d(input=my_tensor)
# tensor([[[[4.5675], [0.9684], [-1.5181]],
#          [[-0.2604], [4.1600], [-0.8838]],
#          [[-0.4734], [1.8016], [0.3380]]]],
#        grad_fn=<ConvolutionBackward0>)

my_tensor = torch.tensor([[[[8.+0.j], [-3.+0.j], [0.+0.j]],
                           [[1.+0.j], [5.+0.j], [-2.+0.j]]]])
torch.manual_seed(42)
conv2d = nn.Conv2d(in_channels=2, out_channels=3, kernel_size=1,
                   dtype=torch.complex64)
conv2d(input=my_tensor)
# tensor([[[[4.6816+5.4406j], [-1.9277+1.5828j], [0.8537-1.2033j]],
#          [[-1.2427+1.4569j], [-0.9155+1.5485j], [1.0295-0.9304j]],
#          [[6.1465-3.9132j], [1.7481+2.3225j], [-0.6841-0.1602j]]]],
#        grad_fn=<AddBackward0>)
```