Super Kai (Kazuya Ito)

linalg.norm in PyTorch

linalg.norm() can get the 0D or more D tensor of zero or more elements computed with a norm from a 0D or more D tensor of zero or more elements, as shown below:

*Memos:

  • linalg.norm() can be used with torch but not with a tensor. *A tensor can use torch.norm(), which is deprecated, but not linalg.norm().
  • The 1st argument with torch is input(Required-Type:tensor of float or complex): *Memos:
    • It must be the 0D or more D tensor of zero or more elements.
    • A complex input tensor returns a float tensor even if dtype=torch.complex64 is set on linalg.norm(), which is a bug.
  • The 2nd argument with torch is ord(Optional-Default:None-Type:int, float or str). *It sets which norm is computed.
  • The 3rd argument with torch is dim(Optional-Default:None-Type:int or tuple or list of int). *The length of a tuple or list of int must be 1 or 2.
  • The 4th argument with torch is keepdim(Optional-Default:False-Type:bool). *My post explains the keepdim argument(keepdim and out are also shown in the short sketch right after the table below).
  • There is an out argument with torch(Optional-Default:None-Type:tensor): *Memos:
    • out= must be used.
    • My post explains the out argument.
  • There is a dtype argument with torch(Optional-Default:None-Type:dtype): *Memos:
    • If it's None, it's inferred from input.
    • dtype= must be used.
    • My post explains the dtype argument.
  • If dim is an int or a tuple or list of int of length 1, a vector norm is computed.
  • If dim is a tuple or list of int of length 2, a matrix norm is computed.
  • If ord and dim are None, the input tensor is flattened to 1D and the L2 norm of the resulting vector is computed.
  • If ord is not None and dim is None, input must be a 1D or 2D tensor.
  • ord supports the following norms: *Memos:
    • inf can be torch.inf, float('inf'), etc.
    • For vectors, there are also the L3 norm(ord=3), the L4 norm(ord=4), etc.
| ord | Norm for vector | Norm for matrix |
| --- | --- | --- |
| None(Default) | L2 norm | Frobenius norm |
| 'fro' | Not supported | Frobenius norm |
| 'nuc' | Not supported | Nuclear norm |
| inf | max(abs(x)) | max(sum(abs(x), dim=1)) |
| -inf | min(abs(x)) | min(sum(abs(x), dim=1)) |
| 0 | sum(x != 0) (L0 norm) | Not supported |
| 1 | sum(abs(x)^ord)^(1/ord) (L1 norm) | max(sum(abs(x), dim=0)) |
| -1 | Same as above | min(sum(abs(x), dim=0)) |
| 2 | Same as above (L2 norm) | The largest singular value of the SVD(Singular Value Decomposition) |
| -2 | Same as above | The smallest singular value of the SVD |
| Other int or float | Same as above | Not supported |
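
keepdim and out from the memos above are not used in the main example below, so here is a minimal sketch of both. *The 2D tensor values are reused from the examples further down, and result is just an illustrative name:

import torch
from torch import linalg

my_tensor = torch.tensor([[-3., -2., -1., 0.],
                          [1., 2., 3., 4.]])

linalg.norm(input=my_tensor, ord=1, dim=-1, keepdim=True) # Keep the reduced dimension
# tensor([[ 6.],
#         [10.]])

result = torch.empty(2)
linalg.norm(input=my_tensor, ord=1, dim=-1, out=result) # Write the result into `result`
# tensor([ 6., 10.])
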
import torch
from torch import linalg

my_tensor = torch.tensor([-3., -2., -1., 0., 1., 2., 3., 4.])

linalg.norm(input=my_tensor) # L2 norm
# tensor(6.6332)

linalg.norm(input=my_tensor, ord=0) # L0 norm
# tensor(7.)

linalg.norm(input=my_tensor, ord=1) # L1 norm
# tensor(16.)

linalg.norm(input=my_tensor, ord=-1)
# tensor(0.)

linalg.norm(input=my_tensor, ord=2) # L2 norm
# tensor(6.6332)

linalg.norm(input=my_tensor, ord=-2)
# tensor(0.)

linalg.norm(input=my_tensor, ord=torch.inf)
# tensor(4.)

linalg.norm(input=my_tensor, ord=-torch.inf)
# tensor(0.)
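
# Quick sanity check (not in the original examples): the vector norms
# above match the formulas in the table.
my_tensor.abs().sum() # L1 norm: sum(abs(x))
# tensor(16.)

my_tensor.abs().pow(2).sum().sqrt() # L2 norm: sum(abs(x)^2)^(1/2)
# tensor(6.6332)

my_tensor.abs().max() # inf norm: max(abs(x))
# tensor(4.)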

my_tensor = torch.tensor([[-3., -2., -1., 0.],
                          [1., 2., 3., 4.]])
linalg.norm(input=my_tensor) # L2 norm
# tensor(6.6332)

linalg.norm(input=my_tensor, ord=1)
# tensor(4.)

linalg.norm(input=my_tensor, ord=-1)
# tensor(4.)

linalg.norm(input=my_tensor, ord=2)
# tensor(5.8997)

linalg.norm(input=my_tensor, ord=-2)
# tensor(3.0321)

linalg.norm(input=my_tensor, ord=torch.inf)
# tensor(10.)

linalg.norm(input=my_tensor, ord=-torch.inf)
# tensor(6.)

linalg.norm(input=my_tensor, ord='fro') # Frobenius norm
# tensor(6.6332)

linalg.norm(input=my_tensor, ord='nuc') # Nuclear norm
# tensor(8.9318)
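
# Quick sanity check (not in the original examples): the matrix norms above
# follow the singular values. ord=2 is the largest, ord=-2 the smallest,
# and ord='nuc' is their sum (8.9318).
linalg.svdvals(my_tensor)
# tensor([5.8997, 3.0321])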

my_tensor = torch.tensor([[[-3., -2.], [-1., 0.]],
                          [[1., 2.], [3., 4.]]])
linalg.norm(input=my_tensor) # L2 norm
# tensor(6.6332)

linalg.norm(input=my_tensor, ord=0, dim=2) # L0 norm
linalg.norm(input=my_tensor, ord=0, dim=-1) # L0 norm
# tensor([[2., 1.],
#         [2., 2.]])

linalg.norm(input=my_tensor, ord=1, dim=2) # L1 norm
linalg.norm(input=my_tensor, ord=1, dim=-1) # L1 norm
# tensor([[5., 1.],
#         [3., 7.]])

linalg.norm(input=my_tensor, ord=-1, dim=2)
linalg.norm(input=my_tensor, ord=-1, dim=-1)
# tensor([[1.2000, 0.0000],
#         [0.6667, 1.7143]])

linalg.norm(input=my_tensor, ord=2, dim=2) # L2 norm
linalg.norm(input=my_tensor, ord=2, dim=-1) # L2 norm
# tensor([[3.6056, 1.0000],
#         [2.2361, 5.0000]])

linalg.norm(input=my_tensor, ord=-2, dim=2)
linalg.norm(input=my_tensor, ord=-2, dim=-1)
# tensor([[1.6641, 0.0000],
#         [0.8944, 2.4000]])

linalg.norm(input=my_tensor, ord=torch.inf, dim=2)
linalg.norm(input=my_tensor, ord=torch.inf, dim=-1)
# tensor([[3., 1.],
#         [2., 4.]])

linalg.norm(input=my_tensor, ord=-torch.inf, dim=2)
linalg.norm(input=my_tensor, ord=-torch.inf, dim=-1)
# tensor([[2., 0.],
#         [1., 3.]])

# Frobenius norm
linalg.norm(input=my_tensor, ord='fro', dim=(1, 2))
# tensor([3.7417, 5.4772])

# Nuclear norm
linalg.norm(input=my_tensor, ord='nuc', dim=(1, 2))
# tensor([4.2426, 5.8310])
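
# Quick check (not in the original examples): the Frobenius norm of each 2x2
# sub-matrix is the square root of the sum of its squared elements.
my_tensor.pow(2).sum(dim=(1, 2)).sqrt()
# tensor([3.7417, 5.4772])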

my_tensor = torch.tensor([[[-3.+0.j, -2.+0.j], [-1.+0.j, 0.+0.j]],
                          [[1.+0.j, 2.+0.j], [3.+0.j, 4.+0.j]]])
linalg.norm(input=my_tensor, dtype=torch.complex64) # L2 norm
# tensor(6.6332)
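
# As noted in the memos above, the result is a float tensor even though
# dtype=torch.complex64 is passed:
linalg.norm(input=my_tensor, dtype=torch.complex64).dtype
# torch.float32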