channel_normalize

dragon.vm.torch.channel_normalize(
  input,
  mean,
  std,
  dim=-1,
  dtype='float32',
  dims=None
)[source]

Normalize channels with mean and standard deviation.

A negative dim counts from the last dimension, i.e., -k selects the k-th dimension from the end:

m = s = (1., 1., 1.)
x = torch.tensor([1, 2, 3])
print(torch.channel_normalize(x, m, s, dim=0))   # [0., 1., 2.]
print(torch.channel_normalize(x, m, s, dim=-1))  # Equivalent

If dims is provided, the input is first permuted into that layout, and dim then indexes the permuted output:

m, s = (1., 2., 3.), (1., 1., 1.)
x = torch.tensor([[1, 2, 3]])
# 3 values are provided, but the normalized dimension has
# length 1, so only the first value is used
print(torch.channel_normalize(x, m, s, dims=(1, 0)))  # [[0.], [1.], [2.]]
Parameters:
  • input (dragon.vm.torch.Tensor) – The input tensor.
  • mean (Sequence[float], required) – The mean to subtract.
  • std (Sequence[float], required) – The standard deviation to divide by.
  • dim (int, optional, default=-1) – The dimension to normalize.
  • dtype (str, optional, default='float32') – The output data type.
  • dims (Sequence[int], optional) – The order of output dimensions.
Returns:

dragon.vm.torch.Tensor – The output tensor.
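The documented behavior, an optional permutation via dims, followed by a per-channel (x - mean) / std along dim with a cast to dtype, can be sketched in NumPy. This is an illustrative reimplementation of the semantics described above, not Dragon's actual code:

```python
import numpy as np

def channel_normalize_ref(x, mean, std, dim=-1, dtype='float32', dims=None):
    """NumPy sketch of the documented channel_normalize semantics."""
    x = np.asarray(x, dtype=dtype)
    if dims is not None:
        # 'dims' gives the order of output dimensions
        x = np.transpose(x, dims)
    # Reshape mean/std to broadcast along 'dim'; values beyond
    # that dimension's length are ignored, matching the example above.
    n = x.shape[dim]
    shape = [1] * x.ndim
    shape[dim] = n
    m = np.array(mean[:n], dtype=dtype).reshape(shape)
    s = np.array(std[:n], dtype=dtype).reshape(shape)
    return (x - m) / s

# Reproduces the first example: normalize [1, 2, 3] along dim 0
print(channel_normalize_ref([1, 2, 3], (1., 1., 1.), (1., 1., 1.), dim=0))
# [0. 1. 2.]
```

The same function also reproduces the dims example: with dims=(1, 0) the (1, 3) input becomes (3, 1), and only the first mean/std value applies to the length-1 last dimension.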