channel_norm

dragon.vm.torch.nn.functional.channel_norm(
  input,
  mean,
  std,
  dim=-1,
  dtype='float32',
  dims=None
)[source]
Apply normalization to each channel of the input.

`dim` can be negative:

```python
m = s = (1., 1., 1.)
x = torch.tensor([1, 2, 3])
print(nn.functional.channel_norm(x, m, s, dim=0))   # [0., 1., 2.]
print(nn.functional.channel_norm(x, m, s, dim=-1))  # Equivalent
```

If `dims` is provided, `dim` is selected from the output layout:

```python
m, s = (1., 2., 3.), (1., 1., 1.)
x = torch.tensor([[1, 2, 3]])
# 3 values are provided to normalize the last dimension,
# which has length 1, so only the first value is taken
print(nn.functional.channel_norm(x, m, s, dims=(1, 0)))  # [[0.], [1.], [2.]]
```

Parameters:
- input (dragon.vm.torch.Tensor) – The input tensor.
- mean (Sequence[float], required) – The mean to subtract.
- std (Sequence[float], required) – The standard deviation to divide by.
- dim (int, optional, default=-1) – The channel dimension.
- dtype (str, optional, default='float32') – The output data type.
- dims (Sequence[int], optional) – The order of output dimensions.
 
Returns:
- dragon.vm.torch.Tensor – The output tensor.
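The behavior described above can be pinned down with a small reference sketch. The NumPy function below is an illustrative re-implementation, not the library's code, and its details (casting order, truncating mean/std to the channel length) are assumptions read off the examples: it casts to the output dtype, transposes to `dims` when given, then broadcasts `(x - mean) / std` along the channel dimension.

```python
import numpy as np

def channel_norm_ref(x, mean, std, dim=-1, dtype="float32", dims=None):
    """Illustrative sketch of the semantics; not the dragon implementation."""
    out = np.asarray(x).astype(dtype)
    if dims is not None:
        out = np.transpose(out, dims)   # reorder to the output layout first
    dim = dim % out.ndim                # allow a negative channel dimension
    n = out.shape[dim]                  # assumption: keep only the leading n values,
                                        # as in the dims example above
    shape = [1] * out.ndim
    shape[dim] = n
    mean = np.asarray(mean, dtype=dtype)[:n].reshape(shape)
    std = np.asarray(std, dtype=dtype)[:n].reshape(shape)
    return (out - mean) / std

print(channel_norm_ref([1, 2, 3], (1., 1., 1.), (1., 1., 1.), dim=0))
# [0. 1. 2.]
print(channel_norm_ref([[1, 2, 3]], (1., 2., 3.), (1., 1., 1.), dims=(1, 0)))
# matches the dims example above: [[0.], [1.], [2.]]
```

In typical use, mean and std carry one value per channel (for example, per-channel image statistics along a length-3 channel axis), so no truncation occurs.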
 
