layer_norm

dragon.nn.layer_norm(
  inputs,
  axis=-1,
  eps=1e-05,
  **kwargs
)[source]

Apply the layer normalization. [Ba et al., 2016]

The normalization is defined as:

\[\text{out} = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta \]
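For reference, a minimal NumPy sketch of the formula above (a plain reimplementation for illustration, not the library's fused kernel):

```python
import numpy as np

def layer_norm_ref(x, gamma, beta, axis=-1, eps=1e-5):
    """Normalize x over ``axis``, then scale by gamma and shift by beta."""
    mean = x.mean(axis=axis, keepdims=True)
    var = x.var(axis=axis, keepdims=True)
    return (x - mean) / np.sqrt(var + eps) * gamma + beta

x = np.random.randn(2, 3, 4).astype('float32')
gamma = np.ones(4, 'float32')   # scale
beta = np.zeros(4, 'float32')   # shift
y = layer_norm_ref(x, gamma, beta)
```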

Note that the number of inputs should be 3, i.e., this operator is implemented as the fused version.

However, you can still fix gamma and beta by disabling their gradients directly.

Parameters:
  • inputs (Sequence[dragon.Tensor]) – The tensor x, gamma and beta.
  • axis (int, optional, default=-1) – The channel axis.
  • eps (float, optional, default=1e-5) – The value of \(\epsilon\).
Returns:

dragon.Tensor – The output tensor.
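A minimal usage sketch; only dragon.nn.layer_norm is documented above, and dragon.constant is assumed here as the way to build the input tensors from NumPy arrays:

```python
import numpy as np
import dragon

# The fused operator takes exactly three inputs: x, gamma and beta.
x = dragon.constant(np.random.randn(2, 3, 4).astype('float32'))
gamma = dragon.constant(np.ones((4,), 'float32'))   # scale, initialized to 1
beta = dragon.constant(np.zeros((4,), 'float32'))   # shift, initialized to 0

# Normalize over the last axis (the channel axis by default).
y = dragon.nn.layer_norm([x, gamma, beta], axis=-1, eps=1e-5)
```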