LocalResponseNorm

class dragon.vm.torch.nn.LocalResponseNorm(
  size,
  alpha=0.0001,
  beta=0.75,
  k=1.0
)[source]

Apply the local response normalization. [Krizhevsky et al., 2012].

The normalization is defined as:

\[y_{i} = x_{i}\left(k + \frac{\alpha}{n} \sum_{j=\max(0, i-n/2)}^{\min(N-1,i+n/2)}x_{j}^2 \right)^{-\beta} \]

Examples:

import dragon.vm.torch as torch

m = torch.nn.LocalResponseNorm(5)
x = torch.randn(2, 5, 4, 4)  # (N, C, H, W)
y = m(x)

__init__

LocalResponseNorm.__init__(
  size,
  alpha=0.0001,
  beta=0.75,
  k=1.0
)[source]

Create a LocalResponseNorm module.

Parameters:
  • size (int, required) – The number of neighbouring channels to sum over.
  • alpha (float, optional, default=0.0001) – The scale value \(\alpha\).
  • beta (float, optional, default=0.75) – The exponent value \(\beta\).
  • k (float, optional, default=1.0) – The bias constant \(k\).
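The formula above can be checked with a plain NumPy reference implementation. This is a hedged sketch, not the library's actual kernel: it assumes channels are on axis 1 and uses an integer half-window (size // 2) for the clipped sum, matching the \(\max(0, i-n/2)\) and \(\min(N-1, i+n/2)\) bounds in the equation.

```python
import numpy as np

def local_response_norm(x, size, alpha=0.0001, beta=0.75, k=1.0):
    """Reference sketch of the LRN formula (channels on axis 1).

    y_i = x_i * (k + (alpha / n) * sum_{j in window} x_j^2) ** (-beta)
    """
    n_channels = x.shape[1]
    y = np.empty_like(x)
    half = size // 2
    for i in range(n_channels):
        lo = max(0, i - half)              # max(0, i - n/2)
        hi = min(n_channels - 1, i + half)  # min(N - 1, i + n/2)
        sq_sum = (x[:, lo:hi + 1] ** 2).sum(axis=1)
        y[:, i] = x[:, i] * (k + (alpha / size) * sq_sum) ** (-beta)
    return y
```

Note that the window is clipped at the channel boundaries, while the scale factor \(\alpha / n\) always uses the full window size \(n\); channels near the edges therefore sum over fewer neighbours.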