BatchNorm1d

class dragon.vm.torch.nn.BatchNorm1d(
  num_features,
  eps=1e-05,
  momentum=0.1,
  affine=True,
  track_running_stats=True
)[source]

Apply batch normalization over a 2d input. [Ioffe & Szegedy, 2015].

The normalization is defined as:

y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta

The running average of statistics is calculated as:

x_{\text{running}} = (1 - \text{momentum}) * x_{\text{running}} + \text{momentum} * x_{\text{batch}}
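The two formulas above can be sketched in NumPy (a hand-rolled illustration of the math, not Dragon's implementation):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize per channel: statistics are taken over the batch axis (axis 0).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps) * gamma + beta

def update_running(running, batch_stat, momentum=0.1):
    # x_running = (1 - momentum) * x_running + momentum * x_batch
    return (1 - momentum) * running + momentum * batch_stat
```

With `gamma = 1` and `beta = 0`, each channel of the output has zero mean and (up to the `eps` term) unit variance over the batch.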

__init__

BatchNorm1d.__init__(
  num_features,
  eps=1e-05,
  momentum=0.1,
  affine=True,
  track_running_stats=True
)[source]

Create a BatchNorm1d module.

Parameters:
  • num_features (int) The number of channels.
  • eps (float, optional, default=1e-5) The value of \epsilon.
  • momentum (float, optional, default=0.1) The value of \text{momentum}.
  • affine (bool, optional, default=True) True to apply an affine transformation.
  • track_running_stats (bool, optional, default=True) True to use the tracked running statistics, instead of batch statistics, in evaluation mode.
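The interaction of momentum, affine, and track_running_stats can be sketched with a minimal pure-Python class (an illustration of the train/eval behavior under these parameters, not Dragon's implementation):

```python
import numpy as np

class TinyBatchNorm1d:
    """Minimal sketch of BatchNorm1d semantics for a 2d input (N, C)."""

    def __init__(self, num_features, eps=1e-5, momentum=0.1,
                 affine=True, track_running_stats=True):
        self.eps, self.momentum = eps, momentum
        self.track_running_stats = track_running_stats
        # With affine=False, gamma/beta degenerate to the identity transform.
        self.gamma = np.ones(num_features) if affine else 1.0
        self.beta = np.zeros(num_features) if affine else 0.0
        self.running_mean = np.zeros(num_features)
        self.running_var = np.ones(num_features)
        self.training = True

    def eval(self):
        self.training = False
        return self

    def __call__(self, x):
        # Batch statistics are used in training, or always if tracking is off.
        use_batch = self.training or not self.track_running_stats
        if use_batch:
            mean, var = x.mean(axis=0), x.var(axis=0)
            if self.training and self.track_running_stats:
                m = self.momentum
                self.running_mean = (1 - m) * self.running_mean + m * mean
                self.running_var = (1 - m) * self.running_var + m * var
        else:
            mean, var = self.running_mean, self.running_var
        return (x - mean) / np.sqrt(var + self.eps) * self.gamma + self.beta
```

In training mode each forward pass normalizes with batch statistics and folds them into the running averages; after switching to eval mode, only the tracked running statistics are used, so outputs no longer depend on the composition of the evaluation batch.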