fused_batch_norm
dragon.vm.tensorflow.nn.fused_batch_norm(
    x,
    scale,
    offset,
    mean,
    variance,
    epsilon=0.001,
    data_format='NHWC',
    is_training=True,
    name=None,
    exponential_avg_factor=1.0
)
Apply batch normalization. [Ioffe & Szegedy, 2015].
The normalization is defined as:
\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta \]
The moving averages of the statistics are calculated as:
\[x_{\text{moving}} = \text{momentum} * x_{\text{moving}} + (1 - \text{momentum}) * x_{\text{batch}} \]
- Parameters:
- x (dragon.Tensor) – The input tensor.
- scale (dragon.Tensor) – The \(\gamma\) tensor.
- offset (dragon.Tensor) – The \(\beta\) tensor.
- mean (dragon.Tensor) – The running mean tensor.
- variance (dragon.Tensor) – The running variance tensor.
- epsilon (float, optional, default=1e-3) – The value of \(\epsilon\).
- data_format (str, optional, default='NHWC') – 'NCHW' or 'NHWC'.
- is_training (bool, optional, default=True) – Whether to run in training or inference mode.
- name (str, optional) – The operation name.
- exponential_avg_factor (float, optional, default=1.0) – The value of \(1 - \text{momentum}\).
- Returns:
dragon.Tensor – The output tensor.
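For illustration, a minimal usage sketch in training mode. The shapes, constant values, and the dragon.constant calls used to build the inputs are assumptions made for this example; only the fused_batch_norm signature documented above comes from this page.

import numpy as np
import dragon
from dragon.vm import tensorflow as tf

# Hypothetical NHWC feature map with 8 channels (shape is an assumption).
x = dragon.constant(np.random.randn(2, 4, 4, 8).astype('float32'))

# Per-channel affine parameters and running statistics: gamma, beta, mean, variance.
scale = dragon.constant(np.ones((8,), 'float32'))
offset = dragon.constant(np.zeros((8,), 'float32'))
mean = dragon.constant(np.zeros((8,), 'float32'))
variance = dragon.constant(np.ones((8,), 'float32'))

# Training mode: normalize with batch statistics and update the running
# statistics using exponential_avg_factor = 1 - momentum (here momentum = 0.9).
y = tf.nn.fused_batch_norm(
    x, scale, offset, mean, variance,
    epsilon=1e-3,
    data_format='NHWC',
    is_training=True,
    exponential_avg_factor=0.1,
)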