lp_normalize

dragon.math.lp_normalize(
  inputs,
  axis=None,
  p=2,
  eps=1e-12,
  reduction='sum',
  **kwargs
)[source]

Apply the Lp normalization.

The Lp-Normalization is defined as:

\[\text{out} = \frac{x}{\max(\left\|x\right\|_{p}, \epsilon)} \]

The argument ``axis`` could be an integer (negative values count from the last axis), a sequence of integers, or None:

x = dragon.constant([[1, 2, 3], [4, 5, 6]], 'float32')

# A negative ``axis`` counts backward from the last axis
print(dragon.math.lp_normalize(x, 1))
print(dragon.math.lp_normalize(x, -1))  # Equivalent

# If ``axis`` is None, a vector-style reduction
# is applied to compute a scalar norm
print(dragon.math.lp_normalize(x))

# Also, ``axis`` could be a sequence of integers
print(dragon.math.lp_normalize(x, [0, 1]))
Parameters:
  • inputs (dragon.Tensor) – The tensor \(x\).
  • axis (Union[int, Sequence[int]], optional) – The axis (or axes) to compute the norm along.
  • p (int, optional, default=2) – The order of the norm.
  • eps (float, optional, default=1e-12) – The value of \(\epsilon\).
  • reduction ({'sum', 'mean'}, optional, default='sum') – The reduction method for the norm.
Returns:

dragon.Tensor – The output tensor.
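The documented formula and axis semantics can be sketched in plain NumPy. This is a hypothetical reference implementation, not Dragon's actual code; the ``reduction='mean'`` branch assumes the mean replaces the sum inside the norm before the \(1/p\) root is taken:

```python
import numpy as np

def lp_normalize_ref(x, axis=None, p=2, eps=1e-12, reduction="sum"):
    """Sketch of out = x / max(||x||_p, eps) with Dragon-like arguments."""
    x = np.asarray(x, dtype=np.float64)
    # Normalize ``axis`` to a tuple (or None for a vector-style reduction).
    if axis is not None and not isinstance(axis, (tuple, list)):
        axis = (axis,)
    elif isinstance(axis, list):
        axis = tuple(axis)
    reduce_fn = np.mean if reduction == "mean" else np.sum
    # keepdims only matters when reducing along specific axes,
    # so the norm broadcasts back against ``x``.
    norm = reduce_fn(np.abs(x) ** p, axis=axis,
                     keepdims=axis is not None) ** (1.0 / p)
    return x / np.maximum(norm, eps)

x = np.array([[1, 2, 3], [4, 5, 6]], dtype="float32")
print(lp_normalize_ref(x, 1))    # rows divided by their L2 norms
print(lp_normalize_ref(x, -1))   # equivalent
print(lp_normalize_ref(x))       # one scalar norm over all elements
```

With ``axis=1`` each output row has unit L2 norm; with ``axis=None`` the whole tensor is scaled by a single norm, so only the flattened output has unit norm.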