L1L2

class dragon.vm.tensorflow.keras.regularizers.L1L2(
  l1=0.01,
  l2=0.01
)[source]

The L1L2 regularizer.

The L1L2 regularizer is defined as:

\[loss_{reg} = loss + \alpha\|w\|_{1} + \frac{\beta}{2}\|w\|_{2}^{2} \]

__init__

L1L2.__init__(
  l1=0.01,
  l2=0.01
)[source]

Create an L1L2 regularizer.

Parameters:
  • l1 (float, optional, default=0.01) – The value of \(\alpha\).
  • l2 (float, optional, default=0.01) – The value of \(\beta\).
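
The penalty this regularizer adds can be sketched in plain Python. This is a hypothetical reference implementation of the formula above for illustration only, not the actual dragon.vm.tensorflow API:

```python
def l1l2_penalty(w, l1=0.01, l2=0.01):
    # Illustrative helper (not part of the Dragon API):
    # penalty = alpha * |w|_1 + (beta / 2) * |w|_2^2
    return l1 * sum(abs(x) for x in w) + 0.5 * l2 * sum(x * x for x in w)

# For w = [1, -2, 3]: |w|_1 = 6 and |w|_2^2 = 14,
# so the penalty is 0.01 * 6 + 0.005 * 14 ≈ 0.13.
print(l1l2_penalty([1.0, -2.0, 3.0]))
```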