l1_l2

dragon.vm.tensorflow.keras.regularizers.l1_l2(
  l1=0.01,
  l2=0.01
)[source]

Create an L1L2 regularizer.

The L1L2 regularizer is defined as:

\[loss_{reg} = loss + \alpha|w| + \frac{\beta}{2}|w|^{2}_{2} \]

Parameters:
  • l1 (float, optional, default=0.01) The value of \(\alpha\).
  • l2 (float, optional, default=0.01) The value of \(\beta\).
Returns:

dragon.vm.tensorflow.keras.regularizers.Regularizer The regularizer.
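A minimal usage sketch, assuming the dragon.vm.tensorflow namespace mirrors the standard Keras layer API (the Dense layer and its kernel_regularizer argument are assumptions, not part of this page):

```python
from dragon.vm import tensorflow as tf

# Create an L1L2 regularizer with custom strengths
# (l1 maps to alpha, l2 maps to beta in the formula above).
reg = tf.keras.regularizers.l1_l2(l1=0.005, l2=0.01)

# Attach the regularizer to a layer's kernel weights.
# Assumes tf.keras.layers.Dense is available in the
# dragon.vm.tensorflow namespace, mirroring standard Keras.
layer = tf.keras.layers.Dense(64, kernel_regularizer=reg)
```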