softmax_cross_entropy_loss

dragon.losses.softmax_cross_entropy_loss(
  inputs,
  axis=-1,
  ignore_index=None,
  reduction='valid',
  **kwargs
)

Compute the softmax cross-entropy loss.
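
For a sparse target t, each example's loss follows the standard softmax cross-entropy formulation (a sketch of the usual definition; this page does not spell out the internals):

\ell_i = -\log \frac{\exp(x_{i, t_i})}{\sum_j \exp(x_{i, j})}

With the default reduction='valid', these per-example losses are averaged, which yields the 0.8030813 in the example below.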

Both sparse and dense targets are supported:

x = dragon.constant([[0.5, 0.5], [0.3, 0.7]])
y1 = dragon.constant([1, 0])  # Sparse targets: class indices.
y2 = dragon.constant([[0., 1.], [1., 0.]])  # Dense targets: one-hot rows.
print(dragon.losses.softmax_cross_entropy_loss([x, y1]))  # 0.8030813
print(dragon.losses.softmax_cross_entropy_loss([x, y2]))  # Equivalent
Parameters:
  • inputs (Sequence[dragon.Tensor]) – The input and target tensor.
  • axis (int, optional, default=-1) – The axis to compute the softmax over.
  • ignore_index (int, optional) – The target value to ignore.
  • reduction ({'none', 'sum', 'mean', 'valid'}, optional) – The reduction method; see the example below.
Returns:

dragon.Tensor – The output tensor.
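
Example:

The following sketch illustrates reduction and ignore_index. It assumes that reduction='none' returns the unreduced per-example losses and that 'valid' (the default) averages only over targets not equal to ignore_index, as the parameter descriptions suggest; the numbers in the comments are derived from the example above, and -1 is a hypothetical sentinel value.

import dragon

x = dragon.constant([[0.5, 0.5], [0.3, 0.7]])
y = dragon.constant([1, 0])

# Per-example losses with no reduction applied.
# Expected: [0.6931472, 0.9130153]; their mean is 0.8030813.
print(dragon.losses.softmax_cross_entropy_loss([x, y], reduction='none'))

# Mark the second target as ignored; 'valid' then averages
# over the remaining (non-ignored) entries only.
y_ignored = dragon.constant([1, -1])
print(dragon.losses.softmax_cross_entropy_loss(
    [x, y_ignored], ignore_index=-1))  # Expected: ~0.6931472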