smooth_l1_loss
dragon.vm.torch.nn.functional.smooth_l1_loss(
 input,
 target,
 beta=1.0,
 size_average=None,
 reduce=None,
 reduction='mean'
)
Compute the element-wise error that transitions from L1 to L2. [Girshick, 2015].

The SmoothL1Loss function is defined as:

\[\text{SmoothL1Loss}(x, y) = \begin{cases} 0.5 * (x - y)^{2} / beta, & \text{ if } |x - y| < beta \\ |x - y| - 0.5 * beta, & \text{ otherwise } \end{cases}\]

Parameters:
- input (dragon.vm.torch.Tensor) – The input tensor.
- target (dragon.vm.torch.Tensor) – The target tensor.
- beta (float, optional, default=1.0) – The transition point from L1 to L2 loss.
- size_average (bool, optional) – Whether to average the loss.
- reduce (bool, optional) – Whether to reduce the loss.
- reduction ({'batch_size', 'sum', 'mean'}, optional) – The reduction method.
 
Returns:
- dragon.vm.torch.Tensor – The output tensor.
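A minimal usage sketch is shown below. It assumes a working dragon installation and that the torch-compatible frontend exposes the usual `torch.tensor` constructor alongside the `dragon.vm.torch.nn.functional` module documented above; the tensor values are illustrative only.

```python
# Usage sketch for smooth_l1_loss (assumes dragon's torch frontend
# provides torch.tensor, mirroring the PyTorch API).
from dragon.vm import torch
from dragon.vm.torch.nn import functional as F

input = torch.tensor([0.5, 2.0, -3.0], dtype=torch.float32)
target = torch.tensor([0.0, 0.0, 0.0], dtype=torch.float32)

# Element-wise errors follow the definition above:
#   |x - y| < beta  -> 0.5 * (x - y)**2 / beta   (L2 branch: 0.125)
#   |x - y| >= beta -> |x - y| - 0.5 * beta      (L1 branch: 1.5, 2.5)
loss = F.smooth_l1_loss(input, target, beta=1.0, reduction='mean')
print(loss)  # mean of [0.125, 1.5, 2.5] = 1.375
```

With the default `beta=1.0`, small residuals are penalized quadratically while large residuals grow only linearly, which keeps the gradient bounded for outliers.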