KLDivLoss¶
- class dragon.vm.torch.nn.KLDivLoss(
 size_average=None,
 reduce=None,
 reduction='mean',
 log_target=False
 )[source]¶
- Compute the Kullback-Leibler divergence.
- Examples:

  m = torch.nn.KLDivLoss()
  eps = 1e-12  # Epsilon to avoid log(0)
  # Compute KL(P || Q)
  q = torch.tensor([0.0, 0.1, 0.2, 0.3, 1.0])
  p = torch.tensor([0.0, 0.3, 0.2, 0.1, 0.9])
  loss = m(torch.log(torch.clamp(q, eps)), torch.clamp(p, eps))

- See also: torch.nn.functional.kl_div(…)
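The pointwise formula is not spelled out above; as a minimal sketch, assuming this module follows the familiar torch convention loss_i = target_i * (log(target_i) - input_i) with the input already in log space, the example can be checked by hand:

  import dragon.vm.torch as torch

  eps = 1e-12  # Epsilon to avoid log(0)
  q = torch.tensor([0.0, 0.1, 0.2, 0.3, 1.0])
  p = torch.tensor([0.0, 0.3, 0.2, 0.1, 0.9])
  log_q = torch.log(torch.clamp(q, eps))
  p_safe = torch.clamp(p, eps)
  # Pointwise terms p * (log(p) - log(q)), averaged by the default
  # reduction='mean'; this should match the module output above.
  manual = torch.mean(p_safe * (torch.log(p_safe) - log_q))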
__init__¶
- KLDivLoss.__init__(
 size_average=None,
 reduce=None,
 reduction='mean',
 log_target=False
 )[source]¶
- Create a KLDivLoss module.
- Parameters:
- size_average (bool, optional) – True to set the reduction to ‘mean’.
- reduce (bool, optional) – True to set the reduction to ‘sum’ or ‘mean’.
- reduction ({'none', 'batchmean', 'mean', 'sum'}, optional) – The reduce method.
- log_target (bool, optional, default=False) – The flag indicating whether target is passed in log space (see the sketch below).
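As a minimal sketch of how these arguments combine, assuming the torch-style semantics where reduction='batchmean' divides the summed loss by the batch size and log_target=True means the target is also given in log space:

  import dragon.vm.torch as torch

  eps = 1e-12  # Epsilon to avoid log(0)
  q = torch.tensor([0.1, 0.2, 0.3, 0.4])      # Predicted distribution Q
  p = torch.tensor([0.25, 0.25, 0.25, 0.25])  # Target distribution P
  m = torch.nn.KLDivLoss(reduction='batchmean', log_target=True)
  # With log_target=True, both arguments are passed in log space.
  loss = m(torch.log(torch.clamp(q, eps)), torch.log(torch.clamp(p, eps)))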
 