backward

dragon.vm.torch.autograd.backward(
  tensors,
  grad_tensors=None,
  retain_graph=False
)

Compute the derivatives of tensors w.r.t. graph leaves.

Parameters:
  • tensors (Sequence[dragon.vm.torch.Tensor]) – The derivative targets.
  • grad_tensors (Sequence[dragon.vm.torch.Tensor], optional) – The gradient w.r.t. each of tensors; ones are used if not given.
  • retain_graph (bool, optional, default=False) – If False, free the graph used to compute grad after the backward pass.
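
Examples:

The sketch below shows a backward pass for a non-scalar target. It is a minimal illustration assuming the dragon.vm.torch tensor API mirrors PyTorch (torch.ones, requires_grad, .grad); verify the exact constructors against your build.

from dragon.vm import torch
from dragon.vm.torch import autograd

# Assumed PyTorch-style constructor and flags; adjust if your build differs.
x = torch.ones(2, 3, requires_grad=True)
y = x * 2.0

# Non-scalar target: pass ``grad_tensors`` as the upstream gradient
# (the "vector" of the vector-Jacobian product).
autograd.backward([y], grad_tensors=[torch.ones(2, 3)])
print(x.grad)  # filled with 2, since dy/dx = 2

# With retain_graph=False (the default), the graph is freed after this
# call; a second backward over ``y`` would require a fresh forward pass.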