backward

dragon.vm.torch.autograd.backward(
  tensors,
  grad_tensors=None,
  retain_graph=False
)
Compute the derivatives of tensors w.r.t. graph leaves.

Parameters:
- tensors (Sequence[dragon.vm.torch.Tensor]) – The derivative targets.
- grad_tensors (Sequence[dragon.vm.torch.Tensor], optional) – The gradient of tensors.
- retain_graph (bool, optional, default=False) – False to free the graph used to compute grads.
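To illustrate what backward computes — gradients of the target tensors accumulated into the graph leaves via the chain rule — here is a minimal pure-Python sketch of reverse-mode autodiff. The Var class and backward function are hypothetical stand-ins for illustration, not part of the Dragon API; the grad seed argument plays the role of grad_tensors (which defaults to ones).

```python
class Var:
    """Minimal scalar autograd node illustrating what backward() computes."""

    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0          # accumulated derivative w.r.t. this node
        self.parents = parents   # (parent, local_gradient) pairs

    def __add__(self, other):
        return Var(self.value + other.value,
                   parents=((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   parents=((self, other.value), (other, self.value)))


def backward(tensor, grad=1.0):
    """Propagate the seed gradient to graph leaves (chain rule)."""
    tensor.grad += grad          # accumulate, as gradients sum over paths
    for parent, local_grad in tensor.parents:
        backward(parent, grad * local_grad)


x = Var(3.0)
y = Var(2.0)
z = x * y + x          # dz/dx = y + 1 = 3, dz/dy = x = 3
backward(z)            # seed gradient of 1.0, like grad_tensors=None
print(x.grad, y.grad)  # → 3.0 3.0
```

With retain_graph=False (the default), the real implementation frees the intermediate graph after this traversal, so a second backward pass over the same graph is not possible.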
 
 
