all_reduce

dragon.vm.torch.distributed.all_reduce(
  tensor,
  op='sum',
  group=None
)[source]

Reduce the tensor across all nodes in a group.

Parameters:
  • tensor (dragon.vm.torch.Tensor) The tensor to reduce.
  • op (str, optional, default='sum') The reduction operation.
  • group (ProcessGroup, optional) The group for communication.
Returns:
  dragon.vm.torch.Tensor The output tensor.
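To clarify the semantics: after the collective completes, every node in the group holds the same tensor, the elementwise reduction of all nodes' inputs. The sketch below simulates this in pure Python with no distributed runtime; `simulated_all_reduce` and the list-based "tensors" are hypothetical stand-ins for illustration, not part of the Dragon API.

```python
# Simulate a sum/mean all-reduce across several "nodes", each holding a
# tensor represented as a plain list of floats. In a real all-reduce,
# every node ends up with the same reduced result.
def simulated_all_reduce(tensors, op="sum"):
    """Return the tensor each node would hold after the collective."""
    if op == "sum":
        reduced = [sum(values) for values in zip(*tensors)]
    elif op == "mean":
        reduced = [sum(values) / len(tensors) for values in zip(*tensors)]
    else:
        raise ValueError("unsupported op: " + op)
    # Every participating node receives an identical copy of the result.
    return [list(reduced) for _ in tensors]

# Three nodes, each contributing a different tensor.
per_node = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
results = simulated_all_reduce(per_node, op="sum")
# Each node now holds the elementwise sum [9.0, 12.0].
```

With the real API, each process would instead call `all_reduce(tensor)` on its own in-place tensor, and the reduction happens over the communication group.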