all_reduce

dragon.vm.torch.distributed.all_reduce(
  tensor,
  op='SUM',
  group=None
)[source]

Reduce the tensor(s) element-wise across all nodes in a group; after the call, every node holds the same reduced result.

Parameters:
  • tensor (Sequence[dragon.vm.torch.Tensor]) – The tensor(s) to reduce.
  • op ({'SUM', 'MEAN'}, optional) – The reduce operation.
  • group (ProcessGroup, optional) – The group for communication.
Returns:
  dragon.vm.torch.Tensor – The output tensor.
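The reduction semantics can be illustrated without a running cluster. The sketch below is a plain-Python simulation (not the Dragon API): each inner list stands for one node's copy of a tensor, and `simulate_all_reduce` is a hypothetical helper that mimics what every node observes after the collective completes, for both the `'SUM'` and `'MEAN'` operations.

```python
def simulate_all_reduce(node_tensors, op='SUM'):
    """Simulate all-reduce across per-node tensor copies.

    node_tensors: list of equal-length lists, one per node.
    Returns the post-call state: every node holds the same
    element-wise reduction of all inputs.
    """
    n = len(node_tensors)
    # Element-wise sum across the node dimension.
    reduced = [sum(vals) for vals in zip(*node_tensors)]
    if op == 'MEAN':
        reduced = [v / n for v in reduced]
    elif op != 'SUM':
        raise ValueError("op must be 'SUM' or 'MEAN'")
    # All-reduce is symmetric: each node ends up with the same result.
    return [list(reduced) for _ in range(n)]

# Two nodes, each holding a 2-element tensor.
print(simulate_all_reduce([[1, 2], [3, 4]], op='SUM'))   # [[4, 6], [4, 6]]
print(simulate_all_reduce([[1, 2], [3, 4]], op='MEAN'))  # [[2.0, 3.0], [2.0, 3.0]]
```

In the real API the reduction happens in place on each node's `tensor`, so a typical call passes the local tensor and (optionally) a process group created beforehand.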