conv_transpose2d

dragon.vm.torch.nn.functional.conv_transpose2d(
  input,
  weight,
  bias=None,
  stride=1,
  padding=0,
  output_padding=0,
  groups=1,
  dilation=1
)

Apply the 2d deconvolution (transposed convolution) to the input.

The spatial output dimension is computed as:

\[\begin{cases} \text{DK}_{size} = dilation * (\text{K}_{size} - 1) + 1 \\ \text{Dim}_{out} = (\text{Dim}_{in} - 1) * stride + \text{DK}_{size} - 2 * pad \end{cases} \]
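For example, with \(\text{Dim}_{in} = 4\), \(\text{K}_{size} = 3\), \(stride = 2\), \(dilation = 1\) and \(pad = 1\) (values chosen for illustration), this gives:

\[\text{DK}_{size} = 1 * (3 - 1) + 1 = 3, \quad \text{Dim}_{out} = (4 - 1) * 2 + 3 - 2 * 1 = 7 \]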
Parameters:
  • input (dragon.vm.torch.Tensor) – The input tensor.
  • weight (dragon.vm.torch.Tensor) – The weight tensor.
  • bias (dragon.vm.torch.Tensor, optional) – The optional bias.
  • stride (Union[int, Sequence[int]], optional, default=1) – The stride of the sliding window.
  • padding (Union[int, Sequence[int]], optional, default=0) – The zero-padding size.
  • output_padding (int, optional, default=0) – The additional padding size added to the output shape.
  • groups (int, optional, default=1) – The number of groups to split input channels.
  • dilation (Union[int, Sequence[int]], optional, default=1) – The dilation rate of the kernel.
Returns:
  dragon.vm.torch.Tensor – The output tensor.
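Example:

A minimal usage sketch. It assumes the dragon.vm.torch tensor constructors (torch.ones, torch.zeros) and the torch weight layout for transposed convolution, i.e. (in_channels, out_channels // groups, kH, kW); the shapes below are illustrative, not prescribed by this API.

from dragon.vm import torch
from dragon.vm.torch.nn import functional as F

x = torch.ones(1, 8, 4, 4)   # NCHW input: batch=1, 8 channels, 4x4 spatial size
w = torch.ones(8, 16, 3, 3)  # weight: (in_channels, out_channels // groups, 3, 3)
b = torch.zeros(16)          # one bias value per output channel

# With a 3x3 kernel, stride=2, padding=1, dilation=1:
# DK = 1 * (3 - 1) + 1 = 3 and Dim_out = (4 - 1) * 2 + 3 - 2 * 1 = 7.
y = F.conv_transpose2d(x, w, bias=b, stride=2, padding=1)
print(y.shape)  # Expected: (1, 16, 7, 7)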