ConvTranspose2d

class dragon.vm.torch.nn.ConvTranspose2d(
  in_channels,
  out_channels,
  kernel_size,
  stride=1,
  padding=0,
  output_padding=0,
  groups=1,
  bias=True,
  dilation=1
)

Apply the 2d deconvolution (transposed convolution).

Examples:

from dragon.vm import torch

m = torch.nn.ConvTranspose2d(2, 3, 2, stride=2)
x = torch.ones(2, 2, 1, 1)  # (batch, channels, height, width)
y = m(x)  # Output shape: (2, 3, 2, 2)

__init__

ConvTranspose2d.__init__(
  in_channels,
  out_channels,
  kernel_size,
  stride=1,
  padding=0,
  output_padding=0,
  groups=1,
  bias=True,
  dilation=1
)

Create a ConvTranspose2d module.

Parameters:
  • in_channels (int) The number of input channels.
  • out_channels (int) The number of output channels.
  • kernel_size (Union[int, Sequence[int]]) The size of the convolution window.
  • stride (Union[int, Sequence[int]], optional, default=1) The stride of the convolution window.
  • padding (Union[int, Sequence[int]], optional, default=0) The zero padding size.
  • output_padding (int, optional, default=0) The additional size added to the output shape.
  • groups (int, optional, default=1) The number of groups to split channels into.
  • bias (bool, optional, default=True) Whether to add a bias tensor to the output.
  • dilation (Union[int, Sequence[int]], optional, default=1) The dilation rate of the convolution.
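
The output spatial size follows the usual transposed-convolution arithmetic: out = (in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1. The helper below is a minimal sketch (not part of the dragon API) that applies this formula to the example above, assuming the same shape semantics as torch.nn.ConvTranspose2d:

def conv_transpose2d_out_size(
    in_size, kernel_size, stride=1, padding=0, output_padding=0, dilation=1
):
    # Hypothetical helper: output size along one spatial dimension.
    return ((in_size - 1) * stride - 2 * padding +
            dilation * (kernel_size - 1) + output_padding + 1)

# The example above upsamples a 1x1 input to 2x2:
print(conv_transpose2d_out_size(1, kernel_size=2, stride=2))  # 2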