Conv2d

class dragon.vm.torch.nn.Conv2d(
  in_channels,
  out_channels,
  kernel_size,
  stride=1,
  padding=0,
  dilation=1,
  groups=1,
  bias=True
)[source]

Apply the 2d convolution.

The spatial output dimension is computed as:

\[\begin{cases} \text{DK}_{size} = dilation * (\text{K}_{size} - 1) + 1 \\ \text{Dim}_{out} = (\text{Dim}_{in} + 2 * pad - \text{DK}_{size}) / stride + 1 \end{cases} \]
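The formula above can be checked with a small helper. This is a sketch, not part of the API; the function name `conv2d_out_size` is mine, and it computes the output extent along one spatial dimension using the same quantities as the equation:

```python
def conv2d_out_size(dim_in, kernel_size, stride=1, padding=0, dilation=1):
    """Spatial output size of a 2d convolution along one dimension."""
    dk = dilation * (kernel_size - 1) + 1   # dilated kernel extent (DK_size)
    return (dim_in + 2 * padding - dk) // stride + 1

# A 4x4 input with a 3x3 kernel and padding=1 keeps its spatial size:
conv2d_out_size(4, 3, padding=1)  # -> 4
```

This matches the example below, where padding=1 preserves the 4x4 spatial shape of the input.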

Examples:

m = torch.nn.Conv2d(2, 3, 3, padding=1)
x = torch.ones(2, 2, 4, 4)  # (N, C_in, H, W)
y = m(x)  # Shape: (2, 3, 4, 4), since padding=1 preserves the 4x4 spatial size

__init__

Conv2d.__init__(
  in_channels,
  out_channels,
  kernel_size,
  stride=1,
  padding=0,
  dilation=1,
  groups=1,
  bias=True
)[source]

Create a Conv2d module.

Parameters:
  • in_channels (int) – The number of input channels.
  • out_channels (int) – The number of output channels.
  • kernel_size (Union[int, Sequence[int]]) – The size of the convolution kernel.
  • stride (Union[int, Sequence[int]], optional, default=1) – The stride of the sliding window.
  • padding (Union[int, Sequence[int]], optional, default=0) – The zero-padding size.
  • dilation (Union[int, Sequence[int]], optional, default=1) – The dilation rate of the convolution kernel.
  • groups (int, optional, default=1) – The number of groups to split the input channels into.
  • bias (bool, optional, default=True) – Whether to add a bias to the output.
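To see how groups and bias affect the layer's size, the sketch below counts learnable parameters under the standard convolution weight layout, (out_channels, in_channels // groups, kH, kW). The helper name `conv2d_param_count` is mine, for illustration only:

```python
def conv2d_param_count(in_channels, out_channels, kernel_size, groups=1, bias=True):
    """Number of learnable parameters in a square-kernel Conv2d layer."""
    kh = kw = kernel_size
    # Each output channel convolves over in_channels // groups input channels.
    n = out_channels * (in_channels // groups) * kh * kw
    if bias:
        n += out_channels  # one bias value per output channel
    return n

# The module from the example above, Conv2d(2, 3, 3, padding=1):
conv2d_param_count(2, 3, 3)  # -> 3 * 2 * 3 * 3 + 3 = 57
```

Increasing groups divides the per-filter input channels, so a grouped convolution has proportionally fewer weight parameters; in_channels and out_channels must both be divisible by groups.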