conv2d

dragon.vm.torch.nn.functional.conv2d(
  input,
  weight,
  bias=None,
  stride=1,
  padding=0,
  dilation=1,
  groups=1
)

Apply the 2d convolution to the input.

The spatial output dimension is computed as:

\[\begin{cases} \text{DK}_{size} = \text{dilation} \times (\text{K}_{size} - 1) + 1 \\ \text{Dim}_{out} = (\text{Dim}_{in} + 2 \times \text{pad} - \text{DK}_{size}) / \text{stride} + 1 \end{cases} \]
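For example, with a hypothetical input of spatial size 32, a 3x3 kernel, stride 1, padding 1, and dilation 1, the spatial size is preserved:

\[\begin{cases} \text{DK}_{size} = 1 \times (3 - 1) + 1 = 3 \\ \text{Dim}_{out} = (32 + 2 \times 1 - 3) / 1 + 1 = 32 \end{cases} \]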
Parameters:
  • input (dragon.vm.torch.Tensor) – The input tensor.
  • weight (dragon.vm.torch.Tensor) – The weight tensor.
  • bias (dragon.vm.torch.Tensor, optional) – The optional bias tensor.
  • stride (Union[int, Sequence[int]], optional, default=1) – The stride of the sliding window.
  • padding (Union[int, Sequence[int]], optional, default=0) – The zero-padding size.
  • dilation (Union[int, Sequence[int]], optional, default=1) – The dilation rate of the kernel.
  • groups (int, optional, default=1) – The number of groups to split the input channels into.
Returns:

dragon.vm.torch.Tensor – The output tensor.
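
Example:

A minimal usage sketch, assuming dragon.vm.torch provides a PyTorch-like torch.ones factory (that factory is an assumption, not documented on this page); the conv2d call itself follows the signature above.

  from dragon.vm import torch
  from dragon.vm.torch.nn import functional as F

  # Assumed PyTorch-like tensor factory; torch.ones is not confirmed by this page.
  x = torch.ones(1, 3, 32, 32)   # input:  (N, C_in, H, W)
  w = torch.ones(16, 3, 3, 3)    # weight: (C_out, C_in // groups, KH, KW)
  b = torch.ones(16)             # bias:   (C_out,)

  # A 3x3 kernel with stride 1, padding 1, and dilation 1 keeps the 32x32 spatial size.
  y = F.conv2d(x, w, bias=b, stride=1, padding=1, dilation=1, groups=1)
  print(y.shape)  # expected: (1, 16, 32, 32)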