upsample

dragon.vm.torch.nn.functional.upsample(
  input,
  size=None,
  scale_factor=None,
  mode='nearest',
  align_corners=False
)

Upsample the input by interpolating values in local neighborhoods.

Specify either size or scale_factor to compute the output size:

import dragon.vm.torch as torch
from dragon.vm.torch.nn import functional as F

x = torch.ones((1, 2, 3, 4))
y = F.upsample(x, size=6)  # Shape: (1, 2, 6, 6)
z = F.upsample(x, scale_factor=2)  # Shape: (1, 2, 6, 8)
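The shape rule above can be sketched in plain Python: an integer size or scale_factor is broadcast to every spatial dimension, while a sequence applies per dimension. This is an illustration only, not the library's implementation; output_shape is a hypothetical helper:

```python
def output_shape(spatial, size=None, scale_factor=None):
    # Resolve output spatial dims from either ``size`` or ``scale_factor``.
    if size is not None:
        if isinstance(size, int):
            return [size] * len(spatial)  # broadcast scalar size
        return list(size)
    if isinstance(scale_factor, (int, float)):
        scale_factor = [scale_factor] * len(spatial)  # broadcast scalar factor
    return [int(d * s) for d, s in zip(spatial, scale_factor)]

print(output_shape([3, 4], size=6))          # [6, 6]
print(output_shape([3, 4], scale_factor=2))  # [6, 8]
```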

Set align_corners to choose how output coordinates map back to input coordinates in linear mode:

# align_corners = False
# Use half-pixel transformation
scale = float(in_size) / float(out_size)
in_coord = (out_coord + 0.5) * scale - 0.5

# align_corners = True
# Use align-corners transformation
scale = float(in_size - 1) / float(out_size - 1)
in_coord = out_coord * scale
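The two transforms above can be exercised directly. In this plain-Python sketch (input_coord is a hypothetical helper, not part of the API), note that align-corners maps the first and last output samples exactly onto the input corners, while half-pixel does not:

```python
def input_coord(out_coord, in_size, out_size, align_corners=False):
    # Map an output coordinate back to input space (linear mode).
    if align_corners:
        scale = float(in_size - 1) / float(out_size - 1)
        return out_coord * scale
    scale = float(in_size) / float(out_size)
    return (out_coord + 0.5) * scale - 0.5

# Upsampling a length-3 axis to length 6:
print([round(input_coord(i, 3, 6), 3) for i in range(6)])
# half-pixel:    [-0.25, 0.25, 0.75, 1.25, 1.75, 2.25]
print([round(input_coord(i, 3, 6, align_corners=True), 3) for i in range(6)])
# align-corners: [0.0, 0.4, 0.8, 1.2, 1.6, 2.0]
```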

Parameters:
  • input (dragon.vm.torch.Tensor) – The input tensor.
  • size (Union[int, Sequence[int]], optional) – The output size.
  • scale_factor (Union[number, Sequence[number]], optional) – The scale factor along each input dimension.
  • mode ({'nearest', 'linear'}, optional) – The interpolation mode.
  • align_corners (bool, optional, default=False) – Whether to align corners in linear interpolation.
Returns:

dragon.vm.torch.Tensor – The output tensor.
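Putting the coordinate rules together, a 1-D linear upsample can be sketched in plain Python. This is an illustration of the transforms described above, not Dragon's implementation; upsample_linear_1d is a hypothetical helper:

```python
def upsample_linear_1d(values, out_size, align_corners=False):
    # Linearly interpolate a 1-D sequence to ``out_size`` samples.
    in_size = len(values)
    out = []
    for i in range(out_size):
        if align_corners:
            x = i * (in_size - 1) / (out_size - 1)  # align-corners transform
        else:
            x = (i + 0.5) * in_size / out_size - 0.5  # half-pixel transform
        x = min(max(x, 0.0), in_size - 1)  # clamp to valid input range
        lo = int(x)
        hi = min(lo + 1, in_size - 1)
        frac = x - lo
        out.append(values[lo] * (1 - frac) + values[hi] * frac)
    return out

print([round(v, 3) for v in upsample_linear_1d([0.0, 1.0, 2.0], 6, align_corners=True)])
# [0.0, 0.4, 0.8, 1.2, 1.6, 2.0]
```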