Parameter

class dragon.vm.torch.nn.Parameter(
  tensor,
  requires_grad=True
)[source]

A wrapped tensor considered to be a module parameter.

Use this class to wrap a leaf tensor as a parameter that can be identified by torch.nn.Module:

param = torch.nn.Parameter(torch.ones(2, 3))
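
How a module "identifies" parameters can be illustrated without the library itself. The following is a conceptual sketch in plain Python (the Parameter and Module classes below are minimal stand-ins, not the dragon.vm.torch implementation): containers of this kind typically collect attributes by their wrapper type, so a plain tensor attribute is skipped while a Parameter is picked up.

```python
class Parameter:
    """Minimal stand-in for torch.nn.Parameter: a wrapper marking a tensor."""
    def __init__(self, data, requires_grad=True):
        self.data = data
        self.requires_grad = requires_grad

class Module:
    """Minimal stand-in: collects every attribute wrapped as Parameter."""
    def parameters(self):
        return [v for v in vars(self).values() if isinstance(v, Parameter)]

m = Module()
m.weight = Parameter([[1.0, 2.0], [3.0, 4.0]])  # wrapped: collected
m.cache = [[0.0, 0.0]]                          # plain data: ignored
print(len(m.parameters()))  # 1
```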

Typically, the gradient of a parameter should be computed, but you can set requires_grad to False to skip it. Freezing a parameter (excluding it from updates) can thus be implemented directly by ignoring its gradient:

param = torch.nn.Parameter(torch.ones(2, 3), requires_grad=False)
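
The effect of requires_grad=False can be sketched the same way. In the stand-in below (again plain Python, not the library's code), an optimizer-style loop updates only the parameters whose requires_grad flag is set, so the frozen parameter keeps its value:

```python
class Parameter:
    """Minimal stand-in for torch.nn.Parameter."""
    def __init__(self, data, requires_grad=True):
        self.data = data
        self.requires_grad = requires_grad

weight = Parameter([1.0, 2.0])                       # trainable
frozen = Parameter([3.0, 4.0], requires_grad=False)  # frozen

# An optimizer-style step: subtract a dummy gradient of 1.0 per element,
# but only from parameters that require gradients.
for p in (weight, frozen):
    if p.requires_grad:
        p.data = [x - 1.0 for x in p.data]

print(weight.data)  # [0.0, 1.0]
print(frozen.data)  # [3.0, 4.0] -- unchanged
```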

__init__

Parameter.__init__(
  tensor,
  requires_grad=True
)[source]

Create a Parameter.

Parameters:
  • tensor (dragon.vm.torch.Tensor) The tensor to be wrapped.
  • requires_grad (bool, optional, default=True) Whether to compute the gradient if necessary.