vm.torch.nn

Classes

class Affine : Apply the affine transformation over input.

class AvgPool2d : Apply the 2d average pooling.

class BatchNorm1d : Apply the batch normalization over 2d input. [Ioffe & Szegedy, 2015].

class BatchNorm2d : Apply the batch normalization over 3d input. [Ioffe & Szegedy, 2015].

class BatchNorm3d : Apply the batch normalization over 4d input. [Ioffe & Szegedy, 2015].

class BCEWithLogitsLoss : Compute the sigmoid cross entropy with contiguous targets.

class ConstantPad1d : Pad input according to the last dimension with a constant.

class ConstantPad2d : Pad input according to the last 2 dimensions with a constant.

class ConstantPad3d : Pad input according to the last 3 dimensions with a constant.

class Conv2d : Apply the 2d convolution.

class ConvTranspose2d : Apply the 2d deconvolution.

class CrossEntropyLoss : Compute the softmax cross entropy with sparse labels.

class CTCLoss : Compute the CTC loss with batched labels. [Graves et al., 2006].

class DepthwiseConv2d : Apply the 2d depthwise convolution. [Chollet, 2016].

class DropBlock2d : Set the spatial blocks to zero randomly. [Ghiasi et al., 2018].

class Dropout : Set the elements to zero randomly. [Srivastava et al., 2014].

class DropPath : Set the examples over the input to zero randomly. [Larsson et al., 2016].

class ELU : Apply the exponential linear unit. [Clevert et al., 2015].

class Flatten : Flatten the dimensions of input.

class GroupNorm : Apply the group normalization. [Wu & He, 2018].

class GRU : Apply a multi-layer gated recurrent unit (GRU) RNN. [Cho et al., 2014].

class GumbelSoftmax : Apply the Gumbel softmax with a temperature. [Jang et al., 2016].

class L1Loss : Compute the element-wise absolute value difference.

class LeakyReLU : Apply the leaky rectified linear unit.

class Linear : Apply the linear transformation.

class LocalResponseNorm : Apply the local response normalization. [Krizhevsky et al., 2012].

class LogSoftmax : Apply the composite of logarithm and softmax.

class LSTM : Apply a multi-layer long short-term memory (LSTM) RNN. [Hochreiter & Schmidhuber, 1997].

class LSTMCell : Apply a long short-term memory (LSTM) cell. [Hochreiter & Schmidhuber, 1997].

class MaxPool2d : Apply the 2d max pooling.

class Module : The base class of modules.

class MSELoss : Compute the element-wise squared error.

class NLLLoss : Compute the negative log-likelihood loss with sparse labels.

class Parameter : A wrapped tensor considered to be a module parameter.

class PReLU : Apply the parametric rectified linear unit. [He et al., 2015].

class ReflectionPad1d : Pad input according to the last dimension by reflecting the boundary.

class ReflectionPad2d : Pad input according to the last 2 dimensions by reflecting the boundary.

class ReflectionPad3d : Pad input according to the last 3 dimensions by reflecting the boundary.

class ReLU : Apply the rectified linear unit. [Nair & Hinton, 2010].

class ReLU6 : Apply the clipped-6 rectified linear unit. [Krizhevsky, 2010].

class ReplicationPad1d : Pad input according to the last dimension by replicating the boundary.

class ReplicationPad2d : Pad input according to the last 2 dimensions by replicating the boundary.

class ReplicationPad3d : Pad input according to the last 3 dimensions by replicating the boundary.

class RNN : Apply a multi-layer Elman RNN. [Elman, 1990].

class SELU : Apply the scaled exponential linear unit. [Klambauer et al., 2017].

class Sigmoid : Apply the sigmoid function.

class SigmoidFocalLoss : Compute the sigmoid focal loss with sparse labels. [Lin et al., 2017].

class SmoothL1Loss : Compute the element-wise error that transitions between L1 and L2. [Girshick, 2015].

class Softmax : Apply the softmax function.

class SyncBatchNorm : Apply the sync batch normalization over input. [Ioffe & Szegedy, 2015].

class Tanh : Apply the tanh function.

class Upsample : Upsample input via interpolating neighborhoods.

class UpsamplingBilinear2d : Upsample input via bilinear interpolation.

class UpsamplingNearest2d : Upsample input via nearest interpolation.

class ZeroPad2d : Pad input according to the last 2 dimensions with zeros.
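
A minimal usage sketch follows, assuming vm.torch.nn mirrors the torch.nn interface (modules are composed by subclassing Module and chained in forward) and that the package is importable as dragon.vm.torch; adjust the import to your installation. The TinyNet module, its layer sizes, and the torch.randn / torch.zeros calls that fabricate a batch are illustrative assumptions, not part of the class index above.

from dragon.vm import torch
from dragon.vm.torch import nn


class TinyNet(nn.Module):
    """Compose several of the classes listed above into a small classifier."""

    def __init__(self, num_classes=10):
        super(TinyNet, self).__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # 2d convolution
        self.bn = nn.BatchNorm2d(16)                            # batch normalization
        self.relu = nn.ReLU()                                   # rectified linear unit
        self.pool = nn.MaxPool2d(kernel_size=2)                 # 2d max pooling
        self.flatten = nn.Flatten()                             # collapse (C, H, W)
        self.fc = nn.Linear(16 * 16 * 16, num_classes)          # linear transformation

    def forward(self, x):
        x = self.pool(self.relu(self.bn(self.conv(x))))
        return self.fc(self.flatten(x))


# Forward a random batch and compute the loss on sparse class indices.
net = TinyNet()
criterion = nn.CrossEntropyLoss()
images = torch.randn(8, 3, 32, 32)           # assumed helper to fabricate an (N, C, H, W) batch
labels = torch.zeros(8, dtype=torch.int64)   # sparse class indices, all zero for illustration
loss = criterion(net(images), labels)

CrossEntropyLoss is used here because it is documented above as computing the softmax cross entropy with sparse labels, which matches the integer class indices passed as targets.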