vm.torch.nn
Classes
class AdaptiveAvgPool1d : Apply the 1d adaptive average pooling.
class AdaptiveAvgPool2d : Apply the 2d adaptive average pooling.
class AdaptiveAvgPool3d : Apply the 3d adaptive average pooling.
class AdaptiveMaxPool1d : Apply the 1d adaptive max pooling.
class AdaptiveMaxPool2d : Apply the 2d adaptive max pooling.
class AdaptiveMaxPool3d : Apply the 3d adaptive max pooling.
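As an illustrative sketch of what the adaptive pooling classes compute (plain NumPy, not the vm.torch.nn API; the function name and bin-boundary rule below are assumptions), adaptive average pooling splits the input length into `output_size` bins and averages each:

```python
import numpy as np

def adaptive_avg_pool1d(x, output_size):
    # Hypothetical NumPy sketch: bin i averages the slice
    # x[floor(i*L/out) : ceil((i+1)*L/out)], the usual adaptive-pooling
    # boundary convention (an assumption, not the library's exact kernel).
    length = x.shape[-1]
    out = np.empty(x.shape[:-1] + (output_size,), dtype=x.dtype)
    for i in range(output_size):
        start = (i * length) // output_size
        end = -(-((i + 1) * length) // output_size)  # ceiling division
        out[..., i] = x[..., start:end].mean(axis=-1)
    return out

x = np.arange(6, dtype=np.float64)   # [0, 1, 2, 3, 4, 5]
print(adaptive_avg_pool1d(x, 3))     # bins [0,1], [2,3], [4,5] -> [0.5, 2.5, 4.5]
```

The max-pooling variants replace `mean` with `max` over the same bins.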
class Affine : Apply the affine transformation.
class AvgPool1d : Apply the 1d average pooling.
class AvgPool2d : Apply the 2d average pooling.
class AvgPool3d : Apply the 3d average pooling.
class BatchNorm1d : Apply the batch normalization over 2d input. [Ioffe & Szegedy, 2015].
class BatchNorm2d : Apply the batch normalization over 3d input. [Ioffe & Szegedy, 2015].
class BatchNorm3d : Apply the batch normalization over 4d input. [Ioffe & Szegedy, 2015].
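The batch-normalization semantics can be sketched in plain NumPy (an illustrative training-mode sketch under my own naming, not the vm.torch.nn API; running statistics are omitted): each channel is normalized over the batch and spatial axes, then scaled and shifted.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Hypothetical sketch: normalize per channel over all axes except
    # the channel axis (axis 1), then apply the learned scale/shift.
    axes = (0,) + tuple(range(2, x.ndim))
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma.reshape(mean.shape) * x_hat + beta.reshape(mean.shape)

x = np.random.randn(4, 3, 8, 8)                 # (N, C, H, W), as BatchNorm2d expects
y = batch_norm(x, np.ones(3), np.zeros(3))      # per-channel mean ~0, var ~1
```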
class BCEWithLogitsLoss : Compute the sigmoid cross entropy with continuous targets.
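Sigmoid cross entropy on logits is usually computed in the numerically stable log-sum-exp form rather than by composing `sigmoid` and `log`. A NumPy sketch of that standard formulation (illustrative only; the function name is my own):

```python
import numpy as np

def bce_with_logits(logits, targets):
    # Stable form of -[t*log(sigmoid(x)) + (1-t)*log(1-sigmoid(x))]:
    # max(x, 0) - x*t + log(1 + exp(-|x|)), averaged over elements.
    x = np.asarray(logits, dtype=np.float64)
    t = np.asarray(targets, dtype=np.float64)
    return np.mean(np.maximum(x, 0) - x * t + np.log1p(np.exp(-np.abs(x))))

print(bce_with_logits([0.0], [1.0]))   # log(2) ~ 0.6931
```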
class ChannelShuffle : Shuffle channels between groups. [Zhang et al., 2017].
class ConstantPad1d : Pad input according to the last dimension with a constant.
class ConstantPad2d : Pad input according to the last 2-dimensions with a constant.
class ConstantPad3d : Pad input according to the last 3-dimensions with a constant.
class Conv1d : Apply the 1d convolution.
class Conv2d : Apply the 2d convolution.
class Conv3d : Apply the 3d convolution.
class ConvTranspose1d : Apply the 1d deconvolution.
class ConvTranspose2d : Apply the 2d deconvolution.
class ConvTranspose3d : Apply the 3d deconvolution.
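What the convolution classes compute can be shown with a naive NumPy sketch (illustrative only, my own naming; no padding, dilation, groups, or bias). Note that deep-learning "convolution" is cross-correlation: the kernel is slid over the input without being flipped.

```python
import numpy as np

def conv1d(x, weight, stride=1):
    # x: (in_channels, L); weight: (out_channels, in_channels, K).
    # Each output position is the sum of an input window times the kernel.
    c_in, length = x.shape
    c_out, _, k = weight.shape
    out_len = (length - k) // stride + 1
    out = np.zeros((c_out, out_len))
    for o in range(c_out):
        for i in range(out_len):
            out[o, i] = np.sum(x[:, i * stride:i * stride + k] * weight[o])
    return out

x = np.arange(5, dtype=np.float64)[None, :]   # (1, 5)
w = np.ones((1, 1, 3))                        # moving-sum kernel
print(conv1d(x, w))                           # [[3., 6., 9.]]
```

The 2d/3d and transposed ("deconvolution") variants generalize the same windowed product to more spatial axes and to the gradient-of-convolution mapping, respectively.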
class CosineSimilarity : Compute the cosine similarity between inputs.
class CrossEntropyLoss : Compute the softmax cross entropy.
class CTCLoss : Compute the CTC loss. [Graves et al., 2006].
class DepthwiseConv2d : Apply the 2d depthwise convolution. [Chollet, 2016].
class DropBlock2d : Set the spatial blocks to zero randomly. [Ghiasi et al., 2018].
class Dropout : Set the elements to zero randomly. [Srivastava et al., 2014].
class DropPath : Set entire examples in the input to zero randomly. [Larsson et al., 2016].
class ELU : Apply the exponential linear unit. [Clevert et al., 2015].
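The dropout family shares one idea, sketched here in plain NumPy (illustrative, my own naming): zero a random subset of the input during training and rescale the survivors so the expected value is unchanged ("inverted dropout"). The variants differ only in what gets zeroed: elements (Dropout), contiguous spatial blocks (DropBlock2d), or whole examples (DropPath).

```python
import numpy as np

def dropout(x, p=0.5, rng=None):
    # Inverted dropout: drop each element with probability p and scale
    # survivors by 1/(1-p), so E[output] == input.
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

x = np.ones(10000)
y = dropout(x, p=0.3, rng=np.random.default_rng(0))
print(y.mean())   # close to 1.0 in expectation
```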
class Flatten : Flatten the dimensions of input.
class GELU : Apply the Gaussian error linear unit. [Hendrycks & Gimpel, 2016].
class GroupNorm : Apply the group normalization. [Wu & He, 2018].
class GRU : Apply a multi-layer gated recurrent unit (GRU) RNN. [Cho et al., 2014].
class GumbelSoftmax : Apply the Gumbel softmax with a temperature. [Jang et al., 2016].
class Hardsigmoid : Apply the hard sigmoid function.
class Hardswish : Apply the hard swish function. [Howard et al., 2019].
class Identity : Apply the identity transformation.
class KLDivLoss : Compute the Kullback-Leibler divergence.
class L1Loss : Compute the element-wise absolute value difference.
class LayerNorm : Apply the layer normalization. [Ba et al., 2016].
class LeakyReLU : Apply the leaky rectified linear unit.
class Linear : Apply the linear transformation.
class LocalResponseNorm : Apply the local response normalization. [Krizhevsky et al., 2012].
class LogSoftmax : Apply the composite of logarithm and softmax.
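Computing the logarithm and softmax as one composite is preferred for numerical stability; a NumPy sketch of the standard max-shift trick (illustrative only, my own naming):

```python
import numpy as np

def log_softmax(x, axis=-1):
    # Subtract the max before exponentiating so exp() cannot overflow;
    # the result equals log(softmax(x)) exactly.
    shifted = x - np.max(x, axis=axis, keepdims=True)
    return shifted - np.log(np.sum(np.exp(shifted), axis=axis, keepdims=True))

logits = np.array([1000.0, 1001.0, 1002.0])   # naive exp() would overflow here
print(log_softmax(logits))
```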
class LSTM : Apply a multi-layer long short-term memory (LSTM) RNN. [Hochreiter & Schmidhuber, 1997].
class LSTMCell : Apply a long short-term memory (LSTM) cell. [Hochreiter & Schmidhuber, 1997].
class MaxPool1d : Apply the 1d max pooling.
class MaxPool2d : Apply the 2d max pooling.
class MaxPool3d : Apply the 3d max pooling.
class Module : The base class of modules.
class ModuleList : The list module container.
class MSELoss : Compute the element-wise squared error.
class MultiheadAttention : Apply the multihead attention. [Vaswani et al., 2017].
class NLLLoss : Compute the negative log-likelihood loss.
class Parameter : A wrapped tensor considered to be a module parameter.
class PixelShuffle : Rearrange depth elements into pixels.
class PixelUnshuffle : Rearrange pixels into depth elements.
class PReLU : Apply the parametric rectified linear unit. [He et al., 2015].
class ReflectionPad1d : Pad input according to the last dimension by reflecting boundary.
class ReflectionPad2d : Pad input according to the last 2-dimensions by reflecting boundary.
class ReflectionPad3d : Pad input according to the last 3-dimensions by reflecting boundary.
class ReLU : Apply the rectified linear unit. [Nair & Hinton, 2010].
class ReLU6 : Apply the clipped-6 rectified linear unit. [Krizhevsky, 2010].
class ReplicationPad1d : Pad input according to the last dimension by replicating boundary.
class ReplicationPad2d : Pad input according to the last 2-dimensions by replicating boundary.
class ReplicationPad3d : Pad input according to the last 3-dimensions by replicating boundary.
class RNN : Apply a multi-layer Elman RNN. [Elman, 1990].
class SELU : Apply the scaled exponential linear unit. [Klambauer et al., 2017].
class Sequential : The sequential module container.
class Sigmoid : Apply the sigmoid function.
class SigmoidFocalLoss : Compute the sigmoid focal loss. [Lin et al., 2017].
class SiLU : Apply the sigmoid linear unit. [Hendrycks & Gimpel, 2016].
class SmoothL1Loss : Compute the element-wise error transited from L1 and L2. [Girshick, 2015].
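The "transition from L1 to L2" in the smooth L1 loss can be written directly in NumPy (illustrative sketch, my own naming; `beta` marks the crossover point, an assumption matching the common formulation):

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    # Quadratic (L2-like) for residuals below beta, linear (L1-like)
    # above it, with the two pieces meeting smoothly at |d| == beta.
    d = np.abs(np.asarray(pred, dtype=np.float64) - np.asarray(target, dtype=np.float64))
    return np.mean(np.where(d < beta, 0.5 * d * d / beta, d - 0.5 * beta))

print(smooth_l1([0.5], [0.0]))   # 0.125 (quadratic branch)
print(smooth_l1([2.0], [0.0]))   # 1.5   (linear branch)
```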
class Softmax : Apply the softmax function.
class SyncBatchNorm : Apply the sync batch normalization over input. [Ioffe & Szegedy, 2015].
class Tanh : Apply the tanh function.
class TransformerDecoder : Standard transformer decoder. [Vaswani et al., 2017].
class TransformerDecoderLayer : Layer for a standard transformer decoder. [Vaswani et al., 2017].
class TransformerEncoder : Standard transformer encoder. [Vaswani et al., 2017].
class TransformerEncoderLayer : Layer for a standard transformer encoder. [Vaswani et al., 2017].
class Unfold : Extract the sliding blocks.
class Upsample : Upsample input by interpolating neighborhoods.
class UpsamplingBilinear2d : Upsample input via bilinear interpolation.
class UpsamplingNearest2d : Upsample input via nearest-neighbor interpolation.
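Nearest-neighbor upsampling has a one-line NumPy equivalent (illustrative sketch, my own naming): repeat each pixel along both spatial axes.

```python
import numpy as np

def upsample_nearest2d(x, scale=2):
    # Repeat each pixel `scale` times along the last two (spatial) axes.
    return np.repeat(np.repeat(x, scale, axis=-2), scale, axis=-1)

x = np.array([[1, 2],
              [3, 4]])
print(upsample_nearest2d(x))
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```

Bilinear upsampling instead blends the four nearest source pixels with distance weights.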
class ZeroPad2d : Pad input according to the last 2-dimensions with zeros.