dragon.ops

Data

List Brief
LMDBData Prefetch image data from an LMDB database.
ImageData Process images from 4D raw data.

Initializer

List Brief
Fill Fill a Tensor with a specific value.
RandomUniform Randomly initialize a Tensor with Uniform distribution.
RandomNormal Randomly initialize a Tensor with Normal distribution.
TruncatedNormal Randomly initialize a Tensor with Truncated Normal distribution.
GlorotUniform Randomly initialize a Tensor with Xavier Uniform distribution.
GlorotNormal Randomly initialize a Tensor with Kaiming Normal distribution.
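To make the fan-based initializers above concrete, here is a minimal NumPy sketch of Xavier/Glorot uniform initialization. This is illustrative only, not Dragon's API; the `scale` default and fan-averaging convention are assumptions, since the exact formula is not stated here.

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, scale=3.0):
    # Illustrative Xavier/Glorot uniform init (not Dragon's actual signature):
    # sample from U(-limit, limit) with limit = sqrt(scale / fan_avg),
    # where fan_avg = (fan_in + fan_out) / 2.
    limit = np.sqrt(scale / ((fan_in + fan_out) / 2.0))
    return np.random.uniform(-limit, limit, size=(fan_in, fan_out))
```

Scaling the bound by the fan keeps the variance of activations roughly constant across layers at initialization.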

Vision

List Brief
Conv2d 2d Convolution.
DepthwiseConv2d Depthwise 2d Convolution. [Chollet, 2016].
Conv2dTranspose 2d Deconvolution.
Pool2d 2d Pooling, MAX or AVG.
ROIPool RoI Pooling (MAX). [Girshick, 2015].
ROIAlign RoI Align (AVG). [He et al., 2017].
LRN Local Response Normalization. [Krizhevsky et al., 2012].
NNResize Resize the image with the nearest-neighbor method.
BilinearResize Resize the image with the bilinear method.
BiasAdd Add the bias across channels to a NCHW or NHWC input.
DropBlock2d Randomly drop the outputs according to the spatial blocks. [Ghiasi et al., 2018].
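As an illustration of NNResize, the nearest-neighbor method can be sketched in a few lines of NumPy. This is a sketch of the algorithm, not Dragon's op or its exact sampling convention:

```python
import numpy as np

def nn_resize(img, out_h, out_w):
    # Nearest-neighbor resize of an (H, W[, C]) array: each output pixel
    # copies the source pixel whose index scales proportionally.
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]
```

Bilinear resizing differs only in that each output pixel is a weighted average of the four nearest source pixels instead of a single copy.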

Recurrent

List Brief
RNN Multi-layer Elman-RNN with TanH or ReLU non-linearity. [Elman, 1990].
LSTM Multi-layer Long Short-Term Memory (LSTM) RNN. [Hochreiter & Schmidhuber, 1997].
GRU Multi-layer Gated Recurrent Unit (GRU) RNN. [Cho et al., 2014].
LSTMCell Single-layer Long Short-Term Memory (LSTM) Cell. [Hochreiter & Schmidhuber, 1997].

Activation

List Brief
Sigmoid Sigmoid function.
Tanh TanH function.
Relu Rectified Linear Unit function. [Nair & Hinton, 2010].
LRelu Leaky Rectified Linear Unit function.
PRelu Parametric Rectified Linear Unit function. [He et al., 2015].
Elu Exponential Linear Unit function. [Clevert et al., 2015].
SElu Scaled Exponential Linear Unit function. [Klambauer et al., 2017].
Softmax Softmax function.
Dropout Randomly set units to zero. [Srivastava et al., 2014].
DropPath Randomly set an example of the batch to zero. [Larsson et al., 2016].
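Most activations above are simple elementwise maps; Softmax is the one that couples elements along an axis. A numerically stable NumPy sketch (illustrative, not Dragon's op):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the per-row max before exponentiating so large logits
    # cannot overflow; the shift cancels in the ratio.
    z = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=axis, keepdims=True)
```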

Loss

List Brief
NLLLoss Compute the negative log-likelihood loss with sparse labels.
SparseSoftmaxCrossEntropy Compute the softmax cross entropy with sparse labels.
SigmoidCrossEntropy Compute the sigmoid cross entropy with given logits and targets.
SoftmaxCrossEntropy Compute the softmax cross entropy with given logits and one-hot labels.
SmoothL1Loss Compute the smoothed L1 loss. [Girshick, 2015].
L1Loss Compute the L1 loss.
L2Loss Compute the L2 loss.
SigmoidFocalLoss Compute the sigmoid focal loss with sparse labels. [Lin et al., 2017].
SoftmaxFocalLoss Compute the softmax focal loss with sparse labels. [Lin et al., 2017].
CTCLoss Compute the CTC loss with batched variable-length labels. [Graves et al., 2006].
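The SmoothL1Loss entry above refers to the Huber-style loss from Fast R-CNN. A minimal NumPy sketch of the elementwise formula (the `beta` threshold is an assumed parameter name; Dragon's signature may differ):

```python
import numpy as np

def smooth_l1(diff, beta=1.0):
    # Quadratic near zero, linear beyond the threshold:
    #   0.5 * d^2 / beta      if |d| < beta
    #   |d| - 0.5 * beta      otherwise
    ad = np.abs(diff)
    return np.where(ad < beta, 0.5 * ad ** 2 / beta, ad - 0.5 * beta)
```

The quadratic region makes gradients shrink near zero, while the linear region caps the gradient magnitude for outliers.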

Arithmetic

List Brief
Add Calculate A + B.
Sub Calculate A - B.
Mul Calculate A * B.
Div Calculate A / B.
Dot Calculate the vector dot product.
Pow Calculate the power of input.
Log Calculate the logarithm of input.
Exp Calculate the exponential of input.
Square Calculate the square of input.
Sqrt Calculate the sqrt of input.
Maximum Return the max value of given two inputs.
Minimum Return the min value of given two inputs.
Clip Clip the input to be between lower and higher bounds.
Matmul Matrix multiplication.
FullyConnected Calculate Y = X * W^T + b.
Eltwise Element-wise sum or product of an arbitrary number of inputs.
Affine Calculate Y = Ax + b along the given range of axes.
GramMatrix Calculate the gram matrix. [Gatys et al., 2016].
Moments Calculate the mean and variance of inputs along the given axes.
Accumulate Calculate y = alpha * x + beta * y.
MovingAverage Calculate y = (1 - decay) * x + decay * y.
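The MovingAverage update above is a one-line exponential moving average. A NumPy sketch of the formula as written (illustrative; Dragon applies it in-place on a tensor):

```python
import numpy as np

def moving_average(y, x, decay=0.9):
    # y <- (1 - decay) * x + decay * y: new values are blended in with
    # weight (1 - decay), so larger decay means a slower-moving average.
    return (1.0 - decay) * x + decay * y
```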

Normalization

List Brief
BatchNorm Batch Normalization. [Ioffe & Szegedy, 2015].
GroupNorm Group Normalization. [Wu & He, 2018].
LayerNorm Layer Normalization. [Ba et al., 2016].
InstanceNorm Instance Normalization. [Ulyanov et al., 2016].
L2Norm L2 Normalization. [Liu et al., 2015].
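All of the normalizers above share the same core step: standardize by a mean and variance, then apply a learned scale and shift; they differ only in which axes the statistics are taken over. A training-mode BatchNorm sketch over an (N, C) input (illustrative NumPy, not Dragon's op; running statistics are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each channel over the batch axis, then scale and shift.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

LayerNorm would take the statistics over the feature axis instead (axis=1 here), and GroupNorm over channel groups.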

Array

List Brief
Where Select elements from either x or y.
IndexSelect Select the elements according to the indices along the given axis.
MaskedSelect Select the elements where mask is 1.
Reduce Reduce the input along the given axes.
Sum Compute the sum along the given axis.
Mean Compute the mean along the given axis.
Max Compute the values of maximum elements along the given axis.
ArgMax Compute the indices of maximum elements along the given axis.
Min Compute the values of minimum elements along the given axis.
ArgMin Compute the indices of minimum elements along the given axis.
Slice Slice the inputs into several parts along the given axis.
Stack Stack the inputs along the given axis.
Concat Concatenate the inputs along the given axis.
ChannelShuffle Shuffle channels between groups along the given axis. [Zhang et al., 2017].
Repeat Repeat the input along the given axis.
Transpose Transpose the input according to the given permutations.
Tile Tile the input according to the given multiples.
Pad Pad the input according to the given sizes.
Crop Crop the input according to the given starts and sizes.
OneHot Generate the one-hot representation of inputs.
Flatten Flatten the input along the given axes.
Reshape Reshape the dimensions of input.
Squeeze Remove the dimensions with size 1.
ExpandDims Insert a new dimension of size 1 at the given axis.
Shape Get the dynamic shape of a Tensor.
NonZero Return the indices of non-zero elements.
Arange Return evenly spaced values within a given interval.
Multinomial Return indices sampled from the multinomial distribution.
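As an illustration of OneHot, the representation it generates can be sketched with NumPy advanced indexing (illustrative only; Dragon's op runs inside the graph and may use different defaults for dtype or off-values):

```python
import numpy as np

def one_hot(indices, depth):
    # Row i of the identity matrix is the one-hot vector for class i,
    # so indexing np.eye(depth) with the labels encodes the whole batch.
    return np.eye(depth, dtype=np.int64)[indices]
```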

Control Flow

List Brief
Copy Copy the value to ref.
Assign Assign the value to ref.
MaskedAssign Assign the value to ref where mask is 1.
Equal Compute A == B element-wise.
NotEqual Compute A != B element-wise.
Less Compute A < B element-wise.
LessEqual Compute A <= B element-wise.
Greater Compute A > B element-wise.
GreaterEqual Compute A >= B element-wise.
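MaskedAssign combines an assignment with one of the comparison masks above: values are written into the reference tensor only where the mask is 1. A NumPy sketch of the semantics (illustrative; Dragon's op mutates the ref tensor in the graph rather than returning a copy):

```python
import numpy as np

def masked_assign(ref, value, mask):
    # Copy ref, then overwrite only the positions where mask == 1.
    out = ref.copy()
    out[mask.astype(bool)] = value
    return out
```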

Misc

List Brief
Cast Cast the data type of inputs to a specific one.
Run Run a custom operator. (Without GradientFlow)
Template Run a custom operator. (With GradientFlow)
Accuracy Calculate the Top-K accuracy.
StopGradient Return the identity of input with truncated gradient flow.

Contrib

List Brief
Proposal Generate regional proposals. [Ren et al., 2015].

MPI

List Brief
MPIBroadcast Broadcast a tensor to all nodes in the MPIGroup.
MPIGather Gather a tensor from all nodes to root in the MPIGroup.