Activation

dragon.operators.activation.Relu(inputs, **kwargs)

Rectified Linear Unit function. [Nair & Hinton, 2010].

Type Constraints: (float16, float32)

Parameters: inputs (Tensor) – The input tensor.

Returns: The output tensor, calculated as: \(y = \begin{cases} x & (x > 0) \\ 0 & (x \leq 0) \end{cases}\).

Return type: Tensor
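
For reference, a minimal NumPy sketch of the formula above; this is an illustrative equivalent, not the Dragon operator itself:

    import numpy as np

    def relu(x):
        # y = x where x > 0, and 0 elsewhere
        return np.maximum(x, 0)
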
dragon.operators.activation.LRelu(inputs, slope=0.2, **kwargs)

Leaky Rectified Linear Unit function.

Type Constraints: (float16, float32)

Parameters:
  • inputs (Tensor) – The input tensor.
  • slope (float) – The slope of the negative side.
Returns:

The output tensor, calculated as: \(y = \begin{cases} x & (x > 0) \\ \text{slope} * x & (x \leq 0) \end{cases}\).

Return type:

Tensor
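
A matching NumPy sketch, again for illustration only:

    import numpy as np

    def lrelu(x, slope=0.2):
        # y = x where x > 0, slope * x elsewhere
        return np.where(x > 0, x, slope * x)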

dragon.operators.activation.PRelu(inputs, channel_shared=False, data_format='NCHW', **kwargs)

Parametric Rectified Linear Unit function. [He et al., 2015].

Type Constraints: float32

Parameters:
  • inputs (sequence of Tensor) – The input tensor and the trainable slope parameter.
  • channel_shared (bool) – Whether to share the slope parameter across channels.
  • data_format (str) – The data format, NCHW or NHWC.
Returns:

The output tensor, calculated as: \(y_{i} = \begin{cases} x_{i} & (x_{i} > 0) \\ \alpha_{i} * x_{i} & (x_{i} \leq 0) \end{cases}\).

Return type:

Tensor
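
A NumPy sketch of the per-channel broadcasting this implies; the reshape logic is an illustration, not Dragon's implementation:

    import numpy as np

    def prelu(x, slope, channel_shared=False, data_format='NCHW'):
        # slope is a scalar when channel_shared, else one value per channel
        if not channel_shared:
            axis = 1 if data_format == 'NCHW' else -1
            shape = [1] * x.ndim
            shape[axis] = -1
            slope = np.reshape(slope, shape)  # broadcast along the channel axis
        return np.where(x > 0, x, slope * x)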

dragon.operators.activation.Elu(inputs, alpha=1.0, **kwargs)

Exponential Linear Unit function. [Clevert et al., 2015].

Type Constraints: (float16, float32)

Parameters:
  • inputs (Tensor) – The input tensor.
  • alpha (float) – The value of \(\alpha\).
Returns:

The output tensor, calculated as: \(y = \begin{cases} x & (x > 0) \\ \alpha * (e^{x} - 1) & (x \leq 0) \end{cases}\).

Return type:

Tensor
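
A NumPy sketch of the formula, for illustration; the clamp only keeps exp from overflowing on the branch that np.where discards:

    import numpy as np

    def elu(x, alpha=1.0):
        # y = x where x > 0, alpha * (e^x - 1) elsewhere
        return np.where(x > 0, x, alpha * (np.exp(np.minimum(x, 0)) - 1))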

dragon.operators.activation.SElu(inputs, **kwargs)

Scaled Exponential Linear Unit function. [Klambauer et al., 2017].

Type Constraints: (float16, float32)

Parameters: inputs (Tensor) – The input tensor.

Returns: The output tensor, calculated as: \(y = 1.0507 \begin{cases} x & (x > 0) \\ 1.6733 * (e^{x} - 1) & (x \leq 0) \end{cases}\).

Return type: Tensor
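
A NumPy sketch using the two constants from the formula above, for illustration:

    import numpy as np

    def selu(x):
        scale, alpha = 1.0507, 1.6733  # constants from the formula above
        return scale * np.where(x > 0, x, alpha * (np.exp(np.minimum(x, 0)) - 1))
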
dragon.operators.activation.Sigmoid(inputs, **kwargs)

Sigmoid function.

Type Constraints: (float16, float32)

Parameters: inputs (Tensor) – The input tensor.

Returns: The output tensor, calculated as: \(y = \frac{1}{1 + e^{-x}}\).

Return type: Tensor
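
A one-line NumPy sketch of the formula, for illustration:

    import numpy as np

    def sigmoid(x):
        # y = 1 / (1 + e^{-x})
        return 1.0 / (1.0 + np.exp(-x))
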
dragon.operators.activation.Tanh(inputs, **kwargs)

Tanh function.

Type Constraints: (float16, float32)

Parameters: inputs (Tensor) – The input tensor.

Returns: The output tensor, calculated as: \(y = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}\).

Return type: Tensor
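
A NumPy sketch of the formula, for illustration; in practice np.tanh(x) is the numerically stable equivalent:

    import numpy as np

    def tanh(x):
        # y = (e^x - e^{-x}) / (e^x + e^{-x})
        ex, emx = np.exp(x), np.exp(-x)
        return (ex - emx) / (ex + emx)
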
dragon.operators.activation.Dropout(inputs, prob=0.5, scale=True, **kwargs)

Randomly set units to zero. [Srivastava et al., 2014].

Type Constraints: (float16, float32)

Parameters:
  • inputs (Tensor) – The input tensor.
  • prob (float or Tensor, optional, default=0.5) – The probability of dropping a unit.
  • scale (bool) – Whether to scale the output during training.
Returns:

The output tensor, calculated as: \(y = x \cdot \text{Bernoulli}(p = 1 - prob)\).

Return type:

Tensor
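
A NumPy sketch of the training-time behavior. The division by the keep probability when scale=True is an assumption here (the conventional inverted-dropout scaling); the docs above do not spell out how Dragon scales:

    import numpy as np

    def dropout(x, prob=0.5, scale=True):
        keep = 1.0 - prob
        mask = np.random.binomial(1, keep, size=x.shape)  # Bernoulli(p = 1 - prob)
        y = x * mask
        if scale:
            # assumed: inverted-dropout scaling, so expectations match at test time
            y = y / keep
        return y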

dragon.operators.activation.DropPath(inputs, prob=0.2, increment=0.0, **kwargs)

Randomly set an example of the batch to zero. [Larsson et al., 2016].

Set increment to schedule prob upward from 0 after each run.

Type Constraints: (float16, float32)

Parameters:
  • inputs (Tensor) – The input tensor.
  • prob (float or Tensor, optional, default=0.2) – The probability of dropping an example.
  • increment (float, optional, default=0.0) – The increment added to prob after each run.
Returns:

The output tensor.

Return type:

Tensor
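
A NumPy sketch of a single run, for illustration; the increment scheduling of prob is not modeled, and the inverted scaling mirrors the Dropout assumption above:

    import numpy as np

    def drop_path(x, prob=0.2):
        # one Bernoulli draw per example, broadcast over the remaining dims
        keep = 1.0 - prob
        shape = (x.shape[0],) + (1,) * (x.ndim - 1)
        mask = np.random.binomial(1, keep, size=shape)
        return x * mask / keep  # scaling assumed, as with Dropout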

dragon.operators.activation.Softmax(inputs, axis=1, **kwargs)

Softmax function.

Type Constraints: (float16, float32)

Parameters:
  • inputs (Tensor) – The input tensor.
  • axis (int) – The axis along which to apply softmax; it can be negative.
Returns:

The output tensor, calculated as: \(y_{i} = \frac{e^{x_{i}}}{\sum_{j} e^{x_{j}}}\).

Return type:

Tensor
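
A NumPy sketch of the formula, for illustration; subtracting the per-axis maximum is a standard stability measure that leaves the result unchanged, not necessarily what Dragon does internally:

    import numpy as np

    def softmax(x, axis=1):
        # exp of shifted values avoids overflow; the shift cancels in the ratio
        z = np.exp(x - np.max(x, axis=axis, keepdims=True))
        return z / np.sum(z, axis=axis, keepdims=True)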