Recurrent

class dragon.operators.recurrent.RNN(
   input_size,
   hidden_size,
   nonlinearity='relu',
   num_layers=1,
   bidirectional=False,
   dropout=0,
   name=None
)

Multi-layer Elman RNN with tanh or relu non-linearity. [Elman, 1990].

The data format of inputs should be [T, N, C], i.e. (time steps, batch size, channels).

Examples

>>> rnn = RNN(32, 64, num_layers=1, bidirectional=True)
>>> x = Tensor('x').Variable()
>>> outputs, hidden = rnn(x)

__init__(
   input_size,
   hidden_size,
   nonlinearity='relu',
   num_layers=1,
   bidirectional=False,
   dropout=0,
   name=None
)

Construct an RNN instance.

Parameters:
  • input_size (int) – The dimension of inputs.
  • hidden_size (int) – The dimension of hidden/outputs.
  • nonlinearity (str) – The nonlinearity, either tanh or relu.
  • num_layers (int) – The number of recurrent layers.
  • bidirectional (bool) – Whether to use a bidirectional RNN.
  • dropout (number) – The dropout ratio; 0 means disabled.
  • name (str, optional) – The optional name for weights.
Returns:

The wrapper of a general RNN.

Return type:

RNNBase
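
For intuition, the following is a minimal numpy sketch of a single recurrence step under the standard Elman formulation h_t = act(W_ih x_t + b_ih + W_hh h_{t-1} + b_hh); the weight names, shapes, and transposes here are illustrative assumptions, not Dragon's internal parameter layout.

>>> import numpy as np
>>> def elman_step(x_t, h_prev, W_ih, b_ih, W_hh, b_hh, nonlinearity='relu'):
...     # One step: h_t = act(x_t @ W_ih.T + b_ih + h_prev @ W_hh.T + b_hh)
...     act = np.tanh if nonlinearity == 'tanh' else (lambda v: np.maximum(v, 0.0))
...     return act(x_t @ W_ih.T + b_ih + h_prev @ W_hh.T + b_hh)
>>> # Illustrative shapes for input_size=32, hidden_size=64, batch size N=8:
>>> #   x_t: (8, 32), h_prev: (8, 64), W_ih: (64, 32), W_hh: (64, 64), biases: (64,)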

class dragon.operators.recurrent.LSTM(
   input_size,
   hidden_size,
   num_layers=1,
   bidirectional=False,
   dropout=0,
   name=None
)

Multi-layer Long Short-Term Memory (LSTM) RNN. [Hochreiter & Schmidhuber, 1997].

The data format of inputs should be [T, N, C], i.e. (time steps, batch size, channels).

Examples

>>> rnn = LSTM(32, 64, num_layers=2, bidirectional=True, dropout=0.5)
>>> x = Tensor('x').Variable()
>>> outputs, hidden = rnn(x)

__init__(
   input_size,
   hidden_size,
   num_layers=1,
   bidirectional=False,
   dropout=0,
   name=None
)

Construct an LSTM instance.

Parameters:
  • input_size (int) – The dimension of inputs.
  • hidden_size (int) – The dimension of hidden/outputs.
  • num_layers (int) – The number of recurrent layers.
  • bidirectional (bool) – Whether to use a bidirectional RNN.
  • dropout (number) – The dropout ratio; 0 means disabled.
  • name (str, optional) – The optional name for weights.
Returns:

The wrapper of a general RNN.

Return type:

RNNBase
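
For intuition only, a numpy sketch of the shapes involved in the bidirectional example above, assuming the common convention that the two directions are concatenated along the channel axis and that hidden states carry one slice per layer per direction; these shapes are assumptions, not a documented guarantee of this wrapper.

>>> import numpy as np
>>> T, N, input_size, hidden_size = 10, 8, 32, 64
>>> x_data = np.random.randn(T, N, input_size).astype('float32')  # a [T, N, C] batch
>>> # Assumed result shapes for the 2-layer, bidirectional LSTM above:
>>> #   outputs: [T, N, 2 * hidden_size]               -> (10, 8, 128)
>>> #   hidden:  h and c, each [2 * 2, N, hidden_size] -> (4, 8, 64)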

class dragon.operators.recurrent.GRU(
   input_size,
   hidden_size,
   num_layers=1,
   bidirectional=False,
   dropout=0,
   name=None
)

Multi-layer Gated Recurrent Unit (GRU) RNN. [Cho et al., 2014].

The data format of inputs should be [T, N, C], i.e. (time steps, batch size, channels).

Examples

>>> rnn = GRU(32, 64, num_layers=2, bidirectional=False)
>>> x = Tensor('x').Variable()
>>> outputs, hidden = rnn(x)

__init__(
   input_size,
   hidden_size,
   num_layers=1,
   bidirectional=False,
   dropout=0,
   name=None
)

Construct a GRU instance.

Parameters:
  • input_size (int) – The dimension of inputs.
  • hidden_size (int) – The dimension of hidden/outputs.
  • num_layers (int) – The number of recurrent layers.
  • bidirectional (bool) – Whether to use a bidirectional RNN.
  • dropout (number) – The dropout ratio; 0 means disabled.
  • name (str, optional) – The optional name for weights.
Returns:

The wrapper of a general RNN.

Return type:

RNNBase
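
For intuition, a minimal numpy sketch of one GRU step using the common cuDNN-style formulation (reset gate applied to the projected hidden state); the gate order r, z, n and the weight shapes are illustrative assumptions, not Dragon's internal layout.

>>> import numpy as np
>>> def sigmoid(v):
...     return 1.0 / (1.0 + np.exp(-v))
>>> def gru_step(x_t, h_prev, W_ih, b_ih, W_hh, b_hh):
...     gi = x_t @ W_ih.T + b_ih       # input projection, (N, 3 * hidden_size)
...     gh = h_prev @ W_hh.T + b_hh    # hidden projection, (N, 3 * hidden_size)
...     i_r, i_z, i_n = np.split(gi, 3, axis=1)
...     h_r, h_z, h_n = np.split(gh, 3, axis=1)
...     r = sigmoid(i_r + h_r)              # reset gate
...     z = sigmoid(i_z + h_z)              # update gate
...     n = np.tanh(i_n + r * h_n)          # candidate state
...     return (1.0 - z) * n + z * h_prev   # new hidden state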

dragon.operators.recurrent.LSTMCell(inputs, **kwargs)

Single-layer Long Short-Term Memory (LSTM) Cell. [Hochreiter & Schmidhuber, 1997].

The data format of inputs should be [N, C], i.e. (batch size, channels).

Parameters:
  • inputs (sequence of Tensor) – The inputs, representing the 4-gate concatenated x and the cell state cx respectively.
Returns:

The outputs, the new hidden state h and cell state c respectively.

Return type:

sequence of Tensor
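
For intuition, a minimal numpy sketch of the computation such a cell performs, assuming x already holds the four gate pre-activations concatenated along the channel axis in (input, forget, cell, output) order; that ordering is an assumption, not documented here.

>>> import numpy as np
>>> def sigmoid(v):
...     return 1.0 / (1.0 + np.exp(-v))
>>> def lstm_cell(x, cx):
...     # x: [N, 4 * C] gate pre-activations, cx: [N, C] previous cell state
...     i, f, g, o = np.split(x, 4, axis=1)
...     c = sigmoid(f) * cx + sigmoid(i) * np.tanh(g)   # new cell state
...     h = sigmoid(o) * np.tanh(c)                     # new hidden state
...     return h, c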