RNN

class dragon.vm.torch.nn.RNN(
  input_size,
  hidden_size,
  nonlinearity='relu',
  num_layers=1,
  bias=True,
  batch_first=False,
  dropout=0,
  bidirectional=False
)

Apply a multi-layer Elman RNN [Elman, 1990].
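
For reference, each layer follows the standard Elman recurrence, sketched here in the usual torch-style formulation (assuming Dragon matches this convention; shown with the 'relu' default, 'tanh' is analogous):

    h_t = \mathrm{ReLU}(W_{ih} x_t + b_{ih} + W_{hh} h_{t-1} + b_{hh})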

Examples:

m = torch.nn.RNN(32, 64)   # input_size=32, hidden_size=64
x = torch.ones(8, 2, 32)   # [T, N, C]; C must equal input_size
outputs, hidden = m(x)

__init__

RNN.__init__(
  input_size,
  hidden_size,
  nonlinearity='relu',
  num_layers=1,
  bias=True,
  batch_first=False,
  dropout=0,
  bidirectional=False
)

Create an RNN module.

Parameters:
  • input_size (int) – The dimension of the input.
  • hidden_size (int) – The dimension of the hidden state.
  • nonlinearity ({'tanh', 'relu'}, optional, default='relu') – The nonlinearity to apply.
  • num_layers (int, optional, default=1) – The number of recurrent layers.
  • bias (bool, optional, default=True) – True to use bias.
  • batch_first (bool, optional, default=False) – True to use the input order [N, T, C], otherwise [T, N, C].
  • dropout (number, optional, default=0) – The dropout ratio (0 disables dropout).
  • bidirectional (bool, optional, default=False) – Whether to create a bidirectional RNN.
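
As a minimal sketch of how these options combine, assuming the shape conventions match torch's (outputs are [N, T, num_directions * hidden_size] when batch_first=True, and hidden is [num_layers * num_directions, N, hidden_size]):

m = torch.nn.RNN(
    input_size=32,
    hidden_size=64,
    num_layers=2,
    batch_first=True,    # take inputs as [N, T, C]
    bidirectional=True,  # run a forward and a backward pass per layer
)
x = torch.ones(4, 8, 32)  # [N, T, C] since batch_first=True
outputs, hidden = m(x)
# outputs: [4, 8, 128] -> [N, T, num_directions * hidden_size]
# hidden:  [4, 4, 64]  -> [num_layers * num_directions, N, hidden_size]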