GRU
class dragon.vm.torch.nn.GRU(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0, bidirectional=False)

Apply a multi-layer gated recurrent unit (GRU) RNN. [Cho et al., 2014].
Examples:
import dragon.vm.torch as torch

m = torch.nn.GRU(32, 64)
x = torch.ones(8, 4, 32)  # [T, N, C] with C equal to input_size
outputs, hidden = m(x)
__init__

GRU.__init__(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0, bidirectional=False)

Create a GRU module.

Parameters:
- input_size (int) – The dimension of input.
- hidden_size (int) – The dimension of hidden state.
- num_layers (int, optional, default=1) – The number of recurrent layers.
- bias (bool, optional, default=True) – True to use bias.
- batch_first (bool, optional, default=False) – True to use order [N, T, C], otherwise [T, N, C].
- dropout (number, optional, default=0) – The dropout ratio.
- bidirectional (bool, optional, default=False) – Whether to create a bidirectional GRU (see the sketch below).
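As a minimal sketch of how batch_first and bidirectional interact, assuming this module follows the standard torch GRU semantics (forward and backward states concatenated along the feature axis, and one final hidden state per layer per direction):

import dragon.vm.torch as torch

# Bidirectional, batch-first GRU over 32-dim inputs.
m = torch.nn.GRU(32, 64, batch_first=True, bidirectional=True)
x = torch.ones(4, 8, 32)  # [N, T, C] because batch_first=True
outputs, hidden = m(x)
# outputs: [4, 8, 128] -- hidden_size * 2 directions (assumed)
# hidden:  [2, 4, 64]  -- one final state per direction (assumed)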