GRU
class dragon.nn.GRU(input_size, hidden_size, num_layers=1, bidirectional=False, dropout=0, **kwargs)

Apply a multi-layer gated recurrent unit (GRU) RNN. [Cho et al., 2014]
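For reference, the GRU recurrence in its common cuDNN-style parameterization (whether Dragon uses this exact gate form is an assumption, not stated on this page):

\[
\begin{aligned}
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) \\
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) \\
\tilde{h}_t &= \tanh(W_h x_t + r_t \odot (U_h h_{t-1}) + b_h) \\
h_t &= (1 - z_t) \odot \tilde{h}_t + z_t \odot h_{t-1}
\end{aligned}
\]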
The data format of inputs should be \((T, N, C)\), i.e. (sequence length, batch size, input channels):

import dragon

t, n, c = 8, 2, 4
m = dragon.nn.GRU(c, 16)  # input_size must equal the channel dim of x
x = dragon.constant(0, dtype='float32', shape=[t, n, c])  # a (T, N, C) input of zeros
y = m(x)
__init__

GRU.__init__(input_size, hidden_size, num_layers=1, bidirectional=False, dropout=0, **kwargs)

Create a GRU module.

Parameters:
- input_size (int) – The dimension of the input.
- hidden_size (int) – The dimension of the hidden state.
- num_layers (int, optional, default=1) – The number of stacked recurrent layers.
- bidirectional (bool, optional, default=False) – Whether to create a bidirectional GRU; a usage sketch follows this list.
- dropout (number, optional, default=0) – The dropout ratio.
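A short sketch of these parameters together. The doubling of the output channels to \(2 \times\) hidden_size under bidirectional=True follows the usual RNN convention and is an assumption here, not something stated on this page:

import dragon

t, n, c = 8, 2, 4
# Two stacked layers, both directions, with dropout between layers.
m = dragon.nn.GRU(c, 16, num_layers=2, bidirectional=True, dropout=0.5)
x = dragon.constant(0, dtype='float32', shape=[t, n, c])
y = m(x)  # expected output shape: (t, n, 2 * 16) under the convention above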