RMSprop

class dragon.vm.tensorflow.keras.optimizers.RMSprop(
  learning_rate=0.001,
  rho=0.9,
  momentum=0.0,
  epsilon=1e-07,
  name=None,
  **kwargs
)[source]

The optimizer that applies the RMSprop algorithm [Hinton et al., 2013].

The RMSprop update is defined as:

\[\text{RMSprop}(g) = \text{lr} * m_{t}, \quad \text{where} \quad \begin{cases} v_{t} = \alpha * v_{t-1} + (1 - \alpha) * g^{2} \\ m_{t} = \text{momentum} * m_{t-1} + \frac{g}{\sqrt{v_{t}} + \epsilon} \end{cases} \]
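
To make the update concrete, here is a minimal NumPy sketch of a single RMSprop step following the equations above. It is an illustration only, not Dragon's implementation; the names v, m, alpha mirror the symbols in the formula.

  import numpy as np

  def rmsprop_step(param, grad, v, m, lr=0.001, alpha=0.9,
                   momentum=0.0, epsilon=1e-7):
      # v_t: running average of squared gradients.
      v = alpha * v + (1.0 - alpha) * grad ** 2
      # m_t: momentum-smoothed, scale-normalized gradient.
      m = momentum * m + grad / (np.sqrt(v) + epsilon)
      # Apply RMSprop(g) = lr * m_t.
      return param - lr * m, v, m

  # Toy usage: descend f(x) = x^2 from x = 5.
  x, v, m = 5.0, 0.0, 0.0
  for _ in range(1000):
      x, v, m = rmsprop_step(x, 2.0 * x, v, m, lr=0.01)
  print(x)  # approaches the minimum at 0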

__init__

RMSprop.__init__(
  learning_rate=0.001,
  rho=0.9,
  momentum=0.0,
  epsilon=1e-07,
  name=None,
  **kwargs
)[source]

Create an RMSprop optimizer. A construction sketch follows the parameter list below.

Parameters:
  • learning_rate (float, optional, default=0.001) The initial value for \(\text{lr}\).
  • rho (float, optional, default=0.9) The initial value for \(\alpha\).
  • momentum (float, optional, default=0) The initial value for \(\text{momentum}\).
  • epsilon (float, optional, default=1e-7) The initial value for \(\epsilon\).
  • name (str, optional) The optional optimizer name.
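
A minimal construction sketch, assuming the `from dragon.vm import tensorflow as tf` import convention used elsewhere in the Dragon docs; the keyword arguments follow the documented signature above.

  from dragon.vm import tensorflow as tf

  opt = tf.keras.optimizers.RMSprop(
      learning_rate=0.001,  # lr in the update rule
      rho=0.9,              # alpha, the squared-gradient decay
      momentum=0.9,         # set > 0 to enable the momentum term
      epsilon=1e-7,         # denominator stabilizer
  )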

Properties

iterations

Optimizer.iterations

Return the number of steps that have run.

Returns:
int The number of iterations run.
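
For example, given an optimizer opt constructed as above, the counter can be read directly:

  step = opt.iterations  # int: number of update steps run so far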

Methods

apply_gradients

Optimizer.apply_gradients(grads_and_vars)[source]

Apply the gradients to update variables.

Parameters:
  • grads_and_vars (Sequence[Sequence[dragon.Tensor]]) The gradients and variables.
Returns:

dragon.vm.tensorflow.keras.optimizers.Optimizer The optimizer itself, to generate the update operations.
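
A hedged sketch of the call shape: grads and variables below are hypothetical names, assumed to be aligned sequences of gradients and the variables they update; how the gradients are computed is outside this method's scope.

  # Hypothetical: `grads` and `variables` are aligned sequences
  # computed elsewhere (one gradient per variable).
  grads_and_vars = list(zip(grads, variables))

  # Apply one update step; the optimizer itself is returned.
  opt.apply_gradients(grads_and_vars)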