dragon.updaters

Quick Reference

List               Brief
SGDUpdater         The Momentum-SGD Updater, introduced by [LeCun et al., 1998].
NesterovUpdater    The Nesterov-SGD Updater, introduced by [Sutskever et al., 2012].
RMSPropUpdater     The RMSProp Updater, introduced by [Hinton et al., 2013].
AdamUpdater        The Adam Updater, introduced by [Kingma & Ba, 2014].

API Reference

class dragon.updaters.BaseUpdater(
   scale_gradient=1.0,
   clip_gradient=-1.0,
   l2_decay=-1.0,
   slot=None,
   verbose=True
)

BaseUpdater pre-processes the gradients (scaling, clipping, and L2 decay) before a concrete updater applies them.

__init__(
   scale_gradient=1.0,
   clip_gradient=-1.0,
   l2_decay=-1.0,
   slot=None,
   verbose=True
)

Construct an Updater to optimize the objectives.

Parameters:
  • scale_gradient (float) – The scale factor of gradients. Default is 1.0.
  • clip_gradient (float) – The clip factor of gradients. Default is -1.0 (Disabled).
  • l2_decay (float) – The L2 decay factor. Default is -1.0 (Disabled).
  • slot (str) – The slot name of the advanced updater.
  • verbose (bool) – Whether to print the verbose information.
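
For illustration, a minimal sketch of what these pre-processing options imply, in the order scale, then clip, then decay (the helper name and exact formulas are assumptions, not Dragon's verbatim implementation):

    import numpy as np

    def preprocess_gradient(grad, value, scale_gradient=1.0,
                            clip_gradient=-1.0, l2_decay=-1.0):
        """Hypothetical helper mirroring BaseUpdater's options."""
        grad = grad * scale_gradient                  # scale_gradient
        if clip_gradient > 0:                         # clip_gradient (< 0 disables)
            norm = np.linalg.norm(grad)
            if norm > clip_gradient:
                grad = grad * (clip_gradient / norm)
        if l2_decay > 0:                              # l2_decay (< 0 disables)
            grad = grad + l2_decay * value            # L2 regularization as weight decay
        return grad
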
append(pair, lr_mult=1.0, decay_mult=1.0)

Append an UpdatePair into the updater.

Parameters:
  • pair (tuple or list) – The pair representing (values, grads).
  • lr_mult (float) – The learning rate multiplier.
  • decay_mult (float) – The decay factor multiplier.
Returns:
  None
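
A minimal usage sketch (the tensor names weight, weight_grad, bias, and bias_grad are assumptions; obtaining them from a graph or layer is not shown):

    from dragon.updaters import SGDUpdater

    updater = SGDUpdater(base_lr=0.01, momentum=0.9,
                         clip_gradient=5.0, l2_decay=0.0005)
    # Register (value, grad) pairs; biases often use a larger lr and no decay.
    updater.append((weight, weight_grad), lr_mult=1.0, decay_mult=1.0)
    updater.append((bias, bias_grad), lr_mult=2.0, decay_mult=0.0)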

class dragon.updaters.SGDUpdater(base_lr=0.01, momentum=0.9, **kwargs)

The Momentum-SGD Updater.

Introduced by [LeCun et al., 1998].

__init__(base_lr=0.01, momentum=0.9, **kwargs)

Construct a Momentum-SGD Updater to optimize the objectives.

Parameters:
  • base_lr (float) – The base learning rate.
  • momentum (float) – The momentum.
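
Conceptually, this follows the classical momentum update: a velocity accumulates the gradient and the parameter moves along it. A NumPy sketch (the function name is hypothetical, not Dragon's internal code):

    import numpy as np

    def momentum_sgd_step(value, grad, history, base_lr=0.01, momentum=0.9):
        # history is the running velocity: v = momentum * v + lr * grad
        history[:] = momentum * history + base_lr * grad
        value -= history
        return value

    w, g, v = np.ones(4), np.full(4, 0.5), np.zeros(4)
    w = momentum_sgd_step(w, g, v)    # one update step
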
class dragon.updaters.NesterovUpdater(base_lr=0.01, momentum=0.9, **kwargs)

The Nesterov-SGD Updater.

Introduced by [Sutskever et al., 2012].

__init__(base_lr=0.01, momentum=0.9, **kwargs)

Construct a Nesterov-SGD Updater to optimize the objectives.

Parameters:
  • base_lr (float) – The base learning rate.
  • momentum (float) – The momentum.
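
Nesterov momentum evaluates the step as if looking ahead along the velocity. A common reformulation used by several frameworks (a sketch only, not necessarily Dragon's exact variant):

    def nesterov_sgd_step(value, grad, history, base_lr=0.01, momentum=0.9):
        # value, grad, history are NumPy arrays; history is the running velocity.
        prev = history.copy()
        history[:] = momentum * history + base_lr * grad
        # Correction term combines the new and the previous velocity.
        value -= (1.0 + momentum) * history - momentum * prev
        return value
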
class dragon.updaters.RMSPropUpdater(base_lr=0.01, decay=0.9, eps=1e-08, **kwargs)

The RMSProp Updater.

Introduced by [Hinton et al., 2013].

__init__(base_lr=0.01, decay=0.9, eps=1e-08, **kwargs)

Construct an RMSProp Updater to optimize the objectives.

Parameters:
  • base_lr (float) – The base learning rate.
  • decay (float) – The decay factor of the moving average.
  • eps (float) – The epsilon for numerical stability.
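
The standard RMSProp rule divides the gradient by a running root-mean-square of past gradients (a sketch of the published rule, not Dragon's verbatim code):

    import numpy as np

    def rmsprop_step(value, grad, mean_square, base_lr=0.01, decay=0.9, eps=1e-8):
        # Exponential moving average of squared gradients.
        mean_square[:] = decay * mean_square + (1.0 - decay) * grad ** 2
        value -= base_lr * grad / (np.sqrt(mean_square) + eps)
        return value
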
class dragon.updaters.AdamUpdater(
   base_lr=0.01,
   beta1=0.9,
   beta2=0.999,
   eps=1e-08,
   **kwargs
)

The Adam Updater.

Introduced by [Kingma & Ba, 2014].

__init__(
   base_lr=0.01,
   beta1=0.9,
   beta2=0.999,
   eps=1e-08,
   **kwargs
)

Construct an Adam Updater to optimize the objectives.

Parameters:
  • base_lr (float) – The base learning rate.
  • beta1 (float) – The exponential decay rate for the 1st moment estimates.
  • beta2 (float) – The exponential decay rate for the 2nd moment estimates.
  • eps (float) – The epsilon for numerical stability.
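
Adam keeps exponential moving averages of the gradient and its square, with bias correction, as in the original paper (a sketch, not Dragon's internal implementation):

    import numpy as np

    def adam_step(value, grad, m, v, t, base_lr=0.01,
                  beta1=0.9, beta2=0.999, eps=1e-8):
        t += 1
        m[:] = beta1 * m + (1.0 - beta1) * grad         # 1st moment estimate
        v[:] = beta2 * v + (1.0 - beta2) * grad ** 2    # 2nd moment estimate
        m_hat = m / (1.0 - beta1 ** t)                  # bias correction
        v_hat = v / (1.0 - beta2 ** t)
        value -= base_lr * m_hat / (np.sqrt(v_hat) + eps)
        return value, t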