dragon.memonger

Quick Reference

List            Brief
ShareGrads      Enable gradient sharing globally.
IsGradsShared   Whether gradients are currently shared.
Drop            Drop (share) the inputs for the outputs.

API Reference

A simple wrapper for memory optimization tricks.

dragon.memonger.ShareGrads(enabled=True)

Enable gradient sharing globally.

Parameters: enabled (boolean) – Whether to share gradients.
Returns: None
Return type: None

Examples

>>> import dragon.memonger as opt
>>> opt.ShareGrads()

dragon.memonger.IsGradsShared()

Whether gradients are currently shared.

Returns: True if gradients are shared, otherwise False.
Return type: boolean
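
Examples

A minimal usage sketch; assuming gradient sharing has been enabled via ShareGrads, the query should report True:

>>> import dragon.memonger as opt
>>> opt.ShareGrads()
>>> opt.IsGradsShared()
True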

dragon.memonger.Drop(op_func, *args, **kwargs)

Drop (share) the inputs for the outputs.

Parameters: op_func (lambda) – The function of any operator.
Returns: The outputs of the given operator.
Return type: dragon.Tensor or list[dragon.Tensor]

Examples

>>> import dragon as dg
>>> import dragon.memonger as opt
>>> data = dg.Tensor().Variable()
>>> conv_1 = dg.Conv2d(data, num_output=8)
>>> conv_1_bn = opt.Drop(dg.BatchNorm, [conv_1, dg.Tensor().Variable(), dg.Tensor().Variable()])
>>> conv_1_relu = opt.Drop(dg.Relu, conv_1_bn)