gradients

dragon.vm.tensorflow.gradients(
  ys,
  xs,
  grad_ys=None,
  **kwargs
)

Compute the symbolic derivatives of ys w.r.t. xs.

By default, the gradient of ys is filled with ones:

import dragon
from dragon.vm import tensorflow as tf

x = tf.ones(shape=(1,))
y = x * 2
dx = tf.gradients(y, x)  # [2,]

You can set grad_ys to use an existing constant:

dy = tf.constant([2], dtype=tf.float32)
dx = tf.gradients(y, x, dy)  # [4,]

Do not call this function under eager execution:

# Wrong usage
with dragon.eager_mode():
    x = tf.ones(shape=(1,))
    y = x * 2
    dx = tf.gradients(y, x)

# Correct usage
with dragon.eager_mode():
    x = tf.ones(shape=(1,))
    with tf.GradientTape() as tape:
        y = x * 2
    dx = tape.gradient(y, x)

Parameters:
  • ys (Sequence[dragon.Tensor]) – The tensors to be differentiated.
  • xs (Sequence[dragon.Tensor]) – The tensors to differentiate ys with respect to.
  • grad_ys (Sequence[dragon.Tensor], optional) – The input gradients for ys; filled with ones if not provided.
Returns:

Sequence[dragon.Tensor] – The summed derivatives with respect to each tensor in xs.
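
Because ys and xs accept sequences, derivatives flowing into the same source tensor are accumulated, which is why the return value is described as a sum. A minimal sketch of this behavior (reusing the tf alias from above; the value in the comment assumes the default ones-filled output gradients):

x = tf.ones(shape=(1,))
y1 = x * 2
y2 = x * 3
dx = tf.gradients([y1, y2], [x])  # [5,] = d(y1)/dx + d(y2)/dx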