Backpropagating through multiple forward passes

Submitted by 旧巷老猫 on 2020-08-05 09:39:38

Question


In the usual backprop, we forward-propagate once, compute gradients, then apply them to update weights. But suppose we wish to forward-propagate twice, backprop through both passes, and only then apply gradients (i.e. skip the update after the first pass).

Suppose the following:

import tensorflow as tf

x = tf.Variable([2.])
w = tf.Variable([4.])

with tf.GradientTape(persistent=True) as tape:
    w.assign(w * x)   # stateful, in-place assignment
    y = w * w         # (w * x)^2
print(tape.gradient(y, x))  # >> None

Per the docs, a tf.Variable is a stateful object, and stateful objects block gradients; weights are tf.Variables.
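For contrast, a minimal sketch (not part of the original question) of the same computation written without the in-place assign, where the gradient does flow:

import tensorflow as tf

x = tf.Variable([2.])
w = tf.Variable([4.])

with tf.GradientTape() as tape:
    w_new = w * x         # plain tensor instead of a stateful assign
    y = w_new * w_new     # (w * x)^2
print(tape.gradient(y, x))  # >> [64.], i.e. 2 * w^2 * x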

Examples are differentiable hard attention (as opposed to RL), or simply passing a hidden state between layers across subsequent forward passes, as in the diagram below. Neither TF nor Keras has API-level support for stateful gradients; this includes RNNs, which only keep a stateful state tensor, and whose gradient does not flow beyond one batch.
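To make the failure concrete, here is a minimal sketch (mine, not from the original question): with a stateful SimpleRNN, the gradient of the second batch's output w.r.t. the first batch's input is None, because the carried state lives in a tf.Variable:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, SimpleRNN
from tensorflow.keras.models import Model

ipt = Input(batch_shape=(2, 3, 4))
out = SimpleRNN(4, return_sequences=True, stateful=True)(ipt)
model = Model(ipt, out)

x1 = tf.constant(np.random.randn(2, 3, 4), dtype='float32')
x2 = tf.constant(np.random.randn(2, 3, 4), dtype='float32')

with tf.GradientTape() as tape:
    tape.watch(x1)
    _     = model(x1, training=True)  # writes the final state into the RNN's state Variable
    outs2 = model(x2, training=True)  # depends on x1 only through that Variable
print(tape.gradient(outs2, x1))  # >> None: the state assignment blocks the path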

How can this be accomplished?


Answer 1:


We'll need to make elaborate use of tf.while_loop; from help(TensorArray):

This class is meant to be used with dynamic iteration primitives such as while_loop and map_fn. It supports gradient back-propagation via special "flow" control flow dependencies.
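As a minimal illustration (mine, not part of the original answer) that gradients do flow through a TensorArray written inside a tf.while_loop:

import tensorflow as tf

x = tf.constant([1., 2., 3.])

with tf.GradientTape() as tape:
    tape.watch(x)
    ta = tf.TensorArray(dtype='float32', size=3)

    def body(time, ta):
        return time + 1, ta.write(time, x[time] ** 2)  # store each squared element

    _, ta = tf.while_loop(lambda time, ta: time < 3, body,
                          (tf.constant(0), ta))
    y = tf.reduce_sum(ta.stack())   # sum of x_i^2
print(tape.gradient(y, x))  # >> [2. 4. 6.], i.e. 2 * x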

We thus seek to write a loop such that all outputs we are to backpropagate through are written to a TensorArray. Code accomplishing this, and its high-level description, below. At bottom is a validating example.


Description:

  • Code borrows from K.rnn, rewritten for simplicity and relevance
  • For better understanding, I suggest inspecting K.rnn, SimpleRNNCell.call, and RNN.call.
  • model_rnn has a few needless checks for the sake of case 3; a cleaner version will be linked
  • The idea's as follows: we traverse the network first bottom-to-top, then left-to-right, and write the entire forward pass to a single TensorArray under a single tf.while_loop; this ensures TF caches tensor ops throughout for backpropagation.

import tensorflow as tf

from tensorflow.python.util import nest
from tensorflow.python.ops import array_ops, tensor_array_ops
from tensorflow.python.framework import ops


def model_rnn(model, inputs, states=None, swap_batch_timestep=True):
    def step_function(inputs, states):
        out = model([inputs, *states], training=True)
        output, new_states = (out if isinstance(out, (tuple, list)) else
                              (out, states))
        return output, new_states

    def _swap_batch_timestep(input_t):
        # (samples, timesteps, channels) -> (timesteps, samples, channels)
        # iterating dim0 to feed (samples, channels) slices expected by RNN
        axes = list(range(len(input_t.shape)))
        axes[0], axes[1] = 1, 0
        return array_ops.transpose(input_t, axes)

    if swap_batch_timestep:
        inputs = nest.map_structure(_swap_batch_timestep, inputs)

    if states is None:
        states = (tf.zeros(model.inputs[0].shape, dtype='float32'),)
    initial_states = states
    input_ta, output_ta, time, time_steps_t = _process_args(model, inputs)

    def _step(time, output_ta_t, *states):
        current_input = input_ta.read(time)
        output, new_states = step_function(current_input, tuple(states))

        flat_state = nest.flatten(states)
        flat_new_state = nest.flatten(new_states)
        for state, new_state in zip(flat_state, flat_new_state):
            if isinstance(new_state, ops.Tensor):
                new_state.set_shape(state.shape)

        output_ta_t = output_ta_t.write(time, output)
        new_states = nest.pack_sequence_as(initial_states, flat_new_state)
        return (time + 1, output_ta_t) + tuple(new_states)

    final_outputs = tf.while_loop(
        body=_step,
        loop_vars=(time, output_ta) + tuple(initial_states),
        cond=lambda time, *_: tf.math.less(time, time_steps_t))

    new_states = final_outputs[2:]
    output_ta = final_outputs[1]
    outputs = output_ta.stack()
    return outputs, new_states


def _process_args(model, inputs):
    time_steps_t = tf.constant(inputs.shape[0], dtype='int32')

    # assume single-input network (excluding states)
    input_ta = tensor_array_ops.TensorArray(
        dtype=inputs.dtype,
        size=time_steps_t,
        tensor_array_name='input_ta_0').unstack(inputs)

    # assume single-input network (excluding states)
    # if having states, infer info from non-state nodes
    output_ta = tensor_array_ops.TensorArray(
        dtype=model.outputs[0].dtype,
        size=time_steps_t,
        element_shape=model.outputs[0].shape,
        tensor_array_name='output_ta_0')

    time = tf.constant(0, dtype='int32', name='time')
    return input_ta, output_ta, time, time_steps_t

Examples & validation:

Case design: we feed the same input twice, which enables certain stateful vs stateless comparisons; results also hold for differing inputs.

  • Case 0: control; other cases must match this.
  • Case 1: fail; gradients don't match, even though outputs and loss do. Backprop fails when feeding the halved sequence.
  • Case 2: gradients match case 1. It may seem we've used only one tf.while_loop, but SimpleRNN uses one of its own for the 3 timesteps, and writes to a TensorArray that's discarded; this won't do. A workaround is to implement the SimpleRNN logic ourselves.
  • Case 3: perfect match.

Note that there's no such thing as a stateful RNN cell; statefulness is implemented in the RNN base class, and we've recreated it in model_rnn. This is likewise how any other layer is to be handled: feed it one step slice at a time for every forward pass.
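As a bare-bones sketch of that idea (mine; the RNN layer's actual implementation handles much more), statefulness amounts to carrying the returned state into the next call:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import SimpleRNNCell

cell = SimpleRNNCell(4)
x = tf.constant(np.random.randn(2, 3, 4), dtype='float32')  # (samples, timesteps, channels)
state = [tf.zeros((2, 4))]  # "statefulness" = keeping this around between calls

outputs = []
for t in range(3):
    out, state = cell(x[:, t], state)  # feed one timestep slice, carry the state forward
    outputs.append(out)
outputs = tf.stack(outputs, axis=1)    # (samples, timesteps, units)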

import random
import numpy as np
import tensorflow as tf

from tensorflow.keras.layers import Input, SimpleRNN, SimpleRNNCell
from tensorflow.keras.models import Model

def reset_seeds():
    random.seed(0)
    np.random.seed(1)
    tf.compat.v1.set_random_seed(2)  # graph-level seed
    tf.random.set_seed(3)  # global seed

def print_report(case, model, outs, loss, tape, idx=1):
    print("\nCASE #%s" % case)
    print("LOSS", loss)
    print("GRADS:\n", tape.gradient(loss, model.layers[idx].weights[0]))
    print("OUTS:\n", outs)


#%%# Make data ###############################################################
reset_seeds()
x0 = y0 = tf.constant(np.random.randn(2, 3, 4))
x0_2 = y0_2 = tf.concat([x0, x0], axis=1)
x00  = y00  = tf.stack([x0, x0], axis=0)

#%%# Case 0: Complete forward pass; control case #############################
reset_seeds()
ipt = Input(batch_shape=(2, 6, 4))
out = SimpleRNN(4, return_sequences=True)(ipt)
model0 = Model(ipt, out)
model0.compile('sgd', 'mse')
#%%#############################################################
with tf.GradientTape(persistent=True) as tape:
    outs = model0(x0_2, training=True)
    loss = model0.compiled_loss(y0_2, outs)
print_report(0, model0, outs, loss, tape)

#%%# Case 1: Two passes, stateful RNN, direct feeding ########################
reset_seeds()
ipt = Input(batch_shape=(2, 3, 4))
out = SimpleRNN(4, return_sequences=True, stateful=True)(ipt)
model1 = Model(ipt, out)
model1.compile('sgd', 'mse')
#%%#############################################################
with tf.GradientTape(persistent=True) as tape:
    outs0 = model1(x0, training=True)
    tape.watch(outs0)  # cannot even diff otherwise
    outs1 = model1(x0, training=True)
    tape.watch(outs1)
    outs = tf.concat([outs0, outs1], axis=1)
    tape.watch(outs)
    loss = model1.compiled_loss(y0_2, outs)
print_report(1, model1, outs, loss, tape)

#%%# Case 2: Two passes, stateful RNN, model_rnn #############################
reset_seeds()
ipt = Input(batch_shape=(2, 3, 4))
out = SimpleRNN(4, return_sequences=True, stateful=True)(ipt)
model2 = Model(ipt, out)
model2.compile('sgd', 'mse')
#%%#############################################################
with tf.GradientTape(persistent=True) as tape:
    outs, _ = model_rnn(model2, x00, swap_batch_timestep=False)
    outs = tf.concat(list(outs), axis=1)
    loss = model2.compiled_loss(y0_2, outs)
print_report(2, model2, outs, loss, tape)

#%%# Case 3: Single pass, stateless RNN, model_rnn ###########################
reset_seeds()
ipt  = Input(batch_shape=(2, 4))
sipt = Input(batch_shape=(2, 4))
out, state = SimpleRNNCell(4)(ipt, sipt)
model3 = Model([ipt, sipt], [out, state])
model3.compile('sgd', 'mse')
#%%#############################################################
with tf.GradientTape(persistent=True) as tape:
    outs, _ = model_rnn(model3, x0_2)
    outs = tf.transpose(outs, (1, 0, 2))
    loss = model3.compiled_loss(y0_2, outs)
print_report(3, model3, outs, loss, tape, idx=2)

Vertical flow: we've validated horizontal (timewise) backpropagation; what about vertical?

To this end, we implement a stacked stateful RNN; results below. All outputs on my machine, here.

We've hereby validated both vertical and horizontal stateful backpropagation. This can be used to implement arbitrarily complex forward-prop logic with correct backprop. Applied example here.

#%%# Case 4: Complete forward pass; control case ############################
reset_seeds()
ipt = Input(batch_shape=(2, 6, 4))
x   = SimpleRNN(4, return_sequences=True)(ipt)
out = SimpleRNN(4, return_sequences=True)(x)
model4 = Model(ipt, out)
model4.compile('sgd', 'mse')
#%%
with tf.GradientTape(persistent=True) as tape:
    outs = model4(x0_2, training=True)
    loss = model4.compiled_loss(y0_2, outs)
print("=" * 80)
print_report(4, model4, outs, loss, tape, idx=1)
print_report(4, model4, outs, loss, tape, idx=2)

#%%# Case 5: Two passes, stateless RNN; model_rnn ############################
reset_seeds()
ipt = Input(batch_shape=(2, 6, 4))
out = SimpleRNN(4, return_sequences=True)(ipt)
model5a = Model(ipt, out)
model5a.compile('sgd', 'mse')

ipt  = Input(batch_shape=(2, 4))
sipt = Input(batch_shape=(2, 4))
out, state = SimpleRNNCell(4)(ipt, sipt)
model5b = Model([ipt, sipt], [out, state])
model5b.compile('sgd', 'mse')
#%%
with tf.GradientTape(persistent=True) as tape:
    outs = model5a(x0_2, training=True)
    outs, _ = model_rnn(model5b, outs)
    outs = tf.transpose(outs, (1, 0, 2))
    loss = model5a.compiled_loss(y0_2, outs)
print_report(5, model5a, outs, loss, tape)
print_report(5, model5b, outs, loss, tape, idx=2)
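Finally, tying back to the original question (my sketch, reusing model3, x0_2, y0_2 from case 3): once the multi-pass forward is recorded this way, gradients are computed through all passes and applied only once, afterwards:

optimizer = tf.keras.optimizers.SGD(1e-2)

with tf.GradientTape() as tape:
    outs, _ = model_rnn(model3, x0_2)   # both "passes" recorded inside one while_loop
    outs = tf.transpose(outs, (1, 0, 2))
    loss = model3.compiled_loss(y0_2, outs)

grads = tape.gradient(loss, model3.trainable_weights)
optimizer.apply_gradients(zip(grads, model3.trainable_weights))  # single update after both passes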


Source: https://stackoverflow.com/questions/63222770/backpropagating-through-multiple-forward-passes
