Question:
While trying to copy the weights of an LSTM cell in TensorFlow using the BasicLSTMCell as documented here, I stumbled upon both the trainable_weights and the trainable_variables property.
Sadly, the source code has not been very informative for a noob like me. A bit of experimenting did yield the following, though: both properties have exactly the same layout, a list of length two, where the first entry is a tf.Variable of shape (2*num_units, 4*num_units) and the second entry has shape (4*num_units,), with num_units being the num_units passed when initializing the BasicLSTMCell. My intuitive guess is that the first list item is a concatenation of the weights of the four internal layers of the LSTM and the second item is a concatenation of the respective biases, which obviously fits the expected sizes.
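For reference, a minimal sketch of that experiment, assuming the TF 1.x API (where BasicLSTMCell lives); the variables only exist once the cell has been built by calling it:

import tensorflow as tf

num_units = 8
cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)

# The kernel and bias are only created once the cell is built,
# so call it on a dummy input first.
inputs = tf.placeholder(tf.float32, [None, num_units])
state = cell.zero_state(tf.shape(inputs)[0], tf.float32)
output, new_state = cell(inputs, state)

for v in cell.trainable_weights:
    print(v.name, v.shape)
# kernel: (input_depth + num_units, 4 * num_units), i.e.
#         (2 * num_units, 4 * num_units) when input_depth == num_units
# bias:   (4 * num_units,)

# trainable_variables returns the very same objects:
print([id(v) for v in cell.trainable_variables] ==
      [id(v) for v in cell.trainable_weights])   # True

Note that the kernel's first dimension is (2*num_units) only when the input depth happens to equal num_units; in general it is (input_depth + num_units).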
Now the question is whether there is actually any difference between these. I assume they might just be a result of inheriting them from the rnn_cell class?
Answer 1:
From the source code of the Layer class that RNNCell inherits from:
@property
def trainable_variables(self):
    return self.trainable_weights
See here. The RNN classes don't seem to override this definition -- I would assume it's there for special layer types that have trainable variables that don't quite qualify as "weights". Batch normalization would come to mind, but unfortunately I can't find any mention of trainable_variables in that one's source code (except for GraphKeys.TRAINABLE_VARIABLES, which is a different thing).
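To make that last distinction concrete: GraphKeys.TRAINABLE_VARIABLES names a graph-wide collection, read by tf.trainable_variables(), whereas the two layer properties are scoped to a single layer. A small sketch, again assuming the TF 1.x API:

import tensorflow as tf

cell = tf.nn.rnn_cell.BasicLSTMCell(8)
inputs = tf.placeholder(tf.float32, [None, 8])
cell(inputs, cell.zero_state(tf.shape(inputs)[0], tf.float32))

print(cell.trainable_variables)    # just this cell's kernel and bias
print(tf.trainable_variables())    # every trainable variable in the graph,
                                   # read from GraphKeys.TRAINABLE_VARIABLES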
Source: https://stackoverflow.com/questions/49020732/what-is-the-difference-between-the-trainable-weights-and-trainable-variables-in