GradientTape

Out of memory (OOM) using TensorFlow GradientTape, but it only happens when I append to a list

你。 Submitted on 2021-01-28 03:21:57
Question: I've been working on a dataset of shape (1000, 3253) using a CNN. I'm running gradient calculations through GradientTape, but it keeps running out of memory. Yet if I remove the line that appends a gradient calculation to a list, the script runs through all of the epochs. I'm not entirely sure why this happens, and I am also new to TensorFlow and to GradientTape. Any advice or input would be appreciated.

# create a batch loop
for x, y_true in train_dataset:
    # create a tape to record actions
    with tf
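A common cause of this pattern, sketched below under assumptions about the rest of the loop (model, loss_fn, optimizer and train_dataset are hypothetical names, not taken from the post): the tensors returned by tape.gradient() live on the GPU, so appending them directly to a Python list keeps every per-batch gradient resident in device memory, and usage grows each step until OOM. Copying them to host memory with .numpy() before appending is one way to keep a history without holding on to the device tensors:

import tensorflow as tf

# model, loss_fn, optimizer and train_dataset are assumed to be defined elsewhere
grad_history = []  # per-batch record kept on the host, not on the GPU

for x, y_true in train_dataset:
    with tf.GradientTape() as tape:          # tape records ops on trainable variables
        y_pred = model(x, training=True)
        loss = loss_fn(y_true, y_pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    # convert to NumPy copies so the device tensors can be freed after this step
    grad_history.append([g.numpy() for g in grads])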

Why is TensorFlow's GradientTape returning None when trying to find the gradient of the loss with respect to the input?

拜拜、爱过 Submitted on 2021-01-01 09:28:50
Question: I have a CNN model built in Keras which uses an SVM in its last layer. I get the prediction of this SVM by putting an input into the CNN model, extracting the relevant features, and then putting those features into my SVM to get an output prediction. I have named this entire process predict_DNR_tensor in the code below. This works fine and I am able to get a correct prediction. I am now trying to get the gradient of the squared hinge loss of this prediction from my SVM with respect to the original input,
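For readers hitting the same None result, a frequent reason is that the input is a plain tensor the tape is not watching, or that part of the computation leaves TensorFlow (for example, features are converted to NumPy and fed to a scikit-learn SVM), which breaks the chain back to the input. A minimal sketch, assuming the feature extractor cnn is a Keras model and the SVM is expressed as TensorFlow ops with hypothetical svm_weights / svm_bias variables:

import tensorflow as tf

# x_input and y_true stand in for the asker's data; cnn, svm_weights and
# svm_bias are assumed names, not taken from the original post
x = tf.convert_to_tensor(x_input, dtype=tf.float32)

with tf.GradientTape() as tape:
    tape.watch(x)                            # x is not a tf.Variable, so watch it explicitly
    features = cnn(x, training=False)        # stay inside TensorFlow end to end
    score = tf.matmul(features, svm_weights) + svm_bias
    # squared hinge loss of the SVM decision value
    loss = tf.reduce_mean(tf.square(tf.maximum(0.0, 1.0 - y_true * score)))

grad = tape.gradient(loss, x)                # returns None only if the chain back to x was broken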

How to use TensorFlow BatchNormalization with GradientTape?

无人久伴 Submitted on 2019-12-21 21:36:55
Question: Suppose we have a simple Keras model that uses BatchNormalization:

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(1,)),
    tf.keras.layers.BatchNormalization()
])

How do I actually use it with GradientTape? The following doesn't seem to work, as it doesn't update the moving averages:

# model training... we want the output values to be close to 150
for i in range(1000):
    x = np.random.randint(100, 110, 10).astype(np.float32)
    with tf.GradientTape() as tape:
        y = model(np.expand
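A sketch of the usual resolution, not a verified answer to this exact post: call the model with training=True inside the tape, so BatchNormalization normalises with batch statistics and, in eager mode, updates its moving averages as a side effect of the call. The optimizer below is an assumption added to complete the loop:

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(1,)),
    tf.keras.layers.BatchNormalization()
])
optimizer = tf.keras.optimizers.Adam()       # assumed; not part of the original snippet

for i in range(1000):
    x = np.random.randint(100, 110, 10).astype(np.float32)
    with tf.GradientTape() as tape:
        # training=True: normalise with batch statistics and update the layer's
        # moving_mean / moving_variance variables during the forward pass
        y = model(np.expand_dims(x, axis=1), training=True)
        loss = tf.reduce_mean(tf.square(y - 150.0))  # push outputs toward 150
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))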