eager-execution

Parallelizing model predictions in keras using multiprocessing for python

为君一笑 submitted on 2020-12-29 07:47:43
Question: I'm trying to perform model predictions in parallel using the model.predict command provided by Keras in Python 2. I use TensorFlow 1.14.0 for Python 2. I have 5 model (.h5) files and would like the predict command to run in parallel. This is being run in Python 2.7. I'm using a multiprocessing pool to map the model filenames to the prediction function across multiple processes, as shown below: import matplotlib as plt import numpy as np import cv2 from multiprocessing import Pool pool=Pool()
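A minimal sketch of the workaround usually suggested for this kind of question (not the asker's code): load each .h5 model inside the worker process so that no TensorFlow/Keras state has to be pickled across processes. The file names and input shape below are placeholders.

```python
import numpy as np
from multiprocessing import Pool


def predict_with_model(model_path):
    # Import and load inside the worker so each process gets its own TF state.
    from tensorflow.keras.models import load_model
    model = load_model(model_path)
    x = np.random.rand(1, 224, 224, 3)  # placeholder input batch
    return model.predict(x)


if __name__ == "__main__":
    model_files = ["model_%d.h5" % i for i in range(5)]  # hypothetical file names
    pool = Pool(processes=5)
    results = pool.map(predict_with_model, model_files)
    pool.close()
    pool.join()
```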

Inputs to eager execution function cannot be Keras symbolic tensors

百般思念 submitted on 2020-07-17 10:12:36
Question: I am trying to implement sample- and pixel-dependent loss weighting in tf.keras (TensorFlow 2.0.0rc0) for a 3-D U-Net with sparse annotation data (Cicek 2016, arXiv:1606.06650). This is my code: import numpy as np import tensorflow as tf from tensorflow.keras import layers, losses, models # disabling eager execution makes this example work: # tf.python.framework_ops.disable_eager_execution() def get_loss_fcn(w): def loss_fcn(y_true, y_pred): loss = w * losses.mse(y_true, y_pred)
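A hedged sketch of one commonly used workaround for this error (not the asker's full model): feed the per-sample/per-pixel weight map as an additional input and register the weighted loss with add_loss(), so that no Keras symbolic tensor is captured inside an external loss closure under eager execution. Shapes and layer sizes are illustrative only.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

x_in = layers.Input(shape=(16,))
y_true_in = layers.Input(shape=(4,))
w_in = layers.Input(shape=(4,))          # per-sample / per-pixel weights
y_pred = layers.Dense(4)(x_in)

model = models.Model([x_in, y_true_in, w_in], y_pred)
# Weighted MSE built from model tensors, attached via add_loss instead of compile(loss=...)
weighted_mse = tf.reduce_mean(w_in * tf.square(y_true_in - y_pred))
model.add_loss(weighted_mse)
model.compile(optimizer="adam")          # the loss comes from add_loss

x = np.random.rand(32, 16).astype("float32")
y = np.random.rand(32, 4).astype("float32")
w = np.ones((32, 4), dtype="float32")    # placeholder weight map
model.fit([x, y, w], epochs=1)
```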

Understanding device allocation, parallelism(tf.while_loop) and tf.function in tensorflow

最后都变了- submitted on 2020-05-16 06:45:44
Question: I'm trying to understand parallelism on the GPU in TensorFlow, as I need to apply it to uglier graphs. import tensorflow as tf from datetime import datetime with tf.device('/device:GPU:0'): var = tf.Variable(tf.ones([100000], dtype=tf.dtypes.float32), dtype=tf.dtypes.float32) @tf.function def foo(): return tf.while_loop(c, b, [i], parallel_iterations=1000) #tweak @tf.function def b(i): var.assign(tf.tensor_scatter_nd_update(var, tf.reshape(i, [-1,1]), tf.constant([0], dtype=tf.dtypes.float32)))
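A self-contained sketch of the pattern the excerpt describes (not the asker's exact code): a tf.while_loop with parallel_iterations inside a tf.function, whose body writes into a Variable via tensor_scatter_nd_update. The device pinning is dropped and sizes are shrunk so it runs on CPU as well.

```python
import tensorflow as tf

var = tf.Variable(tf.ones([1000], dtype=tf.float32))


def body(i):
    # Zero out element i of var, then advance the loop counter.
    var.assign(tf.tensor_scatter_nd_update(
        var, tf.reshape(i, [-1, 1]), tf.constant([0.0])))
    return i + 1


@tf.function
def foo():
    cond = lambda i: i < tf.size(var)
    return tf.while_loop(cond, body, [tf.constant(0)],
                         parallel_iterations=100)  # tweak to compare timings


foo()
print(float(tf.reduce_sum(var)))  # 0.0 once every element has been cleared
```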

InvalidArgumentError: cannot compute MatMul as input #0(zero-based) was expected to be a float tensor but is a double tensor [Op:MatMul]

独自空忆成欢 submitted on 2020-02-02 01:26:15
Question: Can somebody explain how TensorFlow's eager mode works? I am trying to build a simple regression as follows: import tensorflow as tf tfe = tf.contrib.eager tf.enable_eager_execution() import numpy as np def make_model(): net = tf.keras.Sequential() net.add(tf.keras.layers.Dense(4, activation='relu')) net.add(tf.keras.layers.Dense(1)) return net def compute_loss(pred, actual): return tf.reduce_mean(tf.square(tf.subtract(pred, actual))) def compute_gradient(model, pred, actual): """compute
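The error in the title usually means the NumPy input stayed float64 while the Dense kernels are float32. A minimal sketch of an eager regression step with that fix (casting the data to float32), written against the TF 1.x setup the question uses; the data here is random placeholder data.

```python
import numpy as np
import tensorflow as tf

tf.enable_eager_execution()  # TF 1.x; not needed in TF 2.x

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(1),
])
optimizer = tf.train.AdamOptimizer(0.01)

x = np.random.rand(32, 3)   # NumPy defaults to float64 -> would trigger the MatMul error
y = np.random.rand(32, 1)
x = tf.cast(x, tf.float32)  # cast before the data reaches the model
y = tf.cast(y, tf.float32)

with tf.GradientTape() as tape:
    pred = model(x)
    loss = tf.reduce_mean(tf.square(pred - y))
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print(float(loss))
```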

Create tf.keras callback to save model predictions and targets for each batch during training in tf 2.0

笑着哭i submitted on 2020-01-15 06:18:09
Question: In TensorFlow 2, fetches and assign are no longer supported. Accessing batch results in TF 1.x in a custom Keras callback is possible by following the answer provided in https://stackoverflow.com/a/47081613/9949099. In tf.keras and TF 2.0 under eager execution, fetches are not supported, so the solution provided for TF 1.x does not work. Is there a way to get the y_true and y_pred inside the on_batch_end callback of a tf.keras custom callback? I have tried to modify the answer working in
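A hedged sketch of one workaround when fetches are unavailable under eager execution: use a custom training loop so each batch's y_true and y_pred are ordinary tensors that can be stored or inspected directly at the point where on_batch_end would fire. The model and data below are toy placeholders, not the asker's setup.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.MeanSquaredError()

x = np.random.rand(100, 4).astype("float32")
y = np.random.rand(100, 1).astype("float32")
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(10)

batch_targets, batch_predictions = [], []
for x_batch, y_batch in dataset:
    with tf.GradientTape() as tape:
        y_pred = model(x_batch, training=True)
        loss = loss_fn(y_batch, y_pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    # Equivalent of "on_batch_end": both tensors are concrete values here.
    batch_targets.append(y_batch.numpy())
    batch_predictions.append(y_pred.numpy())
```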

How to combine multiple datasets into one dataset?

断了今生、忘了曾经 submitted on 2019-12-24 11:22:31
Question: Suppose I have 3 TFRecord files, namely neg.tfrecord, pos1.tfrecord, pos2.tfrecord. I use dataset = tf.data.TFRecordDataset(tfrecord_file); this code creates 3 Dataset objects. My batch size is 400, including 200 neg data, 100 pos1 data, and 100 pos2 data. How can I get the desired dataset? I will use this dataset object in keras.fit() (eager execution). My TensorFlow version is 1.13.1. Before, I tried to get the iterator for each dataset, and then manually concat after getting the data,
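A sketch of one way to get a fixed 200/100/100 mix per batch: batch each dataset with its own size, zip the three datasets, and concatenate the sub-batches. The file names are the ones quoted in the question; their contents and parsing are assumed.

```python
import tensorflow as tf

neg = tf.data.TFRecordDataset("neg.tfrecord").repeat().batch(200)
pos1 = tf.data.TFRecordDataset("pos1.tfrecord").repeat().batch(100)
pos2 = tf.data.TFRecordDataset("pos2.tfrecord").repeat().batch(100)


def merge(neg_batch, pos1_batch, pos2_batch):
    # Each argument is a batch of serialized tf.Example strings;
    # parse them afterwards if features/labels are needed.
    return tf.concat([neg_batch, pos1_batch, pos2_batch], axis=0)


dataset = tf.data.Dataset.zip((neg, pos1, pos2)).map(merge)
# dataset now yields batches of 400 serialized records
# (200 neg, 100 pos1, 100 pos2) that can be parsed and fed to keras.fit().
```

If an exact per-batch count is not required, tf.data.experimental.sample_from_datasets with weights [0.5, 0.25, 0.25] is a stochastic alternative in recent TF versions.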

Tensorflow cannot get gradient wrt a Variable, but can wrt a Tensor

五迷三道 submitted on 2019-12-22 11:29:38
Question: I am interested in computing the gradient of a loss that is calculated from the product of a matrix multiplication in TensorFlow with eager execution. I can do so if the product is computed as a tensor, but not if it is assign()ed in place to a variable. Here is the greatly reduced code: import tensorflow as tf import numpy as np tf.enable_eager_execution() multipliers_net = tf.get_variable("multipliers", shape=(1, 3, 3, 1), initializer=tf.random_normal_initializer()) activations_net = tf
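A reduced sketch of the behaviour being described (simplified shapes, not the asker's exact variables): the gradient flows when the matmul result stays a Tensor on the tape, but assign() is not differentiable, so writing the product back into a Variable cuts the gradient path.

```python
import numpy as np
import tensorflow as tf

tf.enable_eager_execution()  # TF 1.x style, matching the question

multipliers = tf.Variable(np.random.randn(3, 3).astype(np.float32))
activations = tf.Variable(np.random.randn(3, 3).astype(np.float32))

# Case 1: keep the product as a Tensor -> gradient w.r.t. the variable exists.
with tf.GradientTape() as tape:
    product = tf.matmul(multipliers, activations)
    loss = tf.reduce_sum(product)
print(tape.gradient(loss, multipliers))   # a real gradient

# Case 2: assign the product into a Variable -> the tape loses the link.
with tf.GradientTape() as tape:
    activations.assign(tf.matmul(multipliers, activations))
    loss = tf.reduce_sum(activations)
print(tape.gradient(loss, multipliers))   # None: assign() is not differentiable
```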

How can I profile graph functions in TensorFlow Eager?

Deadly submitted on 2019-12-11 05:14:23
Question: In TensorFlow Eager, I can use Python's profiler to profile code that operates purely in eager mode. However, if I "compile" a Python function into a graph function using tf.function or tf.contrib.eager.defun, that function becomes opaque to Python: the profiler cannot enter it. I have found out how to profile a TF graph in graph mode, but I don't know how to do it with a graph function in eager mode. Specifically, if I construct a function like this, tf.enable_v2_behavior() @tf.function def
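A hedged sketch of one way to trace and profile a graph function, written against the TF 2.x summary API (under TF 1.14 with v2 behavior the same calls may need to go through tf.compat.v2.summary): tf.summary.trace_on with profiler=True records the tf.function call for inspection in TensorBoard. The function and log directory are placeholders.

```python
import tensorflow as tf


@tf.function
def matmul_fn(x):
    return tf.matmul(x, x)


logdir = "/tmp/tf_profile"                     # placeholder path
writer = tf.summary.create_file_writer(logdir)

x = tf.random.normal([512, 512])
tf.summary.trace_on(graph=True, profiler=True)  # start tracing
matmul_fn(x)                                    # the traced graph-function call
with writer.as_default():
    tf.summary.trace_export(name="matmul_trace", step=0,
                            profiler_outdir=logdir)
# Open TensorBoard on logdir and use the Graphs / Profile tabs to inspect it.
```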