tensorflow-gpu

Is there a way to use tensorflow map_fn on GPU?

Submitted by 回眸只為那壹抹淺笑 on 2019-11-29 08:12:58
Question: I have a tensor A with shape [a,n] and I need to perform an op my_op with another tensor B of shape [b,n] such that the resulting tensor C has shape [a,b]. In other words: for each subtensor in A (A[0], A[1], ..., A[a]) I need to perform an element-wise op with each subtensor in B. So the resulting tensor would contain the following:

[ [ A[0] op B[0], A[0] op B[1], ..., A[0] op B[b] ],
  [ A[1] op B[0], A[1] op B[1], ..., A[1] op B[b] ],
  [ ... ],
  [ A[a] op B[0], A[a] op B[1], ..., A[a] op B[b] ] ] …
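The excerpt cuts off before any answer, but the usual GPU-friendly alternative to map_fn for this pattern is broadcasting: expand A to [a,1,n] and B to [1,b,n] so the element-wise op produces [a,b,n] in one vectorized kernel. A NumPy sketch of the shapes (the same expressions work with TensorFlow ops; the multiply and the reduction over n are assumptions, since my_op is unspecified):

```python
import numpy as np

a, b, n = 3, 4, 5
A = np.random.rand(a, n)
B = np.random.rand(b, n)

# Broadcasting: A as [a, 1, n] against B as [1, b, n] yields [a, b, n],
# with A[i] (element-wise op) B[j] at position (i, j) -- no map_fn needed,
# and the equivalent TF expression runs as a single kernel on GPU.
pairwise = A[:, None, :] * B[None, :, :]      # "op" = multiply (an assumption)

# For C to have shape [a, b], the op must include a reduction over n;
# summing (i.e. pairwise dot products) is used here as one example.
C = pairwise.sum(axis=-1)

# Check against the explicit double loop the question describes.
C_loop = np.array([[np.sum(A[i] * B[j]) for j in range(b)] for i in range(a)])
assert np.allclose(C, C_loop)
```

Because the whole computation is expressed as tensor ops rather than a Python-level loop, it is placed on the GPU like any other op, which is typically what one actually wants instead of map_fn.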

TensorFlow Mac OS GPU support

Submitted by 牧云@^-^@ on 2019-11-28 15:32:23
Question: According to https://www.tensorflow.org/install/install_mac — "Note: As of version 1.2, TensorFlow no longer provides GPU support on Mac OS X. GPU support for OS X is no longer provided." However, I want to run an eGPU setup, such as an Akitio Node with a 1080 Ti over Thunderbolt 3. What steps are required to get this setup to work? So far I know that disabling SIP and running the automate-eGPU script (https://github.com/goalque/automate-eGPU) are required. What else is needed to get CUDA / TensorFlow to work?

How to set specific gpu in tensorflow?

Submitted by ε祈祈猫儿з on 2019-11-28 05:52:34
I want to specify the GPU to run my process, and I set it as follows:

import tensorflow as tf
with tf.device('/gpu:0'):
    a = tf.constant(3.0)
with tf.Session() as sess:
    while True:
        print(sess.run(a))

However, it still allocates memory on both of my GPUs:

| 0 7479 C python 5437MiB |
| 1 7479 C python 5437MiB |

Answer 1: I believe that you need to set CUDA_VISIBLE_DEVICES=1, or whichever GPU you want to use. If you make only one GPU visible, you will refer to it as /gpu:0 in TensorFlow regardless of what you set the environment variable to. More info on that environment variable: https://devblogs.nvidia.com…
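A minimal sketch of the suggested fix from inside Python. The key point is ordering: the variable must be set before TensorFlow initializes CUDA, because the visible-device set is read once at that point:

```python
import os

# Must run before TensorFlow is imported: TF enumerates CUDA devices
# when it first initializes and ignores later changes to this variable.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"   # expose only the second physical GPU

# From here on, that card is the only device this process can see,
# and TensorFlow will address it as /gpu:0.
# import tensorflow as tf   # imported *after* the variable is set
```

Equivalently, set it on the command line (`CUDA_VISIBLE_DEVICES=1 python train.py`), which avoids any ordering concerns inside the script.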

How does one move data to multiple GPU towers using Tensorflow's Dataset API

Submitted by 。_饼干妹妹 on 2019-11-27 17:28:20
We are running multi-GPU jobs on TensorFlow and evaluating a migration from the queue-based model (using the string_input_producer interface) to the new TensorFlow Dataset API. The latter appears to offer an easier way to switch between training and validation concurrently. The snippet below shows how we are doing this:

train_dataset, train_iterator = get_dataset(train_files, batch_size, epochs)
val_dataset, val_iterator = get_dataset(val_files, batch_size, epochs)
is_validating = tf.placeholder(dtype=bool, shape=())
next_batch = tf.cond(is_validating,
                     lambda: val_iterator.get_next(),
                     lambda: train_iterator.get_next())
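TensorFlow graph mechanics aside, the control flow being described is simply "a boolean picks which iterator feeds the next batch". A plain-Python analogue of the tf.cond pattern (all names here are illustrative, not the Dataset API itself):

```python
def batches(source, batch_size):
    # Illustrative stand-in for a Dataset: yields fixed-size batches.
    it = iter(source)
    while True:
        batch = [x for _, x in zip(range(batch_size), it)]
        if not batch:
            return
        yield batch

train_iterator = batches(range(100), batch_size=4)
val_iterator = batches(range(1000, 1100), batch_size=4)

def next_batch(is_validating):
    # Mirrors tf.cond(is_validating, val_get_next, train_get_next):
    # the flag decides which stream supplies the batch; each stream
    # keeps its own position, so toggling does not restart either one.
    return next(val_iterator) if is_validating else next(train_iterator)

print(next_batch(False))  # [0, 1, 2, 3]             -- training batch
print(next_batch(True))   # [1000, 1001, 1002, 1003]  -- validation batch
print(next_batch(False))  # [4, 5, 6, 7]             -- training resumes
```

In the real graph, `is_validating` is fed per `session.run` call, which is what makes the concurrent train/validation switching convenient compared with rebuilding input queues.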

How to use TensorFlow metrics in Keras

Submitted by 爷，独闯天下 on 2019-11-27 14:46:39
There seem to be several threads/issues on this already, but it doesn't appear to me that this has been solved: How can I use a TensorFlow metric function within Keras models? See https://github.com/fchollet/keras/issues/6050 and https://github.com/fchollet/keras/issues/3230. People seem to run into problems around either variable initialization or the metric being 0. I need to calculate different segmentation metrics and would like to include tf.metrics.mean_iou in my Keras model. This is the best I have been able to come up with so far:

def mean_iou(y_true, y_pred):
    score, up_opt = tf.metrics.mean_iou(y…
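For reference, tf.metrics.mean_iou derives the metric from a confusion matrix accumulated across batches. A NumPy sketch of that computation (this illustrates the metric itself, not the Keras integration the question asks about; skipping classes with an empty union is my reading of the TF docs):

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    # Build the num_classes x num_classes confusion matrix -- the same
    # intermediate that tf.metrics.mean_iou accumulates across batches.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        cm[t, p] += 1
    true_pos = np.diag(cm)
    # union = predicted-as-class + labeled-as-class - intersection
    union = cm.sum(axis=0) + cm.sum(axis=1) - true_pos
    valid = union > 0          # skip classes absent from both tensors
    return (true_pos[valid] / union[valid]).mean()

y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1])
print(mean_iou(y_true, y_pred, num_classes=2))   # (1/2 + 2/3) / 2 = 0.5833...
```

The TF version returns the score plus an update op that folds each batch into the running confusion matrix, which is exactly the initialization/update machinery that trips people up inside Keras.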

Keras with TensorFlow backend not using GPU

Submitted by 拜拜、爱过 on 2019-11-27 05:06:42
I built the GPU version of the Docker image (https://github.com/floydhub/dl-docker) with Keras version 2.0.0 and TensorFlow version 0.12.1. I then ran the MNIST tutorial (https://github.com/fchollet/keras/blob/master/examples/mnist_cnn.py) but realized that Keras is not using the GPU. Below is the output that I have:

root@b79b8a57fb1f:~/sharedfolder# python test.py
Using TensorFlow backend.
Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
Train on 60000 samples, validate on 10000 samples
Epoch 1/12
2017-09-06…

Meaning of buffer_size in Dataset.map, Dataset.prefetch and Dataset.shuffle

Submitted by 心不动则不痛 on 2019-11-26 12:42:31
As per the TensorFlow documentation, the prefetch and map methods of the tf.contrib.data.Dataset class both have a parameter called buffer_size.

For the prefetch method, the parameter is known as buffer_size and, according to the documentation:

buffer_size: A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.

For the map method, the parameter is known as output_buffer_size and, according to the documentation:

output_buffer_size: (Optional.) A tf.int64 scalar tf.Tensor, representing the maximum number of processed elements that will be buffered.

Similarly…
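Dataset.shuffle's buffer_size (the third method in the title) has a different meaning from the two above: the transformation keeps only buffer_size elements in memory and samples uniformly from that sliding window, so a small buffer gives only a weak, local shuffle. A plain-Python sketch of those semantics (a hypothetical helper, not TF's implementation):

```python
import random

def buffered_shuffle(source, buffer_size, seed=None):
    # Mimics Dataset.shuffle: hold buffer_size elements, emit one
    # chosen at random, then refill the freed slot from the stream.
    rng = random.Random(seed)
    it = iter(source)
    buf = []
    for x in it:                       # initial fill
        buf.append(x)
        if len(buf) == buffer_size:
            break
    while buf:
        i = rng.randrange(len(buf))
        item = buf[i]
        try:
            buf[i] = next(it)          # refill the slot
        except StopIteration:
            buf.pop(i)                 # stream exhausted: drain the buffer
        yield item

# buffer_size=1 degenerates to no shuffling at all:
print(list(buffered_shuffle(range(5), 1)))           # [0, 1, 2, 3, 4]
# buffer_size >= dataset size gives a full uniform shuffle:
print(list(buffered_shuffle(range(5), 100, seed=0)))
```

By contrast, the prefetch/map buffers above only bound how many ready elements sit ahead of the consumer; they affect pipelining, not ordering.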