tensorflow-gpu

Tensorflow: using an input-pipeline (.csv) as a dictionary for training

人盡茶涼 submitted on 2019-12-02 04:34:34
I'm trying to train a model on a .csv dataset (5008 columns, 533 rows). I'm using a TextLineReader to parse the data into two tensors, one holding the data to train on [example] and one holding the correct labels [label]:

    def read_my_file_format(filename_queue):
        reader = tf.TextLineReader()
        key, record_string = reader.read(filename_queue)
        record_defaults = [[0.5] for row in range(5008)]
        # Left out most of the columns for obvious reasons
        col1, col2, col3, ..., col5008 = tf.decode_csv(record_string, record_defaults=record_defaults)
        example = tf.stack([col1, col2, col3, ..., col5007])
        label = col5008
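
A minimal sketch (not from the question) of the same TF 1.x queue pipeline without spelling out all 5008 column variables: decode every column with a default, stack the first 5007 columns as the feature vector, and keep the last one as the label. The file name and batch parameters are assumptions.

    import tensorflow as tf

    NUM_COLUMNS = 5008  # 5007 feature columns + 1 label column, per the question

    def read_my_file_format(filename_queue):
        reader = tf.TextLineReader()
        _, record_string = reader.read(filename_queue)
        record_defaults = [[0.5]] * NUM_COLUMNS
        columns = tf.decode_csv(record_string, record_defaults=record_defaults)
        example = tf.stack(columns[:-1])  # shape [5007]
        label = columns[-1]               # scalar label from the last column
        return example, label

    filename_queue = tf.train.string_input_producer(['data.csv'])  # hypothetical file name
    example, label = read_my_file_format(filename_queue)
    example_batch, label_batch = tf.train.shuffle_batch(
        [example, label], batch_size=32, capacity=1000, min_after_dequeue=500)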

tensorflow GPU crashes for 0 batch size CUDNN_STATUS_BAD_PARAM

你。 submitted on 2019-12-02 03:31:56
This issue seems to have existed for a long time, and lots of users are facing it.

    stream_executor/cuda/cuda_dnn.cc:444] could not convert BatchDescriptor {count: 0 feature_map_count: 64 spatial: 7 264 value_min: 0.000000 value_max: 0.000000 layout: BatchDepthYX} to cudnn tensor descriptor: CUDNN_STATUS_BAD_PARAM

The message is so mysterious that I do not know what happened in my code; however, my code works fine on CPU tensorflow. I heard that we can use tf.cond to get around this, but I'm new to tensorflow-gpu, so can someone please help me? My code uses Keras and takes a generator like
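
A minimal sketch (not from the question) of one way to avoid the count: 0 descriptor: instead of wrapping the convolution in tf.cond, pad an empty batch up to size 1 before the conv and slice the dummy row off afterwards, so cuDNN never sees a zero batch. The layer, input shape and placeholder are assumptions.

    import tensorflow as tf

    def conv_safe_for_empty_batch(inputs, conv_layer):
        # inputs: [batch, height, width, channels]; batch may be 0 at run time.
        batch_size = tf.shape(inputs)[0]
        pad = tf.maximum(1 - batch_size, 0)           # 1 if the batch is empty, else 0
        padded = tf.pad(inputs, [[0, pad], [0, 0], [0, 0], [0, 0]])
        outputs = conv_layer(padded)                  # cuDNN now always sees count >= 1
        return outputs[:batch_size]                   # drop the dummy example again

    conv = tf.keras.layers.Conv2D(filters=64, kernel_size=3, padding='same')
    images = tf.placeholder(tf.float32, [None, 7, 264, 3])  # hypothetical input shape
    features = conv_safe_for_empty_batch(images, conv)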

Multi threading in Dataset api

你离开我真会死。 submitted on 2019-12-02 02:06:46
TL;DR: how do I ensure that data is loaded in a multi-threaded manner when using the Dataset API in tensorflow 0.1.4? Previously I did something like this with my images on disk:

    filename_queue = tf.train.string_input_producer(filenames)
    image_reader = tf.WholeFileReader()
    _, image_file = image_reader.read(filename_queue)
    imsize = 120
    image = tf.image.decode_jpeg(image_file, channels=3)
    image = tf.image.convert_image_dtype(image, dtype=tf.float32)
    image_r = tf.image.resize_images(image, [imsize, imsize])
    images = tf.train.shuffle_batch([image_r], batch_size=20, num_threads=30, capacity=200, min_after
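
A minimal sketch (assuming TF 1.4-era tf.data and a hypothetical file list) of the multi-threaded equivalent of the queue pipeline above: num_parallel_calls on map replaces num_threads, and prefetch overlaps loading with training.

    import tensorflow as tf

    IMSIZE = 120

    def parse_image(filename):
        image_bytes = tf.read_file(filename)
        image = tf.image.decode_jpeg(image_bytes, channels=3)
        image = tf.image.convert_image_dtype(image, dtype=tf.float32)
        return tf.image.resize_images(image, [IMSIZE, IMSIZE])

    filenames = ['img_0.jpg', 'img_1.jpg']  # hypothetical file list
    dataset = (tf.data.Dataset.from_tensor_slices(filenames)
               .shuffle(buffer_size=200)
               .map(parse_image, num_parallel_calls=30)  # decode/resize on 30 threads
               .batch(20)
               .prefetch(1))                              # keep a batch ready while training
    images = dataset.make_one_shot_iterator().get_next()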

python : cannot import tensorflow-gpu

ε祈祈猫儿з submitted on 2019-11-30 23:36:02
I successfully created my tensorflow environment with Anaconda3 on my machine, following the steps introduced in this link. But when I try to do:

    >>> import tensorflow as tf

I get the following error messages: OSError and ImportError.

    Traceback (most recent call last):
      File "C:\Users\Froilan\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\self_check.py", line 75, in preload_check
        ctypes.WinDLL(build_info.cudart_dll_name)
      File "C:\Users\Froilan\Anaconda3\envs\tensorflow\lib\ctypes\__init__.py", line 351, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError:
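
This OSError from preload_check means Python cannot find the CUDA runtime DLL that the tensorflow-gpu build was linked against. A minimal sketch (Windows only; the DLL name and path are assumptions that depend on the CUDA version) that reproduces TensorFlow's own check:

    import ctypes

    try:
        ctypes.WinDLL('cudart64_90.dll')  # hypothetical name; matches CUDA 9.0 builds
        print('CUDA runtime DLL found on PATH')
    except OSError:
        print('cudart DLL not found; add the CUDA bin directory, e.g. '
              r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin, to PATH')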

How to make best use of GPU for TensorFlow Estimators?

随声附和 submitted on 2019-11-30 16:41:02
I was using TensorFlow (CPU version) for my deep learning model, specifically the DNNRegressor Estimator for training, with a given set of parameters (network structure, hidden layers, alpha etc.). Though I was able to reduce the loss, the model took a very long time to learn (approx. 3 days), at about 9 seconds per 100 steps. I came across this article: https://medium.com/towards-data-science/how-to-traine-tensorflow-models-79426dabd304 and found that GPUs can be much faster to learn on. So I took a p2.xlarge GPU instance from AWS (single-core GPU) with 4 vCPUs, 12 ECUs and 61 GiB of memory. But
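
A minimal sketch (feature columns, hidden units and shapes are assumptions) of checking that the GPU build is actually in use with an Estimator: the tensorflow-gpu package places ops on the GPU automatically, and log_device_placement makes that visible in the logs.

    import tensorflow as tf

    print(tf.test.gpu_device_name())  # e.g. '/device:GPU:0'; empty string means no GPU is visible

    run_config = tf.estimator.RunConfig(
        session_config=tf.ConfigProto(log_device_placement=True))

    feature_columns = [tf.feature_column.numeric_column('x', shape=[10])]  # hypothetical features
    regressor = tf.estimator.DNNRegressor(
        hidden_units=[64, 32],            # hypothetical network structure
        feature_columns=feature_columns,
        config=run_config)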

How to retrieve float_val from a PredictResponse object?

旧巷老猫 submitted on 2019-11-30 13:38:20
I am running a prediction on a tensorflow-serving model, and I get back this PredictResponse object as output:

    Result: outputs {
      key: "outputs"
      value {
        dtype: DT_FLOAT
        tensor_shape { dim { size: 1 } dim { size: 20 } }
        float_val: 0.000343723397236
        float_val: 0.999655127525
        float_val: 3.96821117632e-11
        float_val: 1.20521548297e-09
        float_val: 2.09611101809e-08
        float_val: 1.46216549979e-09
        float_val: 3.87274603497e-08
        float_val: 1.83520256769e-08
        float_val: 1.47733780764e-08
        float_val: 8.00914179422e-08
        float_val: 2.29388191997e-07
        float_val: 6.27798826258e-08
        float_val: 1.08802950649e-07
        float
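
A minimal sketch (not from the question) of pulling the numbers out: a PredictResponse stores each output as a TensorProto, so the values live in the repeated float_val field and can also be converted to a numpy array. A stand-in TensorProto is built locally so the snippet runs without a serving stub; the real response object and its 'outputs' key are assumptions based on the output shown above.

    import tensorflow as tf
    from tensorflow.core.framework import tensor_pb2, tensor_shape_pb2, types_pb2

    # Stand-in for response.outputs['outputs']: DT_FLOAT, shape [1, 20], values in float_val.
    proto = tensor_pb2.TensorProto(
        dtype=types_pb2.DT_FLOAT,
        tensor_shape=tensor_shape_pb2.TensorShapeProto(
            dim=[tensor_shape_pb2.TensorShapeProto.Dim(size=1),
                 tensor_shape_pb2.TensorShapeProto.Dim(size=20)]),
        float_val=[0.05] * 20)

    values = list(proto.float_val)   # plain Python floats
    array = tf.make_ndarray(proto)   # numpy array of shape (1, 20); tf.contrib.util.make_ndarray on older TF 1.x

    # With a real response object (hypothetical variable name):
    # values = list(response.outputs['outputs'].float_val)
    # array = tf.make_ndarray(response.outputs['outputs'])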

Is there a way to use tensorflow map_fn on GPU?

a 夏天 submitted on 2019-11-30 06:46:31
I have a tensor A with shape [a,n] and I need to perform an op my_op with another tensor B of shape [b,n] such that the resulting tensor C has shape [a,b]. In other words: for each subtensor in A (A[0], A[1], ... A[n]) I need to perform an element-wise op with each subtensor in B. So the resulting tensor would contain the following:

    [ [ A[0] op B[0], A[0] op B[1], ..., A[0] op B[b] ],
      [ A[1] op B[0], A[1] op B[1], ..., A[1] op B[b] ],
      [ ... ],
      [ A[a] op B[0], A[a] op B[1], ..., A[a] op B[b] ] ]

The only way that I've been able to find that achieves this is through nested use of tf.map_fn
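
A minimal sketch (not from the question; my_op is assumed to be an element-wise op followed by a reduction over n, here squared difference summed into a squared Euclidean distance) of getting the [a,b] result with broadcasting instead of nested map_fn, so the whole computation runs as ordinary GPU ops:

    import tensorflow as tf

    a, b, n = 4, 3, 5                      # hypothetical sizes
    A = tf.random_normal([a, n])
    B = tf.random_normal([b, n])

    # Expand A to [a, 1, n] and B to [1, b, n]; broadcasting applies the
    # element-wise op to every (A[i], B[j]) pair, then the reduction over
    # the last axis yields the [a, b] result.
    diff = tf.expand_dims(A, 1) - tf.expand_dims(B, 0)   # [a, b, n]
    C = tf.reduce_sum(tf.square(diff), axis=-1)          # [a, b]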

tensorflow Mac OS gpu support

故事扮演 submitted on 2019-11-29 19:37:54
According to https://www.tensorflow.org/install/install_mac : "Note: As of version 1.2, TensorFlow no longer provides GPU support on Mac OS X." GPU support for OS X is no longer provided. However, I want to run an eGPU setup, like an Akitio Node with a 1080 Ti connected via Thunderbolt 3. What steps are required to get this setup to work? So far I know that disabling SIP and running the automate-eGPU script (https://github.com/goalque/automate-eGPU) are required. What else is needed to get CUDA / tensorflow to work? I wrote a little tutorial on compiling TensorFlow 1.2 with GPU support on macOS. I think it's customary