tensorflow-gpu

Tensorflow GPU stopped working

一曲冷凌霜 submitted on 2019-12-08 07:17:09
Question: Reproducing the issue. I had TensorFlow running a few days ago, but it stopped working. When I test it with the tutorial code, both mnist_softmax and mnist_deep fail; TensorFlow still runs the simple hello-world example successfully. What I've tried: as with delton137, I've tried setting allow_growth to True and per_process_gpu_memory_fraction to 0.1, but neither helps. I've also tried reinstalling my cuDNN files. Additional notes: I don't remember making any changes to my TensorFlow …
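For reference, the two memory settings mentioned above are applied through a session config in the 1.x graph-mode API; a minimal sketch, assuming the `tf.Session` interface of that era:

```python
import tensorflow as tf

# Ask TensorFlow to allocate GPU memory incrementally instead of
# grabbing it all up front.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Alternatively, cap this process at roughly 10% of GPU memory:
# config.gpu_options.per_process_gpu_memory_fraction = 0.1

sess = tf.Session(config=config)
```

Neither option fixed this particular failure, which suggests the problem was not memory pressure.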

Weights and Biases not updating in tensorflow

岁酱吖の submitted on 2019-12-08 04:34:45
I've made this neural net to figure out whether a house is a good buy or a bad buy. For some reason the code is not updating the weights and biases, and my loss stays the same. This is my code:

import pandas as pd
import tensorflow as tf

data = pd.read_csv("E:/workspace_py/datasets/good_bad_buy.csv")
features = data.drop(['index', 'good buy'], axis = 1)
lbls = data.drop(['index', 'area', 'bathrooms', 'price', 'sq_price'], …
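A common cause of a flat loss with data like this is unscaled inputs: raw areas and prices differ by orders of magnitude, which can saturate activations so that gradients vanish. A minimal sketch of min-max scaling a feature matrix before training (the numbers are hypothetical, not taken from the dataset in the post):

```python
import numpy as np

# Hypothetical rows of (area, price)-style features; the raw magnitudes
# differ by orders of magnitude, which can stall training.
X = np.array([[1200.0, 150000.0],
              [3000.0, 420000.0],
              [ 800.0,  95000.0]])

# Min-max scale each column into [0, 1] before feeding the network.
X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
```

If the loss still doesn't move after scaling, the next things to check are the learning rate and whether the optimizer's minimize op is actually being run in the session.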

Tensorflow won't build with CUDA support

為{幸葍}努か submitted on 2019-12-07 22:59:57
Question: I've tried building TensorFlow from source as described in the installation guide. I've had success building it with CPU-only support and with the SIMD instruction sets, but I've run into trouble trying to build with CUDA support. System information: Mint 18 Sarah, kernel 4.4.0-21-generic, gcc 5.4.0, clang 3.8.0, Python 3.6.1, Nvidia GeForce GTX 1060 6GB (compute capability 6.1), CUDA 8.0.61, cuDNN 6.0. Here's my attempt at building with CUDA, gcc, and SIMD:

kevin@yeti-mint ~/src/tensorflow $ bazel clean …
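For context, a from-source CUDA build of that era typically followed the shape below; this is a sketch of the documented procedure, not the exact commands from the post:

```shell
# Clean any previous build state, then rerun the interactive configure
# script, answering yes to CUDA support and pointing it at CUDA 8.0
# and cuDNN 6.
bazel clean
./configure

# Build the pip-package target with both optimized-CPU and CUDA configs.
bazel build --config=opt --config=cuda \
    //tensorflow/tools/pip_package:build_pip_package
```

A common failure mode with this setup is a gcc/nvcc version mismatch, since CUDA 8.0 officially supported gcc up to 5.3.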

Distributed Tensorflow: check failed: size>=0

生来就可爱ヽ(ⅴ<●) submitted on 2019-12-07 05:11:09
Question: I'm using Keras 2.0.6 and TensorFlow 1.3.0. My code runs with the Theano backend, but fails with the TensorFlow backend: F tensorflow/core/framework/tensor_shape.cc:241] Check failed: size >= 0 (-14428307456 vs. 0). I was wondering if anyone can think of any possible reason that might cause this. Thank you! UPDATE: I tested exactly the same code on my PC with TensorFlow, and it runs perfectly. However, it throws this error when I run it on a supercomputer. Although this …
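One hedged guess for a negative size like this is integer overflow when a shape's element count is computed in fixed-width arithmetic, for example if some dimension scales with the dataset on the bigger machine. A pure-Python sketch of how a product of plausible dimensions wraps negative in signed 32-bit arithmetic (the dimensions are invented for illustration; the actual failure on the supercomputer may have a different cause):

```python
def to_int32(x):
    """Interpret x as a signed 32-bit integer (two's complement wraparound)."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

# 60000 * 60000 elements = 3.6e9, which exceeds 2**31 - 1 = 2147483647,
# so the 32-bit interpretation comes out negative.
overflowed = to_int32(60000 * 60000)
```

A sanity check along these lines is to print the shapes of the largest tensors right before the failing op and see whether their element counts exceed 32-bit range on the failing machine.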

Anaconda Prompt Corrupts after Installation

风格不统一 submitted on 2019-12-07 03:32:20
Question: I just installed tensorflow-gpu after creating a separate environment, following the instructions from here. However, after installation, when I close the Prompt window and open a new terminal, the following error pops up. I have added Anaconda/Scripts and the Anaconda path to the environment variables, but this still doesn't resolve it. Any solution is appreciated.

usage: conda [-h] {keygen,sign,unsign,verify,unpack,install,install-scripts,convert,version,help} ...
conda: error: invalid …
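If conda itself still resolves on the PATH, one hedged recovery is to work from an Anaconda Prompt rather than a plain terminal and recreate the environment; the environment name and Python version below are examples, not taken from the post:

```shell
# Verify that the conda executable itself still works.
conda info

# Recreate the GPU environment from scratch and install into it.
conda create -n tf-gpu python=3.6
activate tf-gpu        # "conda activate tf-gpu" on newer conda versions
pip install tensorflow-gpu
```

If `conda info` itself fails with the usage error above, the installation is likely damaged and reinstalling Anaconda is the safer route.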

Keras Model With CuDNNLSTM Layers Doesn't Work on Production Server

牧云@^-^@ submitted on 2019-12-06 05:03:36
I used an AWS p3 instance to train the following model with GPU acceleration:

x = CuDNNLSTM(128, return_sequences=True)(inputs)
x = Dropout(0.2)(x)
x = CuDNNLSTM(128, return_sequences=False)(x)
x = Dropout(0.2)(x)
predictions = Dense(1, activation='tanh')(x)
model = Model(inputs=inputs, outputs=predictions)

After training I saved the model with Keras' save_model function and moved it to a separate production server that doesn't have a GPU. When I attempt to predict with the model on the production server, it fails with the following error: No OpKernel was registered to support Op …
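CuDNNLSTM kernels only exist in GPU builds, so a common workaround is to rebuild the same topology with the portable LSTM layer and load the trained weights into it. A sketch, assuming the original input shape is known; `timesteps`, `features`, and the weights filename are placeholders, not values from the post:

```python
from keras.layers import Input, LSTM, Dropout, Dense
from keras.models import Model

timesteps, features = 50, 10  # placeholders; use the trained model's input shape

inputs = Input(shape=(timesteps, features))
# CuDNNLSTM weights are compatible with LSTM when activation='tanh'
# and recurrent_activation='sigmoid'.
x = LSTM(128, activation='tanh', recurrent_activation='sigmoid',
         return_sequences=True)(inputs)
x = Dropout(0.2)(x)
x = LSTM(128, activation='tanh', recurrent_activation='sigmoid')(x)
x = Dropout(0.2)(x)
predictions = Dense(1, activation='tanh')(x)

cpu_model = Model(inputs=inputs, outputs=predictions)
cpu_model.load_weights('gpu_trained_model.h5')  # file saved on the p3 instance
```

The CPU model can then be saved again with save_model and deployed to the production server without any CuDNN dependency.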

Keras tensorflow backend does not detect GPU

戏子无情 submitted on 2019-12-06 03:28:23
I am running Keras with the TensorFlow backend on Linux. First, I installed the TensorFlow GPU version by itself and ran the following code to check; it ran on the GPU and showed the GPU it was running on, the device mapping, etc. The TensorFlow I used was from https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0-cp27-none-linux_x86_64.whl

a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf …
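Besides device placement logging, a quick way to confirm whether an installed build can see the GPU at all is the 1.x-era device_lib helper; a minimal sketch:

```python
from tensorflow.python.client import device_lib

# A GPU build that detects the card lists a device like "/gpu:0" here,
# alongside "/cpu:0". A CPU-only build lists only the CPU.
devices = [d.name for d in device_lib.list_local_devices()]
print(devices)
```

If only the CPU appears, the usual suspects are a CPU-only wheel shadowing the GPU one, or CUDA/cuDNN libraries missing from LD_LIBRARY_PATH.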

Is there any way to use tensorflow-gpu with Intel(R) HD Graphics 520?

末鹿安然 submitted on 2019-12-05 03:49:49
I am working on my master's project, which uses Keras with the TensorFlow backend. I have Intel(R) HD Graphics 520, so I am not able to use tensorflow-gpu; the CPU version works fine. Is there any way to use tensorflow-gpu with the Intel(R) HD Graphics 520? TensorFlow GPU support requires the Nvidia CUDA and cuDNN packages to be installed. For GPU-accelerated training you need a dedicated Nvidia GPU; Intel onboard graphics can't be used for that purpose. You can see the full requirements for tensorflow-gpu here. Source: https://stackoverflow.com/questions/47399802/is-there-anyway-to-use-tensorflow-gpu-with-intelr-hd

GRPC causes training to pause in individual worker (distributed tensorflow, synchronised)

陌路散爱 submitted on 2019-12-04 22:48:58
I am trying to train a model in a synchronous distributed fashion for data parallelism. There are 4 GPUs in my machine, and each GPU should run a worker that trains on a separate, non-overlapping subset of the data (between-graph replication). The main data file is split into 16 smaller TFRecord files, and each worker is supposed to process 4 different files. The problem is that training freezes independently, and at different times, in each worker process. One of the 'ps' tasks reports the following grpc-related error:

2017-09-21 16:45:55.606842: I tensorflow/core/distributed …
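For reference, between-graph replication of this shape is usually wired up as below; the ports, task index, and single-machine layout are illustrative, since the post's actual cluster spec isn't shown:

```python
import tensorflow as tf

# Hypothetical single-machine cluster: one parameter server, four workers
# (one per GPU). Each process runs this with its own job_name/task_index.
cluster = tf.train.ClusterSpec({
    "ps": ["localhost:2220"],
    "worker": ["localhost:2221", "localhost:2222",
               "localhost:2223", "localhost:2224"],
})
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Variables are placed on the ps; each worker builds its own copy of the
# graph and reads its own shard of the 16 TFRecord files.
with tf.device(tf.train.replica_device_setter(
        worker_device="/job:worker/task:0", cluster=cluster)):
    pass  # model definition goes here
```

With SyncReplicasOptimizer-style synchronous training, a single worker that stops producing gradients stalls every other worker at the barrier, so a freeze in one process at a time is consistent with the reported symptom.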