I installed the TensorFlow 1.0.1 GPU version on my MacBook Pro with a GeForce GT 750M, along with CUDA 8.0.71 and cuDNN 5.1. I am running a TF script that works fine with non-C…
Adding the following code worked for me:
    import tensorflow as tf

    # Allocate GPU memory on demand instead of reserving it all up front.
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.Session(config=config)
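If you are on TensorFlow 2.x rather than the 1.x versions discussed in this thread, the equivalent of `allow_growth` is memory growth on the physical device. A minimal sketch, assuming a TF 2.x install where `tf.config.experimental` is available; it must run before any tensors touch the GPU:

    import tensorflow as tf

    # Enable on-demand memory growth for every visible GPU
    # (must be set before the device is initialized).
    gpus = tf.config.experimental.list_physical_devices('GPU')
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)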
In my environment there is no mismatch between cuDNN and CUDA versions. OS: Ubuntu 18.04; TensorFlow: 1.14; cuDNN: 7.6; CUDA: 10.1 (driver 418.87.00).
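For what it's worth, a quick way to confirm from Python that the installed wheel is a CUDA build and that the GPU is actually visible (a small sketch against the TF 1.x API used in this thread; on the driver/toolkit side, `nvidia-smi` and `nvcc --version` report the versions listed above):

    import tensorflow as tf

    # Sanity checks that the TensorFlow build matches the CUDA/cuDNN stack.
    print("TensorFlow:", tf.__version__)
    print("Built with CUDA:", tf.test.is_built_with_cuda())
    print("GPU available:", tf.test.is_gpu_available(cuda_only=True))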