Question
I've built a network in Keras that uses both convolutional and LSTM layers.
I've also read that TensorFlow doesn't handle LSTM layers well on the GPU: http://minimaxir.com/2017/07/cpu-or-gpu/
In fact, my network runs slower on the GPU (hopefully that's not a sign of a deeper problem).
I want TensorFlow to run all the convolutional operations on the GPU but keep all the LSTM operations on the CPU. Is that possible?
Given that I can inspect the TensorFlow graph and identify each of its tensors, can I manually choose the device on which each operation runs?
Answer 1:
If you use TensorFlow as the Keras backend, you can pin the layers you build to a specific device with a tf.device context manager:

with tf.device('/cpu:0'):
    # model layers that should run on the CPU go here
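To make the placement concrete for a mixed conv/LSTM model like the one in the question, here is a minimal sketch using the Keras functional API. The input shape and layer sizes are made up for illustration; the point is only that the convolutional part is built inside a '/gpu:0' scope and the LSTM inside a '/cpu:0' scope (with TensorFlow's default soft placement, the GPU scope silently falls back to CPU on a machine without a GPU):

```python
import tensorflow as tf

# Hypothetical video-like input: 10 time steps of 32x32 RGB frames.
inputs = tf.keras.Input(shape=(10, 32, 32, 3))

# Convolutional layers pinned to the GPU.
with tf.device('/gpu:0'):
    x = tf.keras.layers.TimeDistributed(
        tf.keras.layers.Conv2D(16, 3, activation='relu'))(inputs)
    x = tf.keras.layers.TimeDistributed(
        tf.keras.layers.GlobalAveragePooling2D())(x)

# LSTM layer pinned to the CPU.
with tf.device('/cpu:0'):
    x = tf.keras.layers.LSTM(32)(x)

outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
```

The same `with tf.device(...)` pattern works for any subset of layers, so you can split the graph between devices at whatever granularity you like.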
Answer 2:
See the "Specifying distributed devices in your model" section of the Distributed TensorFlow documentation.
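Whichever placement approach you use, you can verify where operations actually execute by turning on device-placement logging. A small sketch (the constants are arbitrary; the log output names the device each op ran on):

```python
import tensorflow as tf

# Log the device assignment of every op that runs.
tf.debugging.set_log_device_placement(True)

with tf.device('/cpu:0'):
    a = tf.constant([[1.0, 2.0]])
    b = tf.constant([[3.0], [4.0]])
    c = tf.matmul(a, b)  # executes on the CPU; the log line confirms it

print(c.numpy())  # [[11.]]
```

The resulting tensor's `device` attribute also records the placement, e.g. a string ending in `device:CPU:0`.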
Source: https://stackoverflow.com/questions/46366216/tensorflow-is-it-possible-to-manually-decide-which-tensors-in-a-graph-go-to-th