Tensorflow - Is it possible to manually decide which tensors in a graph go to the GPU and CPU?

Posted by 你离开我真会死 on 2019-12-10 21:49:46

Question


I've created a network in Keras that uses both convolutional and LSTM layers.

I've read that TensorFlow doesn't handle LSTM layers well on the GPU: http://minimaxir.com/2017/07/cpu-or-gpu/

In fact, my network got slower on the GPU (hopefully that's not a symptom of a deeper problem).

I want TensorFlow to send all convolutional operations to the GPU, but keep all the LSTM operations on the CPU. Is that possible?

So, given that I can get hold of the TensorFlow graph and identify each of its tensors, can I manually define where each tensor's operations are executed?


Answer 1:


Following the implementation of this network, you could (if you use TensorFlow as the Keras backend) try the following technique:

with tf.device('/cpu:0'):
    # model layers that should run on the CPU go here
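
For example, with the Keras functional API you can wrap the convolutional layers in a GPU device scope and the LSTM in a CPU scope. Here is a minimal sketch, assuming Keras 2 with the TensorFlow 1.x backend; the input shape and layer sizes are placeholders:

import tensorflow as tf
from keras.layers import Input, Conv1D, MaxPooling1D, LSTM, Dense
from keras.models import Model

inputs = Input(shape=(100, 16))  # (timesteps, features) -- illustrative shape

with tf.device('/gpu:0'):
    # convolutional ops are created inside this scope, so they are pinned to the GPU
    x = Conv1D(64, kernel_size=3, activation='relu')(inputs)
    x = MaxPooling1D(pool_size=2)(x)

with tf.device('/cpu:0'):
    # the LSTM ops are created here and stay on the CPU
    x = LSTM(32)(x)

outputs = Dense(1, activation='sigmoid')(x)
model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')

Keras builds the underlying TensorFlow ops when each layer is called, so the ops inherit whatever device scope is active at call time.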



Answer 2:


Look at this document: Distributed TensorFlow, section "Specifying distributed devices in your model".
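
At the raw TensorFlow level, the document's technique is the same idea: wrap op construction in tf.device scopes and optionally log placement to verify where each op actually runs. A minimal sketch, assuming TensorFlow 1.x graph mode (the API that document describes):

import tensorflow as tf

with tf.device('/cpu:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='a')  # placed on the CPU

with tf.device('/gpu:0'):
    b = tf.matmul(a, a, name='b')  # placed on the GPU

# allow_soft_placement falls back to another device if the requested one is unavailable;
# log_device_placement prints the device chosen for every op.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(b))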



Source: https://stackoverflow.com/questions/46366216/tensorflow-is-it-possible-to-manually-decide-which-tensors-in-a-graph-go-to-th
