How to make the best use of the GPU for TensorFlow Estimators?

Submitted by 随声附和 on 2019-11-30 16:41:02

It sounds like you might be reading from a CSV file, converting it to a pandas DataFrame, and then using TensorFlow's pandas_input_fn. This is a known performance issue with the implementation of pandas_input_fn: it cannot feed data fast enough to keep the GPU busy. You can track the issue at https://github.com/tensorflow/tensorflow/issues/13530.
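For reference, here is a minimal sketch of the kind of setup that hits this bottleneck. The file path and column names are hypothetical placeholders, and the compat path assumes TF 2.x (in TF 1.x the same function lived at tf.estimator.inputs.pandas_input_fn):

```python
import pandas as pd
import tensorflow as tf

# Hypothetical file and column names; substitute your own.
df = pd.read_csv("data.csv")  # loads the entire CSV into memory

# pandas_input_fn pushes the DataFrame through a feeding queue that is
# single-threaded by default (num_threads=1), which is the throughput
# bottleneck tracked in issue #13530.
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
    x=df.drop(columns=["label"]),  # feature columns (hypothetical)
    y=df["label"],                 # target column (hypothetical)
    batch_size=128,
    num_epochs=None,               # repeat indefinitely for training
    shuffle=True,
)
```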

To work around this, you can use a different I/O method (reading from TFRecords, for example; see the sketch below). If you'd like to keep using pandas and increase your steps/second, you can reduce your batch_size, though this may hurt your estimator's ability to learn.
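As a rough illustration of the TFRecord route, here is a sketch of an input_fn built on tf.data. The file pattern and feature spec are hypothetical placeholders for whatever your records actually contain:

```python
import tensorflow as tf

# Hypothetical feature spec; adapt to your serialized tf.Examples.
FEATURE_SPEC = {
    "feature": tf.io.FixedLenFeature([10], tf.float32),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def tfrecord_input_fn(file_pattern="train-*.tfrecord", batch_size=128):
    def _parse(serialized):
        # Parse a whole batch of serialized tf.Examples at once.
        parsed = tf.io.parse_example(serialized, FEATURE_SPEC)
        label = parsed.pop("label")
        return parsed, label

    files = tf.data.Dataset.list_files(file_pattern)
    dataset = tf.data.TFRecordDataset(files)
    dataset = dataset.shuffle(10_000).repeat()
    dataset = dataset.batch(batch_size)
    dataset = dataset.map(_parse,
                          num_parallel_calls=tf.data.experimental.AUTOTUNE)
    # Prefetching overlaps input I/O with GPU compute.
    return dataset.prefetch(tf.data.experimental.AUTOTUNE)

# Usage with an Estimator:
#   estimator.train(input_fn=tfrecord_input_fn, max_steps=...)
```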
