Does “tf.config.experimental.set_synchronous_execution” make the Python tensorflow lite interpreter use multiprocessing?

Submitted by 蓝咒 on 2020-04-18 05:47:39

Question


I am using Python to do object detection in a video stream. I have a TensorFlow Lite model which takes a relatively long time to evaluate. Using interpreter.invoke(), it takes about 500 ms per evaluation. I'd like to use parallelism to get more evaluations per second.

I see that I can call the TensorFlow config tf.config.experimental.set_synchronous_execution. I was hoping that setting this would magically cause the interpreter to run in multiple processes.

However, the output of help(tf.lite.Interpreter) states:

  invoke(self)
     Invoke the interpreter.

     Be sure to set the input sizes, allocate tensors and fill values before
     calling this. Also, note that this function releases the GIL so heavy
     computation can be done in the background while the Python interpreter
     continues. No other function on this object should be called while the
     invoke() call has not finished.
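Because invoke() releases the GIL, plain Python threads can already overlap inference work, provided each thread owns its own Interpreter instance (the docstring forbids calling any other method while invoke() is in flight). A minimal sketch of that pattern is below; since no model file is available here, a fake_invoke that sleeps stands in for the real interpreter.invoke() call, which likewise releases the GIL while the C++ kernel runs:

```python
import threading
import time

def fake_invoke(frame):
    # Placeholder for interpreter.invoke(); a real TFLite call releases
    # the GIL during compute, which sleeping also does.
    time.sleep(0.1)
    return frame * 2

def worker(frames, results, index):
    # In real code each thread would own its own tf.lite.Interpreter,
    # because no other method may be called while invoke() is running.
    results[index] = [fake_invoke(f) for f in frames]

frames = list(range(8))
results = [None, None]
threads = [
    threading.Thread(target=worker, args=(frames[:4], results, 0)),
    threading.Thread(target=worker, args=(frames[4:], results, 1)),
]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
# Two threads each spend 4 x 0.1 s inside the GIL-releasing call,
# so they overlap: total wall time is ~0.4 s rather than ~0.8 s.
```

This only helps if the heavy work genuinely happens outside the GIL; for fully CPU-bound Python code, processes are needed instead.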

Do I need to write my own multiprocessing code, similar to speedup TFLite inference in python with multiprocessing pool?
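The approach in that linked question is a multiprocessing pool where each worker process builds one interpreter in the pool initializer and reuses it for every frame. A hedged sketch of the pattern follows; the init_worker/infer bodies are placeholders (a dict and a squaring function stand in for the real Interpreter construction and set_tensor/invoke/get_tensor calls, and the fork start method is assumed, i.e. a Unix-like system):

```python
import multiprocessing as mp

_interpreter = None  # one per worker process, created once

def init_worker():
    # In real code: tf.lite.Interpreter(model_path=...) followed by
    # allocate_tensors(), run once per process. A dict stands in here.
    global _interpreter
    _interpreter = {"ready": True}

def infer(frame):
    # In real code: set_tensor(...), invoke(), get_tensor(...).
    assert _interpreter["ready"]
    return frame * frame

def run_parallel(frames, processes=2):
    # "fork" is assumed available (Linux/macOS); it avoids re-importing
    # the calling script in each worker.
    ctx = mp.get_context("fork")
    with ctx.Pool(processes, initializer=init_worker) as pool:
        return pool.map(infer, frames)
```

One interpreter per process sidesteps the "no other function may be called while invoke() has not finished" restriction entirely, at the cost of loading the model once in every worker.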

Source: https://stackoverflow.com/questions/61263640/does-tf-config-experimental-set-synchronous-execution-make-the-python-tensorfl
