What is the difference between static computational graphs in TensorFlow and dynamic computational graphs in PyTorch?

温柔的废话 2021-01-31 03:48

When I was learning TensorFlow, one basic concept was the computational graph, and those graphs were said to be static. In PyTorch, the graphs were said to be dynamic. What is the difference between the two?

3 Answers
  •  有刺的猬
    2021-01-31 04:22

    Both TensorFlow and PyTorch allow you to specify new computations at any point in time. However, TensorFlow has a "compilation" step that incurs a performance penalty every time you modify the graph. So TensorFlow's optimal performance is achieved when you specify the computation once and then flow new data through that same sequence of computations.
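    A minimal sketch of that contrast (assuming the TensorFlow 1.x graph-mode API and PyTorch's eager autograd; the values are illustrative only):

        # TensorFlow 1.x: build the graph once, then flow new data through it.
        import tensorflow as tf
        a = tf.placeholder(tf.float32, shape=[])
        b = a * a                      # adds a node to the graph; nothing runs yet
        with tf.Session() as sess:
            for v in [1.0, 2.0, 3.0]:
                print(sess.run(b, feed_dict={a: v}))   # same graph, new data each time

        # PyTorch: the graph is rebuilt on the fly by ordinary Python code.
        import torch
        for v in [1.0, 2.0, 3.0]:
            t = torch.tensor(v, requires_grad=True)
            out = t * t                # computes immediately, recording a fresh graph
            out.backward()
            print(out.item(), t.grad.item())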

    It's similar to interpreters vs. compilers -- the compilation step makes things faster, but also discourages people from modifying the program too often.

    To make things concrete: when you modify the graph in TensorFlow (by appending new computations through the regular API, or removing computations with tf.contrib.graph_editor), this line is triggered in session.py. It serializes the graph, and the underlying runtime then reruns some optimizations, which can take extra time, perhaps 200 usec. In contrast, running an op in a previously defined graph, or in numpy/PyTorch, can be as low as 1 usec.
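    A rough way to see that cost in practice (a sketch assuming the TensorFlow 1.x Session API; absolute timings will vary by machine):

        import time
        import tensorflow as tf

        x = tf.placeholder(tf.float32, shape=[])
        y = x * 2.0

        with tf.Session() as sess:
            # First run after building the graph pays the "compilation" cost.
            t0 = time.time(); sess.run(y, feed_dict={x: 1.0})
            print("first run:", time.time() - t0)

            # Re-running the same op: graph unchanged, no re-serialization.
            t0 = time.time(); sess.run(y, feed_dict={x: 2.0})
            print("repeat run:", time.time() - t0)

            # Appending a new op modifies the graph; the next run pays the
            # serialization/optimization cost again.
            z = y + 1.0
            t0 = time.time(); sess.run(z, feed_dict={x: 3.0})
            print("run after graph modification:", time.time() - t0)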
