When we want to use distributed TensorFlow, we create a parameter server using
tf.train.Server.join()
However, I can't find any way to shut the server down once it is no longer needed.
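For context, a minimal sketch of the pattern in question. The cluster addresses and job names here are hypothetical, and the lookup of the Server class is only there so the sketch works on both TF 1.x (tf.train.Server) and TF 2.x (tf.distribute.Server):

```python
import tensorflow as tf

# Hypothetical single-machine cluster: one parameter server, one worker.
CLUSTER = tf.train.ClusterSpec(
    {"ps": ["localhost:2222"], "worker": ["localhost:2223"]})

# tf.train.Server is the TF 1.x name; TF 2.x exposes the same class
# as tf.distribute.Server.
Server = getattr(tf.train, "Server", None) or tf.distribute.Server

# Constructing the server binds its port immediately; it serves from
# background threads, so this call does not block.
ps_server = Server(CLUSTER, job_name="ps", task_index=0)

# A real parameter-server process would now block forever:
#   ps_server.join()   # never returns -- there is no shutdown call
```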
There's currently no clean way to shut down a TensorFlow gRPC server. It is possible to shut down a gRPC server, but doing it safely requires additional memory management for all of the in-flight request and response buffers, which would require a lot of additional plumbing (of the worst kind: asynchronous shared memory management...) for a feature that nobody had requested—until now!
In practice you should be able to use the same tf.train.Server
object for many different computations. If this doesn't work for your use case, please feel free to open a GitHub issue and tell us more about it.
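To illustrate the reuse pattern, here is a sketch that runs several unrelated computations against one long-lived in-process server; it assumes the TF 1.x graph-mode API, reached through tf.compat.v1 on TF 2.x:

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()  # the Session-based API needs graph mode

# One in-process, single-task gRPC server, created once and reused.
server = tf1.train.Server.create_local_server()

def run_on_server(fetch):
    """Run one computation against the long-lived server."""
    with tf1.Session(server.target) as sess:
        return sess.run(fetch)

# Several unrelated computations share the same Server object; when the
# work is done, the process simply exits rather than shutting it down.
a = run_on_server(tf1.constant(2) * tf1.constant(3))           # 6
b = run_on_server(tf1.add(tf1.constant(10), tf1.constant(4)))  # 14
```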