I've been using TensorFlow for a while now. At first I had stuff like this:
```python
def myModel(training):
    with tf.variable_scope('model', reuse=not training):
        ...  # model definition
```
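In other words, the training call creates the variables and the evaluation call reuses them within the same graph. A self-contained sketch of the idea (the dense layer and input shape below are hypothetical, just to make the snippet runnable):

```python
import tensorflow as tf

def myModel(training):
    # reuse=not training: the training call creates the variables,
    # the evaluation call reuses them
    with tf.variable_scope('model', reuse=not training):
        x = tf.placeholder(tf.float32, [None, 10])   # hypothetical input
        return x, tf.layers.dense(x, 2, name='out')  # hypothetical layer

train_x, train_logits = myModel(training=True)   # creates model/out/* variables
eval_x, eval_logits = myModel(training=False)    # reuses the same variables
```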
`tf.estimator.Estimator` classes indeed create a new graph for each invocation, and this has been the subject of furious debates, see this issue on GitHub. Their approach is to build the graph from scratch on each `train`, `evaluate` and `predict` invocation and restore the model from the last checkpoint. There are clear downsides of this approach, for example:
- a loop that calls `train` and `evaluate` will create two new graphs on every iteration;
- it's hard to evaluate while training (there is `train_and_evaluate`, but this doesn't look very nice).

I tend to agree that having the same graph and model for all actions is convenient, and I usually go with this solution. But in a lot of cases, when using a high-level API like `tf.estimator.Estimator`, you don't deal with the graph and variables directly, so you shouldn't care how exactly the model is organized.
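To make the graph-per-call behaviour concrete, here is a minimal TF 1.x sketch; the `model_fn` body, `input_fn`, and `model_dir` are made up for the example:

```python
import numpy as np
import tensorflow as tf

def model_fn(features, labels, mode):
    # Estimator calls this function anew for every train/evaluate/predict
    # invocation, building a fresh graph each time.
    logits = tf.layers.dense(features, 2)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=logits)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    if mode == tf.estimator.ModeKeys.TRAIN:
        train_op = tf.train.AdamOptimizer().minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
    return tf.estimator.EstimatorSpec(mode, loss=loss)

def input_fn():
    # hypothetical in-memory dataset
    x = np.random.rand(32, 10).astype(np.float32)
    y = np.random.randint(0, 2, size=32)
    return tf.data.Dataset.from_tensor_slices((x, y)).batch(8).repeat()

estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='/tmp/my_model')
estimator.train(input_fn, steps=10)    # graph #1, then a checkpoint is written
estimator.evaluate(input_fn, steps=2)  # graph #2, restored from that checkpoint
```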
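And for the `train_and_evaluate` workaround mentioned above, the usage is roughly this, continuing with the hypothetical `estimator` and `input_fn` from the previous sketch:

```python
train_spec = tf.estimator.TrainSpec(input_fn=input_fn, max_steps=1000)
eval_spec = tf.estimator.EvalSpec(input_fn=input_fn, steps=10)
# alternates between training and evaluation, but still rebuilds the graph
# and restores from the latest checkpoint for every evaluation round
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```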