I'm trying to run a TensorFlow graph to train a model and periodically evaluate it on a separate evaluation dataset. Both the training and evaluation data are implemented using
Have you read the last section of this link about multiple inputs?
I think you can add an is_training
argument to your input function to distinguish training data from eval data.
Then you can share the variables (variable reuse) to get the logits for the eval data and build an op for evaluation.
Then in your session, run validation_accuracy = sess.run(eval_op)
to get the eval accuracy.
Update:
Hi, from my understanding, if you want to alternate between training for n batches and evaluating, you can keep both ops in the same graph; there is no need to build a second one. Assuming you have already built all the needed functions, the code should look like this:
# The following two calls add the train and eval input queues to the graph.
train_inputs, train_labels = inputs(is_train=True)
eval_inputs, eval_labels = inputs(is_train=False)

with tf.variable_scope("inference") as scope:
    train_logits = inference(train_inputs)
    scope.reuse_variables()  # share the same weights for evaluation
    eval_logits = inference(eval_inputs)

train_loss = loss(train_logits, train_labels)  # don't shadow the loss() function
eval_accuracy = accuracy(eval_logits, eval_labels)
# ... add the train op here, start the queue runners, and train ...
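The train/evaluate alternation itself is just a loop that runs the train op every step and the eval op every few steps. Here is a minimal framework-agnostic sketch of that schedule, with plain Python callables standing in for the sess.run(...) calls; the names train_and_evaluate, train_step, and eval_step are illustrative, not from the code above:

```python
def train_and_evaluate(num_steps, eval_every, train_step, eval_step):
    """Interleave training and evaluation in a single loop.

    train_step and eval_step stand in for sess.run(train_op) and
    sess.run(eval_accuracy) on the one shared graph.
    """
    history = []
    for step in range(1, num_steps + 1):
        train_step()
        if step % eval_every == 0:
            history.append((step, eval_step()))
    return history


# Toy stand-ins: count training steps and return a fake accuracy.
steps_run = []
train_step = lambda: steps_run.append(1)
eval_step = lambda: 0.9  # pretend eval accuracy

history = train_and_evaluate(num_steps=10, eval_every=3,
                             train_step=train_step, eval_step=eval_step)
print(history)  # evaluations after steps 3, 6, and 9
```

Because both ops live in the same graph, each "evaluate" call reuses the weights just updated by training, which is exactly what the variable-scope reuse above enables.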