Load a TensorFlow graph once and use it multiple times?

Submitted by 二次信任 on 2021-02-08 10:28:57

Question


I have a TF model that was saved with the `tf.train.Saver` class, so I have the `.meta` file, the `.data-00000-of-00001` file, the `.index` file and the `checkpoint` file.

I use it for inference like this:

    import numpy as np
    import tensorflow as tf

    graph_num = tf.Graph()
    with graph_num.as_default():
        sess = tf.Session()
        with sess.as_default():

            # Restore the graph structure from the .meta file, then the trained weights
            new_saver = tf.train.import_meta_graph('{}.meta'.format(model_path), clear_devices=True)
            new_saver.restore(sess, '{}'.format(model_path))
            sess.run(tf.tables_initializer())

            # Look up the input placeholders and the output tensor by name
            arr_placeholder = graph_num.get_operation_by_name('arr_placeholder/inp_array').outputs[0]
            str_placeholder = graph_num.get_operation_by_name('str_placeholder/inp_string').outputs[0]
            dropout_keep_prob = graph_num.get_operation_by_name('dropout_keep_prob/keep_prob').outputs[0]

            logis = graph_num.get_tensor_by_name('logits/preds/BiasAdd:0')

            def model_api(input_data):
                # ...preprocessing input_data into list_of_primary_inputs
                # and place_holder_list...

                a = sess.run(tf.nn.softmax(logis),
                             feed_dict={
                                 arr_placeholder:
                                     np.array(list_of_primary_inputs).reshape(len(list_of_primary_inputs), 142),
                                 dropout_keep_prob: 1.0,
                                 str_placeholder: place_holder_list
                             })

                return a
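
For reference, the `tf.nn.softmax` at the end just normalizes the logits into probabilities; a stdlib-Python sketch of the same computation (the names here are illustrative, not from the model):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, then
    # exponentiate and normalize so the outputs sum to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
# probs sums to 1.0, and the largest logit gets the largest probability
```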

So far so good, but then I call the function like this:

    tf.reset_default_graph()
    result = model_api(test_input_data)

Each time I call it, it gives me different results for the same test data.

But when I instantiate the graph again and then call the function, it gives me the same numbers.

This behaviour is rather odd, and I don't want to reload the graph every time I want to pass in new instances.

I can't use a for loop within the session, because the instances to be predicted arrive in real time, so I have to use a function that accepts arguments.
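
The pattern I'm after, load the expensive state once and then serve any number of calls, looks roughly like this in plain Python (the `Predictor` class and its fields are just illustrative stand-ins for the TF graph/session setup, not real API):

```python
class Predictor:
    """Loads an (expensive) model once; predict() can then be
    called any number of times with fresh inputs."""

    def __init__(self):
        # Stand-in for restoring the graph and session; runs exactly once.
        self.weights = [0.5, -0.25]

    def predict(self, features):
        # Stand-in for sess.run(...); a pure function of the inputs,
        # so identical inputs should always give identical outputs.
        return sum(w * x for w, x in zip(self.weights, features))

predictor = Predictor()            # load once
a = predictor.predict([1.0, 2.0])  # call many times as requests arrive
b = predictor.predict([1.0, 2.0])
# a == b: same input, same output
```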

I saw this post too: Reuse graph, and use it multiple times, but that didn't help my case.

I tried freezing the graph (converting the existing meta graph into a .pb file), but that too gave me an error with one of the lookup tables I have. That is filed as a separate issue on GitHub, and unfortunately the workaround (more of a hack) mentioned there didn't work for me. The issue is still open: https://github.com/tensorflow/tensorflow/issues/8665

I have even set `tf.set_random_seed` to a constant value while training, and tried doing the same for the inference part as well, but to no avail.
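
For what it's worth, seeding does make pseudo-random draws reproducible in isolation; a stdlib illustration of what I expected the seed to do (the `draws` helper is hypothetical, just for demonstration):

```python
import random

def draws(seed, n=3):
    # A fresh, independently-seeded generator per call, so the
    # sequence depends only on the seed, not on global state.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

first = draws(42)
second = draws(42)
# first == second: the same seed reproduces the same sequence
```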

So right now I'm at my wits end.

Why does it give me different results each time? And is there a way to load the graph once, and then keep running new instances without running into this issue of inconsistent outputs?

Source: https://stackoverflow.com/questions/50812641/load-a-tensorflow-graph-once-and-use-it-multiple-times
