Reading a protobuf created with TF2 using TF1

Submitted by 生来就可爱ヽ(ⅴ<●) on 2021-01-28 21:13:21

Question


I have a model stored as an HDF5 file, which I export to a protobuf (PB) file using tf.saved_model.save, like this:

from tensorflow import keras
import tensorflow as tf
model = keras.models.load_model("model.hdf5")
tf.saved_model.save(model, './output_dir/')

This works fine: the result is a directory containing a saved_model.pb file, which I can later open in other software with no issues.

However, when I try to import this PB file using TensorFlow 1, my code fails. Since PB is supposed to be a universal format, this confuses me.

The code I use to read the PB file is this:

import tensorflow as tf
curr_graph = tf.Graph()
curr_sess = tf.InteractiveSession(graph=curr_graph)
f = tf.gfile.GFile('model.hdf5','rb')
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
f.close()

This is the exception I get:

Traceback (most recent call last):
  File "read_pb.py", line 14, in <module>
    graph_def.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message

I have a different model stored as a PB file on which the reading code works fine.

What's going on?

***** EDIT 1 *****

While using Andrea Angeli's code below, I've encountered the following error:

Encountered Error: NodeDef mentions attr 'exponential_avg_factor' not in Op y:T, batch_mean:U, batch_variance:U, reserve_space_1:U, reserve_space_2:U, reserve_space_3:U; attr=T:type,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT]; attr=U:type,allowed=[DT_FLOAT]; attr=epsilon:float,default=0.0001; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]; attr=is_training:bool,default=true>; NodeDef: {node u-mobilenetv2/bn_Conv1/FusedBatchNormV3}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).

Is there a workaround for this?


Answer 1:


You are trying to read the HDF5 file, not the protobuf file you saved with tf.saved_model.save(..). Also beware: the protobuf exported by TF 2 is not the same as TF 1's frozen graph, as it contains only the computation graph (the weights are stored separately in the variables/ subfolder).
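As a side note, if you only need the SavedModel as-is (rather than a TF 1-style frozen graph), TF 1.x can load the exported directory through its v1 loader API. Here is a minimal, hedged sketch: it uses a tiny stand-in Dense model instead of model.hdf5, and tf.compat.v1 so it also runs under a TF 2.x install (on a real TF 1.x install, use plain tensorflow):

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

# Tiny stand-in for the model loaded from model.hdf5 in the question.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
export_dir = os.path.join(tempfile.mkdtemp(), 'output_dir')
tf.saved_model.save(model, export_dir)

# TF 1-style loading: on a real TF 1.x install this is
# tf.saved_model.loader.load inside a tf.Session.
v1 = tf.compat.v1
with v1.Session(graph=v1.Graph()) as sess:
    # Restore the meta graph tagged 'serve' (the tag TF 2 exports with).
    meta_graph = v1.saved_model.loader.load(sess, ['serve'], export_dir)
    sig = meta_graph.signature_def['serving_default']
    # Read the tensor names from the signature instead of guessing them.
    inp_name = next(iter(sig.inputs.values())).name
    out_name = next(iter(sig.outputs.values())).name
    result = sess.run(out_name,
                      feed_dict={inp_name: np.zeros((1, 3), np.float32)})
```

Taking the tensor names from the 'serving_default' signature avoids hard-coding node names, which differ between exports.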

Edit 1: If you want to export a TF 1-style frozen graph from a TF 2 model, you can do so with the following snippet:

import os

import tensorflow as tf
from tensorflow.python.framework import convert_to_constants

def export_to_frozen_pb(model: tf.keras.models.Model, path: str) -> None:
    """
    Creates a frozen graph from a keras model.

    Turns the weights of a model into constants and saves the resulting graph into a protobuf file.

    Args:
        model: tf.keras.Model to convert into a frozen graph
        path: Path to save the protobuf file
    """
    inference_func = tf.function(lambda input: model(input))

    concrete_func = inference_func.get_concrete_function(tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
    output_func = convert_to_constants.convert_variables_to_constants_v2(concrete_func)

    graph_def = output_func.graph.as_graph_def()
    graph_def.node[-1].name = 'output'

    with open(os.path.join(path, 'saved_model.pb'), 'wb') as frozen_pb:
        frozen_pb.write(graph_def.SerializeToString())

This will write a protobuf file (saved_model.pb) to the location given by the path parameter. Your graph's input node will be named "input:0" (this comes from the lambda's argument name) and the output node "output:0".
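To complete the round trip, here is a hedged, self-contained sketch of reading such a frozen graph back with the TF 1-style API. It again uses a tiny stand-in Dense model and tf.compat.v1 (so it runs under TF 2.x as well; on a real TF 1.x install, use plain tensorflow). The loop that deletes the exponential_avg_factor attribute is a possible workaround for the NodeDef error from Edit 1: that FusedBatchNormV3 attribute was added in newer TF versions, and since it only affects training-time moving-average updates, stripping it should be safe for inference:

```python
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import convert_to_constants

# Tiny stand-in for the model loaded from model.hdf5 in the question.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

# Freeze it exactly as in export_to_frozen_pb above.
inference_func = tf.function(lambda input: model(input))
concrete_func = inference_func.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
output_func = convert_to_constants.convert_variables_to_constants_v2(concrete_func)
graph_def = output_func.graph.as_graph_def()
graph_def.node[-1].name = 'output'

# Possible workaround for Edit 1: strip attributes unknown to older runtimes.
for node in graph_def.node:
    if 'exponential_avg_factor' in node.attr:
        del node.attr['exponential_avg_factor']

# TF 1-style import and inference, using the node names set up above.
v1 = tf.compat.v1
with v1.Graph().as_default() as graph:
    v1.import_graph_def(graph_def, name='')
    inp = graph.get_tensor_by_name('input:0')
    out = graph.get_tensor_by_name('output:0')
with v1.Session(graph=graph) as sess:
    result = sess.run(out, feed_dict={inp: np.zeros((1, 3), np.float32)})
```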



Source: https://stackoverflow.com/questions/61577982/reading-a-protobuf-created-with-tf2-using-tf1
