Graph transform gives error in Tensorflow

Submitted by 寵の児 on 2019-12-23 01:13:11

Question


I am using TensorFlow 1.1. I want to quantize the inception_resnet_v2 model. The quantization method I used:

bazel build tensorflow/tools/quantization/tools:quantize_graph
bazel-bin/tensorflow/tools/quantization/tools/quantize_graph \
  --input=/tmp/classify_image_graph_def.pb \
  --output_node_names="softmax" --output=/tmp/quantized_graph.pb \
  --mode=eightbit

This doesn't give accurate results. For inception_v3 the results are okay, but for inception_resnet_v2 it doesn't work (0% accuracy for the predicted class labels).

I learned that I can instead use graph_transforms to quantize in my case, as described in https://github.com/tensorflow/tensorflow/issues/9301#issuecomment-307351419.

Using

bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=./frozen_model_inception_resnet_v2.pb \
  --out_graph=./quantized_weights_and_nodes_inception_resnet_v2.pb \
  --inputs='Placeholder_only' \
  --outputs='InceptionResnetV2/Logits/Predictions' \
  --transforms='
add_default_attributes
strip_unused_nodes(type=float, shape="1,299,299,3")
remove_nodes(op=Identity, op=CheckNumerics)
fold_constants(ignore_errors=true)
fold_batch_norms
fold_old_batch_norms
quantize_weights
quantize_nodes
strip_unused_nodes
sort_by_execution_order'

However, I now get the error "ValueError: No op named QuantizedAdd in defined operations" when tf.import_graph_def(graph_def, name='') is called.
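A quick way to confirm which quantized op names the transformed graph actually references is to search the serialized GraphDef bytes for them, since op type names are stored as plain strings inside the .pb file. This is only a heuristic sketch (the file path and the candidate op list are assumptions):

```python
# Heuristic check: which quantized op type names appear in the
# serialized GraphDef. If QuantizedAdd shows up here but the installed
# TensorFlow runtime does not register it, import_graph_def will fail
# with exactly the "No op named QuantizedAdd" error.

def find_quantized_ops(graph_bytes, candidates=(b"QuantizedAdd",
                                                b"QuantizedConv2D",
                                                b"QuantizedMatMul")):
    """Return the candidate op names present in the serialized graph."""
    return [name.decode() for name in candidates if name in graph_bytes]

# Usage (path assumed):
# with open("quantized_weights_and_nodes_inception_resnet_v2.pb", "rb") as f:
#     print(find_quantized_ops(f.read()))
```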

I checked similar issues and their solutions, but they do not help in my case; I still get the error. Here are links to similar issues:

Error with 8-bit Quantization in Tensorflow

Install Tensorflow with Quantization Support

In my case, _quantized_ops.so and kernels/_quantized_kernels.so are not created after running the bazel build for quantize_graph.
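If those shared objects had been built, the linked answers suggest registering them with tf.load_op_library before importing the graph. A minimal sketch of that step (the library paths are assumptions; the actual bazel-bin locations vary by build):

```python
import os

def load_quantized_kernels(libs=("_quantized_ops.so",
                                 "kernels/_quantized_kernels.so")):
    """Try to register the quantized op/kernel libraries with TensorFlow.

    Returns (loaded, missing) so the caller can see which libraries
    were actually found on disk before any loading is attempted.
    """
    loaded, missing = [], []
    for lib in libs:
        if os.path.exists(lib):
            import tensorflow as tf  # imported lazily, only when needed
            tf.load_op_library(lib)  # registers the ops with the runtime
            loaded.append(lib)
        else:
            missing.append(lib)
    return loaded, missing
```

Calling this before tf.import_graph_def would register the ops; if both libraries come back in the missing list, the bazel build did not produce them, which matches the symptom above.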

Any inputs to resolve this issue?

Source: https://stackoverflow.com/questions/44492936/graph-transform-gives-error-in-tensorflow
