Question
I have successfully trained (using Inception V3 weights as initialization) the Attention OCR model described here: https://github.com/tensorflow/models/tree/master/attention_ocr and frozen the resulting checkpoint files into a graph. How can this network be implemented using the C++ API on iOS?
Thank you in advance.
Answer 1:
As suggested by others, you can use some existing iOS demos (1, 2) as a starting point, but pay close attention to the following details:
- Make sure you use the right tools to "freeze" the model. The SavedModel is a universal serialization format for TensorFlow models; see the freezing sketch after this list.
- A model export script can, and usually does, perform some kind of input normalization. Note that the Model.create_base function expects a tf.float32 tensor of shape [batch_size, height, width, channels] with values normalized to [-1.25, 1.25]. If you do image normalization as part of the TensorFlow computation graph, make sure images are passed in unnormalized, and vice versa.
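For the freezing step, here is a minimal Python sketch (TensorFlow 1.x) using tf.graph_util.convert_variables_to_constants. The image dimensions, checkpoint path, and the `model` object are assumptions for illustration (the `model` instance comes from the attention_ocr repository, and the placeholder/create_base lines mirror the export snippet further below), not something confirmed by the original answer:

import tensorflow as tf

# Example dimensions; use the values your model was trained with.
batch_size, height, width, channels = 1, 150, 600, 3

# Build the inference graph the same way as in the export snippet below.
# `model` is assumed to be the attention_ocr Model instance created as in
# the repository's code; it is not constructed here.
data_images = tf.placeholder(
    dtype=tf.float32,
    shape=[batch_size, height, width, channels],
    name='normalized_input_images')
endpoints = model.create_base(data_images, labels_one_hot=None)

saver = tf.train.Saver()
with tf.Session() as sess:
    # Restore the trained weights (checkpoint path is a placeholder).
    saver.restore(sess, '/path/to/model.ckpt')
    # Bake the variables into constants so the graph file is self-contained.
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph.as_graph_def(),
        output_node_names=[endpoints.predicted_chars.op.name,
                           endpoints.predicted_scores.op.name])
    tf.train.write_graph(frozen_graph_def, '/tmp',
                         'frozen_attention_ocr.pb', as_text=False)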
To get the names of the input and output tensors, you can simply print them, e.g. somewhere in your export script:
data_images = tf.placeholder(
    dtype=tf.float32,
    shape=[batch_size, height, width, channels],
    name='normalized_input_images')
endpoints = model.create_base(data_images, labels_one_hot=None)
print(data_images, endpoints.predicted_chars, endpoints.predicted_scores)
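Before wiring the model into the iOS C++ code, it can help to sanity-check the frozen graph and tensor names with a quick Python run. The graph path and output tensor names below are assumptions; replace them with the values actually printed by your export script:

import numpy as np
import tensorflow as tf

# Load the frozen GraphDef (path is a placeholder).
graph_def = tf.GraphDef()
with tf.gfile.GFile('/tmp/frozen_attention_ocr.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

    # Tensor names here are illustrative; use the names printed by the
    # export script above.
    input_tensor = graph.get_tensor_by_name('normalized_input_images:0')
    chars_tensor = graph.get_tensor_by_name(
        'AttentionOcr_v1/predicted_chars:0')   # assumed name
    scores_tensor = graph.get_tensor_by_name(
        'AttentionOcr_v1/predicted_scores:0')  # assumed name

    with tf.Session(graph=graph) as sess:
        # Dummy image already normalized to [-1.25, 1.25], matching what
        # Model.create_base expects; dimensions must match the export
        # placeholder.
        image = np.zeros((1, 150, 600, 3), dtype=np.float32)
        chars, scores = sess.run([chars_tensor, scores_tensor],
                                 feed_dict={input_tensor: image})
        print(chars, scores)

The same input and output tensor names are what you then pass to Session::Run when adapting the iOS C++ demos mentioned above.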
Source: https://stackoverflow.com/questions/44990104/implementing-tensorflow-attention-ocr-on-ios