Object Detection on Android with TensorFlow Lite


Question


Trying to implement a custom object detection model with TensorFlow Lite, using Android Studio. I am following the guidance provided here: Running on mobile with TensorFlow Lite, however with no success. The example model runs properly, showing all the detected labels. Nonetheless, when I try it with my custom model, I am not getting any labels at all. I have also tried with other models (from the internet), but the outcome is the same. It seems the labels are not being passed in the right way. I copied my detect.tflite and labelmap.txt, and I changed TF_OD_API_INPUT_SIZE and TF_OD_API_IS_QUANTIZED in DetectorActivity.java, but I am still not getting results (a detected class with a bounding box and a score).

The Logcat shows the following:

2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH3 /odm/lib64/hw/gralloc.qcom.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH2 /vendor/lib64/hw/gralloc.qcom.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH1 /system/lib64/hw/gralloc.qcom.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH3 /odm/lib64/hw/gralloc.msm8953.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH2 /vendor/lib64/hw/gralloc.msm8953.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH1 /system/lib64/hw/gralloc.msm8953.so
2020-10-11 18:37:54.859 31681-31681/org.tensorflow.lite.examples.detection E/tensorflow: CameraActivity: Exception!
    java.lang.IllegalStateException: This model does not contain associated files, and is not a Zip file.
        at org.tensorflow.lite.support.metadata.MetadataExtractor.assertZipFile(MetadataExtractor.java:325)
        at org.tensorflow.lite.support.metadata.MetadataExtractor.getAssociatedFile(MetadataExtractor.java:165)
        at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.create(TFLiteObjectDetectionAPIModel.java:118)
        at org.tensorflow.lite.examples.detection.DetectorActivity.onPreviewSizeChosen(DetectorActivity.java:96)
        at org.tensorflow.lite.examples.detection.CameraActivity.onPreviewFrame(CameraActivity.java:200)
        at android.hardware.Camera$EventHandler.handleMessage(Camera.java:1157)
        at android.os.Handler.dispatchMessage(Handler.java:102)
        at android.os.Looper.loop(Looper.java:165)
        at android.app.ActivityThread.main(ActivityThread.java:6375)
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:912)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:802)

How can I get the detections? Do I need an additional (metadata) file for the labels, or am I doing something else wrong? The above case was tested on an Android 7 device. Thanks!


Answer 1:


This is an issue with that specific documentation page, which hadn't been updated.

The main problem is that the sample was updated to use models with metadata attached to them, specifically with the labels embedded as an asset of the model.

Once you add your labels file to the model's metadata, everything should just work.
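As a quick sanity check, you can inspect whether a model already has labels packed into it. Here is a minimal sketch, assuming the tflite-support Python package and a local detect.tflite:

from tflite_support import metadata as _metadata

# with_model_file raises an error if the model carries no metadata at all;
# otherwise this prints the files packed into it, e.g. ['labelmap.txt'].
displayer = _metadata.MetadataDisplayer.with_model_file("detect.tflite")
print(displayer.get_packed_associated_file_list())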




Answer 2:


To better understand the solution suggested by Gusthema, here is the code that worked in my case:

pip install tflite-support

import os

from tflite_support import flatbuffers
from tflite_support import metadata as _metadata
from tflite_support import metadata_schema_py_generated as _metadata_fb


# Creates model info.
model_meta = _metadata_fb.ModelMetadataT()
model_meta.name = "MobileNetV1 image classifier"
model_meta.description = ("Identify Unesco Monuments Route"
                          "image from a set of 18 categories")
model_meta.version = "v1"
model_meta.author = "TensorFlow"
model_meta.license = ("Apache License. Version 2.0 "
                      "http://www.apache.org/licenses/LICENSE-2.0.")


# Creates input info.
input_meta = _metadata_fb.TensorMetadataT()


input_meta.name = "image"
input_meta.description = (
    "Input image to be classified. The expected image is {0} x {1}, with "
    "three channels (red, green, and blue) per pixel. Each value in the "
    "tensor is a single byte between 0 and 255.".format(300, 300))
input_meta.content = _metadata_fb.ContentT()
input_meta.content.contentProperties = _metadata_fb.ImagePropertiesT()
input_meta.content.contentProperties.colorSpace = (
    _metadata_fb.ColorSpaceType.RGB)
input_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.ImageProperties)
input_normalization = _metadata_fb.ProcessUnitT()
input_normalization.optionsType = (
    _metadata_fb.ProcessUnitOptions.NormalizationOptions)
input_normalization.options = _metadata_fb.NormalizationOptionsT()
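# mean = std = 127.5 rescales uint8 pixel values from [0, 255] to [-1, 1];
# adjust these if your model was trained with different preprocessing.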
input_normalization.options.mean = [127.5]
input_normalization.options.std = [127.5]
input_meta.processUnits = [input_normalization]
input_stats = _metadata_fb.StatsT()
input_stats.max = [255]
input_stats.min = [0]
input_meta.stats = input_stats



# Creates output info.
output_meta = _metadata_fb.TensorMetadataT()
output_meta.name = "probability"
output_meta.description = "Probabilities of the 18 labels respectively."
output_meta.content = _metadata_fb.ContentT()
output_meta.content.contentProperties = _metadata_fb.FeaturePropertiesT()
output_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.FeatureProperties)
output_stats = _metadata_fb.StatsT()
output_stats.max = [1.0]
output_stats.min = [0.0]
output_meta.stats = output_stats
label_file = _metadata_fb.AssociatedFileT()
label_file.name = os.path.basename('/content/gdrive/My Drive/models/research/deploy/labelmap.txt')
label_file.description = "Labels for objects that the model can recognize."
label_file.type = _metadata_fb.AssociatedFileType.TENSOR_AXIS_LABELS
output_meta.associatedFiles = [label_file]


# Creates subgraph info.
subgraph = _metadata_fb.SubGraphMetadataT()
subgraph.inputTensorMetadata = [input_meta]
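# A TFLite detection model has four output tensors (locations, classes,
# scores, number of detections), hence the same metadata is reused for
# all four; this is enough for the label file to be picked up on Android.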
subgraph.outputTensorMetadata = 4*[output_meta]
model_meta.subgraphMetadata = [subgraph]

b = flatbuffers.Builder(0)
b.Finish(
    model_meta.Pack(b),
    _metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER)
metadata_buf = b.Output()


# Writes the metadata and the label file into the TFLite model file.
populator = _metadata.MetadataPopulator.with_model_file('/content/gdrive/My Drive/models/research/object_detection/exported_model/detect.tflite')
populator.load_metadata_buffer(metadata_buf)
populator.load_associated_files(['/content/gdrive/My Drive/models/research/deploy/labelmap.txt'])
populator.populate()
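After running this, detect.tflite should carry labelmap.txt as an associated file, which is what the example's TFLiteObjectDetectionAPIModel reads through MetadataExtractor.getAssociatedFile (the very call that threw the exception above).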

Finally, if you want to export the metadata to a JSON file for inspection, you can use:

displayer = _metadata.MetadataDisplayer.with_model_file('/content/gdrive/My Drive/models/research/object_detection/exported_model/detect.tflite')
export_json_file = os.path.join('/content/gdrive/My Drive/models/research/object_detection/exported_model',
                    os.path.splitext('detect.tflite')[0] + ".json")
json_file = displayer.get_metadata_json()
# Optional: write out the metadata as a json file
with open(export_json_file, "w") as f:
  f.write(json_file)

P.S.: Be careful to adapt the relevant parts of the code to your needs (e.g. if you are using 512x512 images, change the 300, 300 passed to .format() in "input_meta.description").




Answer 3:


It looks like a regression there. Could you try it with the following?

<at your TF example repo>
$ git checkout de42482b453de6f7b6488203b20e7eec61ee722e^

(The trailing ^ checks out the parent of that commit, i.e. the last revision before the suspected regression.)


Source: https://stackoverflow.com/questions/64315799/object-detection-on-android-with-tensorflow-lite
