Question
I am trying to build an app that helps visually impaired individuals detect objects/hurdles in their way. Using the TensorFlow library and Android's text-to-speech engine, the application will announce what an object is once it is detected. I'm currently building on the Android Object Detection example provided by TensorFlow, but I'm struggling to find where the label strings of the bounding boxes are stored so that I can pass them to the text-to-speech engine.
Answer 1:
I have looked at the object detection project. You can find the inference results in two places inside the project:

First, inside TFLiteObjectDetectionAPIModel.java you can add a log statement at line 227 for the recognitions object, for example:

Log.i("Recognitions", String.valueOf(recognitions.get(0).getTitle()));

Second, inside DetectorActivity.java you can log the results object at line 181.
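To make the idea concrete, here is a minimal, hedged sketch of pulling the label strings out of a list of recognitions. The `Recognition` class below is a hypothetical simplified stand-in for the one in TFLiteObjectDetectionAPIModel.java (the real one also carries an id, a bounding box, and other fields), but `getTitle()` and `getConfidence()` match the accessors used in the example project:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simplified stand-in for the Recognition results
// returned by TFLiteObjectDetectionAPIModel in the example project.
class Recognition {
    private final String title;
    private final float confidence;

    Recognition(String title, float confidence) {
        this.title = title;
        this.confidence = confidence;
    }

    String getTitle() { return title; }
    float getConfidence() { return confidence; }
}

class LabelExtractor {
    // Collect the label titles of all detections at or above a
    // confidence threshold -- these are the strings you would hand
    // to the text-to-speech engine.
    static List<String> titlesAbove(List<Recognition> recognitions,
                                    float minConfidence) {
        List<String> titles = new ArrayList<>();
        for (Recognition r : recognitions) {
            if (r.getConfidence() >= minConfidence) {
                titles.add(r.getTitle());
            }
        }
        return titles;
    }
}
```

In the app you would call something like `titlesAbove(recognitions, 0.5f)` at the point where the log statement above goes, and speak each returned title.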
Then you can follow this example to integrate TTS. I am a little pessimistic about the result, because the MultiBoxTracker produces a lot of results every second, and I don't know how the performance will hold up if many objects are detected. You will have to filter some of the results.
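The filtering concern above matters because the detector runs on every camera frame, so a naive hookup would ask the TTS engine to say "chair" many times per second. One simple approach, sketched below as a plain-Java helper (the class name and cooldown value are assumptions, not part of the example project), is to only announce a given label again after a cooldown has elapsed:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper that throttles repeated announcements so the
// TTS engine is not flooded by per-frame detections of the same object.
class AnnouncementThrottle {
    private final long cooldownMillis;
    private final Map<String, Long> lastSpoken = new HashMap<>();

    AnnouncementThrottle(long cooldownMillis) {
        this.cooldownMillis = cooldownMillis;
    }

    // Returns true if the label should be spoken now. In the app you
    // would then call the Android TextToSpeech API, e.g.
    // tts.speak(label, TextToSpeech.QUEUE_ADD, null, utteranceId).
    boolean shouldSpeak(String label, long nowMillis) {
        Long last = lastSpoken.get(label);
        if (last == null || nowMillis - last >= cooldownMillis) {
            lastSpoken.put(label, nowMillis);
            return true;
        }
        return false;
    }
}
```

A cooldown of a few seconds per label keeps the speech output intelligible while still announcing newly detected objects promptly.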
I am really interested in the outcome! If you need more help, tag me.

Happy coding!
Source: https://stackoverflow.com/questions/62623547/how-can-i-add-text-to-speech-in-tensorflow-lite-object-detection-android-based-a