object-detection

Parameters of detectMultiScale in OpenCV using Python

Posted by 女生的网名这么多〃 on 2020-08-21 04:43:13
Question: I am not able to understand the parameters passed to detectMultiScale. I know that the general syntax is detectMultiScale(image, rejectLevels, levelWeights). However, what do the parameters rejectLevels and levelWeights mean, and what are the optimal values for detecting objects? I want to use this to detect the pupil of the eye. Answer 1: A code example can be found here: http://docs.opencv.org/3.1.0/d7/d8b/tutorial_py_face_detection.html#gsc.tab=0 Regarding the parameter descriptions, you may…
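In the current OpenCV Python bindings, rejectLevels and levelWeights are *outputs* of cv2.CascadeClassifier.detectMultiScale3 (when called with outputRejectLevels=True), not tuning inputs; the usual tuning inputs are scaleFactor, minNeighbors, and minSize. A minimal sketch of how the returned levelWeights could be used as a confidence filter; the cascade file name and the threshold of 2.0 are illustrative assumptions:

```python
# rejectLevels and levelWeights are *returned* by detectMultiScale3 when
# outputRejectLevels=True; a higher levelWeight roughly means higher confidence.
#
# With OpenCV (assuming a cascade file such as "haarcascade_eye.xml" is available):
#   cascade = cv2.CascadeClassifier("haarcascade_eye.xml")
#   boxes, reject_levels, level_weights = cascade.detectMultiScale3(
#       gray, scaleFactor=1.1, minNeighbors=5, minSize=(20, 20),
#       outputRejectLevels=True)

def filter_by_weight(boxes, level_weights, threshold=2.0):
    """Keep only detections whose level weight exceeds a confidence threshold."""
    return [box for box, w in zip(boxes, level_weights) if w > threshold]

# Illustrative data standing in for detector output:
boxes = [(10, 10, 30, 30), (50, 40, 28, 28), (90, 90, 25, 25)]
weights = [0.8, 2.5, 3.1]
print(filter_by_weight(boxes, weights))  # keeps the two higher-confidence boxes
```

For pupil detection specifically, the values that usually need tuning are scaleFactor and minNeighbors on a face/eye cascade, not rejectLevels/levelWeights.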

tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[12] = 12 is not in [0, 0)

Posted by 北城余情 on 2020-08-10 20:11:17
Question: I am trying to write the equivalent of this code, which converts CSV to TFRecords, but instead I am trying to convert from JSON to TFRecords. I am trying to generate TFRecords for use with the Object Detection API. Here is my full error message: Traceback (most recent call last): File "model_main_tf2.py", line 113, in <module> tf.compat.v1.app.run() File "C:\ProgramData\anaconda3\envs\4_SOA_OD_v2\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run _run(main=main, argv=argv,…
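An error of the form `indices[12] = 12 is not in [0, 0)` typically means a lookup table of size 0: every index is out of range because no entries were ever registered. When generating TFRecords for the Object Detection API, a common cause is that the class names in the JSON annotations never matched the label map, so the class list ended up empty. A minimal sketch of a defensive label lookup; the label map contents and helper name are hypothetical:

```python
# Hypothetical label map; in the Object Detection API this comes from a
# label_map.pbtxt file. An empty or mismatched map is exactly what produces
# "indices[12] = 12 is not in [0, 0)": the lookup table has size 0.
LABEL_MAP = {"cat": 1, "dog": 2}

def class_text_to_int(label):
    """Map a class name from the JSON annotation to its integer id,
    failing loudly instead of emitting an out-of-range index."""
    try:
        return LABEL_MAP[label]
    except KeyError:
        raise ValueError(
            f"Label {label!r} missing from the label map; check that the class "
            "names in your JSON match label_map.pbtxt exactly")

print(class_text_to_int("dog"))  # 2
```

Checking every annotation through a helper like this before writing the tf.train.Example records makes a label-map mismatch fail at conversion time rather than deep inside training.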

Could not install pycocotools on Windows: fatal error C1083: Cannot open include file: 'io.h': No such file or directory

Posted by 被刻印的时光 ゝ on 2020-08-03 08:03:33
Question: I'm new to machine learning and have started on a Windows 8.1 PC with a GeForce GTX 540M. I followed this tutorial to get started with the object detection models. I built my own dataset and tried to train it per the tutorial, but with the "ssd_mobilenet_v1_coco_2017_11_17" model. I could not do so successfully, as I had trouble with the "train.py" file given in the tutorial. So I googled and found that we have to use "model_main.py" to train the model. While trying to train using…
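The missing `io.h` header comes from the MSVC build step: pycocotools compiles a C extension, and without the Windows SDK the compiler cannot find the C runtime headers. The commonly reported workaround is to install the Microsoft C++ Build Tools and then install pycocotools from a Windows-compatible fork; the fork URL below is a widely used community workaround, not an official package:

```shell
# 1. Install "Microsoft C++ Build Tools" (which includes the Windows SDK that
#    provides io.h), then in a fresh shell:
pip install cython
# 2. Install pycocotools from the Windows-compatible fork:
pip install "git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI"
```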

How do I know my --output_arrays in tflite_convert?

Posted by 我与影子孤独终老i on 2020-07-21 18:44:08
Question: I'm trying to convert my .pb to .tflite using tflite_convert. How do I know my --output_arrays? I'm using ssd_mobilenet_v2_coco_2018_03_29. This is my current command: tflite_convert --output_file=C:/tensorflow1/models/research/object_detection/inference_graph/detect.tflite --graph_def_file=C:/tensorflow1/models/research/object_detection/inference_graph/tflite_graph.pb --inference_type=FLOAT --inference_input_type=QUANTIZED_UINT8 --input_arrays=ImageTensor --input_shapes=1,513,513,3 --output…
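For an SSD model exported with the Object Detection API's export_tflite_ssd_graph.py, the input array is `normalized_input_image_tensor` (shape 1,300,300,3 for ssd_mobilenet_v2_coco), and the outputs are the four `TFLite_Detection_PostProcess` tensors; `ImageTensor` with shape 1,513,513,3 belongs to the DeepLab segmentation examples, not SSD. A hedged sketch of the corrected invocation, with the long Windows paths shortened to placeholders:

```shell
tflite_convert \
  --graph_def_file=tflite_graph.pb \
  --output_file=detect.tflite \
  --input_arrays=normalized_input_image_tensor \
  --input_shapes=1,300,300,3 \
  --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
  --inference_type=FLOAT \
  --allow_custom_ops
```

The postprocessing op is a custom op, hence `--allow_custom_ops`; if the graph was exported some other way, the node names can be inspected by loading the GraphDef and printing its node names.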

Can't get out of this hole: can't use pre-trained model's output

Posted by 爷,独闯天下 on 2020-07-09 06:41:48
Question: I use OpenCV to do object detection on a Raspberry Pi 4. I downloaded this tutorial from https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb and tried to convert it to OpenCV to run locally and take images from the webcam. I set the webcam to a 640x480 resolution, then apply some transforms to adapt the image to 300x300x3, because this should be the right input to feed the model. #crop the image to a square image = image[0:480,84:564] #now the image…
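As an aside on the crop: (640 − 480) / 2 = 80, so a truly centered square crop of a 640x480 frame is columns 80:560; the 84:564 in the question is slightly off-center. A minimal sketch of the crop-then-resize step in pure numpy (the nearest-neighbour resize stands in for what would normally be cv2.resize); the array is a stand-in for a webcam frame:

```python
import numpy as np

def center_crop_square(image):
    """Crop the largest centered square from an H x W x C image."""
    h, w = image.shape[:2]
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    return image[top:top + side, left:left + side]

def resize_nearest(image, size):
    """Nearest-neighbour resize; in practice use cv2.resize(image, (size, size))."""
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return image[rows][:, cols]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a webcam frame
square = center_crop_square(frame)               # rows 0:480, cols 80:560
model_input = resize_nearest(square, 300)
print(square.shape, model_input.shape)  # (480, 480, 3) (300, 300, 3)
```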

How can I add text-to-speech in tensorflow lite object detection android based application?

Posted by 别等时光非礼了梦想. on 2020-06-29 06:43:45
Question: I am trying to build an app that will help visually impaired individuals detect objects/hurdles in their way. Using the TensorFlow library and Android text-to-speech, once an object is detected, the application will let the user know what the object is. I'm currently trying to build off the Android object detection example provided by TensorFlow, but I'm struggling to find where the strings of the bounding-box labels are stored so that I can call this when running the text…

Hand detection and tracking methods

Posted by 余生长醉 on 2020-06-25 18:15:51
Question: So, guys, please help me with detecting/tracking a hand for a user sitting in front of a computer's (laptop's) front camera. I've tried these methods: colour-based detection (I detected the human face with the OpenCV Haar cascade face detector and extracted the skin's HSV ranges; next, I found the objects with the skin colour. The face, for example, I can remove because the Haar cascade detection gives me its location, but what about other human body parts and background objects with skin…
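The skin-colour step described above can be sketched in pure numpy, assuming the frame has already been converted to HSV (with OpenCV that would be cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)); the range values below are illustrative placeholders for the ranges sampled from the detected face:

```python
import numpy as np

def skin_mask(hsv, lower, upper):
    """Boolean mask of pixels whose H, S and V all fall inside the sampled
    range; equivalent to cv2.inRange(hsv, lower, upper) > 0."""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    return np.all((hsv >= lower) & (hsv <= upper), axis=-1)

# Illustrative ranges; in practice sample them from the detected face region.
LOWER, UPPER = (0, 40, 60), (25, 170, 255)

hsv = np.zeros((4, 4, 3), dtype=np.uint8)
hsv[1:3, 1:3] = (12, 90, 180)          # a small patch of "skin"-range pixels
mask = skin_mask(hsv, LOWER, UPPER)
print(mask.sum())  # 4
```

This also shows the method's limitation raised in the question: the mask keeps *every* skin-coloured region, so the face (and any skin-toned background) must be removed separately, e.g. by blanking the face rectangle returned by the Haar cascade.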

How to convert model trained on custom data-set for the Edge TPU board?

Posted by て烟熏妆下的殇ゞ on 2020-06-17 15:20:24
Question: I have trained on my custom dataset using the TensorFlow Object Detection API. I run my "prediction" script and it works fine on the GPU. Now I want to convert the model to TensorFlow Lite and run it on the Google Coral Edge TPU board to detect my custom objects. I have gone through the documentation that the Google Coral board website provides, but I found it very confusing. How do I convert and run it on the Google Coral Edge TPU board? Thanks. Answer 1: Without reading the documentation, it will be very hard to…
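For an SSD model trained with the Object Detection API, the usual pipeline is: export a TFLite-compatible frozen graph, convert it to a fully integer-quantized .tflite (the Edge TPU only runs integer ops), then compile it with edgetpu_compiler. A hedged sketch; paths and the checkpoint number are placeholders, and the 300x300 input shape assumes an SSD MobileNet config:

```shell
# 1. Export a TFLite-compatible frozen graph from the training checkpoint:
python export_tflite_ssd_graph.py \
  --pipeline_config_path=pipeline.config \
  --trained_checkpoint_prefix=model.ckpt-XXXX \
  --output_directory=tflite_export \
  --add_postprocessing_op=true

# 2. Convert to a fully quantized .tflite (required by the Edge TPU):
tflite_convert \
  --graph_def_file=tflite_export/tflite_graph.pb \
  --output_file=tflite_export/detect.tflite \
  --input_arrays=normalized_input_image_tensor \
  --input_shapes=1,300,300,3 \
  --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
  --inference_type=QUANTIZED_UINT8 \
  --mean_values=128 --std_dev_values=128 \
  --allow_custom_ops

# 3. Compile for the Edge TPU (produces detect_edgetpu.tflite):
edgetpu_compiler tflite_export/detect.tflite
```

Step 2 assumes the model was trained with quantization-aware training; otherwise post-training full-integer quantization via the Python TFLiteConverter with a representative dataset is the documented alternative.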