object-detection-api

How to get multiple bounding box coordinates in the TensorFlow Object Detection API

Submitted by 半世苍凉 on 2019-12-11 14:52:02
Question: I want to get the coordinates of the multiple bounding boxes and the class of each bounding box, and return them as a JSON file. When I print boxes[] from the following code, it has a shape of (1, 300, 4), so there are 300 coordinates in boxes[], but there are only 2 objects in my predicted image. I want the coordinates of only the bounding boxes that are actually predicted on my image. Also, how would I know which bounding box is mapped to which category/class in the image? For example, let's say I have a dog and a person in …
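A minimal sketch of the usual approach, assuming the standard session outputs (boxes, scores, classes) and a category_index built with label_map_util from the label map; the function and variable names here are illustrative:

```python
import json
import numpy as np

def detections_to_json(boxes, scores, classes, category_index, threshold=0.5):
    """Keep only detections above a score threshold and pair each box with its class name."""
    boxes = np.squeeze(boxes)                       # (300, 4)
    scores = np.squeeze(scores)                     # (300,)
    classes = np.squeeze(classes).astype(np.int32)  # (300,)

    results = []
    for box, score, cls in zip(boxes, scores, classes):
        if score < threshold:
            continue  # the remaining rows are low-confidence padding/proposals
        ymin, xmin, ymax, xmax = box.tolist()
        results.append({
            'class': category_index[cls]['name'],
            'score': float(score),
            'box': {'ymin': ymin, 'xmin': xmin, 'ymax': ymax, 'xmax': xmax},
        })
    return json.dumps(results)
```

The API always returns a fixed number of detections (300 here), so most rows are low-score filler; the i-th entry of scores and classes belongs to the i-th row of boxes, which is how each surviving box is matched to its class.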

How to detect only humans in the TensorFlow Object Detection API

Submitted by a 夏天 on 2019-12-11 02:42:11
Question: I am using the TensorFlow Object Detection API to detect objects, and it is working fine on my Windows system. How can I change it so that it only detects the objects I specify, for example, only humans and not all the other objects? As per the first comment in this answer, I checked the visualization file but didn't find anything related to object categories. Then I looked into category_util.py and found that there is a CSV file from which all the categories are loaded, but …
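One common workaround, sketched below with illustrative names, is to leave the model untouched and simply suppress every detection whose class id is not the one you want before visualization; in the COCO label map the person class has id 1:

```python
import numpy as np

PERSON_CLASS_ID = 1  # id of 'person' in mscoco_label_map.pbtxt

def keep_only_persons(boxes, scores, classes):
    """Zero the score of every non-person detection so the visualization skips it."""
    classes = np.squeeze(classes).astype(np.int32)
    scores = np.squeeze(scores).copy()
    scores[classes != PERSON_CLASS_ID] = 0.0
    return np.squeeze(boxes), scores, classes
```

Passing the modified scores to the usual visualization call then draws only people. Note that the network still computes all 90 classes internally; this filters the output rather than making inference faster.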

TensorFlow Object Detection API: multiple object coordinates

Submitted by 江枫思渺然 on 2019-12-10 18:26:29
Question: I'm using TensorFlow for object detection from a webcam, and I also have to identify the coordinates of each detected object in the image. When I print the bounding boxes, i.e. "boxes", what I see is an array of arrays, and if I pull the first array within the boxes array it gives [ymin, xmin, ymax, xmax], which I guess are the coordinates of the first object. My question: if three objects are identified, Person, Chair and Backpack, then from the boxes array how do I get the coordinates for each …
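The boxes are returned in normalized [ymin, xmin, ymax, xmax] order, and the i-th box, score and class all describe the same object, so per-object pixel coordinates only require scaling by the image size. A rough sketch, assuming the output_dict returned by the tutorial's run_inference_for_single_image helper:

```python
def print_object_coordinates(output_dict, category_index, image_shape, threshold=0.5):
    """Print class name and pixel coordinates for each confident detection."""
    height, width = image_shape[:2]
    for i in range(int(output_dict['num_detections'])):
        if output_dict['detection_scores'][i] < threshold:
            continue
        ymin, xmin, ymax, xmax = output_dict['detection_boxes'][i]
        name = category_index[int(output_dict['detection_classes'][i])]['name']
        # the i-th box, class and score all describe the same detected object
        print(name, int(ymin * height), int(xmin * width),
              int(ymax * height), int(xmax * width))
```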

'Parsing Inputs… Incomplete shape' error while exporting the inference graph in TensorFlow

Submitted by 无人久伴 on 2019-12-10 10:56:33
Question: I am training a neural network using TensorFlow's Object Detection API to detect cars. I used the following YouTube video by sentdex to learn and execute the process: https://www.youtube.com/watch?v=srPndLNMMpk&t=65s Also the text version of his videos: https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/ (parts 1 to 6 of his series). My training data has ~300 images. Test …

TensorFlow Object Detection API not displaying global steps

Submitted by 我是研究僧i on 2019-12-09 15:21:22
Question: I am new here. I recently started working with object detection and decided to use the TensorFlow Object Detection API. But when I start training the model, it does not display the global step like it should, although it is still training in the background. Details: I am training on a server and accessing it over OpenSSH from Windows. I trained on a custom dataset that I built by collecting and labeling pictures, and I trained it using model_main.py. Also, until a couple of months back, the API was a little …
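Assuming this is the common model_main.py symptom on TF 1.x, one frequently suggested fix is that the script simply is not logging at INFO level; adding a single line after the imports brings back the periodic global step and loss output:

```python
# In object_detection/model_main.py, right after the existing imports
import tensorflow as tf

tf.logging.set_verbosity(tf.logging.INFO)  # show global step and loss while training
```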

Managing classes in the TensorFlow Object Detection API

Submitted by 醉酒当歌 on 2019-12-09 04:29:31
Question: I'm working on a project that requires the recognition of just people in a video or a live stream from a camera. I'm currently using the TensorFlow Object Detection API with Python, and I've tried different pre-trained models and frozen inference graphs. I want to recognize only people, and maybe cars, so I don't need my neural network to recognize all 90 classes that come with the frozen inference graphs based on MobileNet or R-CNN; this seems to slow down the process, and 89 of those 90 classes are not needed in my project. Do I have to train my own model, or is there a way to modify the …
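A sketch of one way to restrict the output without retraining, assuming the standard label_map_util helper and the COCO label map (person is id 1, car is id 3); the frozen graph still predicts all 90 classes internally, so this does not by itself make inference faster:

```python
from object_detection.utils import label_map_util

# Build the full category index from the COCO label map ...
category_index = label_map_util.create_category_index_from_labelmap(
    'object_detection/data/mscoco_label_map.pbtxt', use_display_name=True)

# ... and keep only the classes this project cares about.
WANTED_IDS = {1, 3}  # 1 = person, 3 = car
category_index = {k: v for k, v in category_index.items() if k in WANTED_IDS}
```

Detections whose class id is not in WANTED_IDS can then be dropped before visualization. Actually shrinking the model to two classes, which is what would speed things up, requires retraining or fine-tuning with your own label map.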

How to fix “the following classes have no ground truth examples” when running object_detection/model_main.py?

Submitted by 走远了吗. on 2019-12-08 05:09:40
Question: I have defined a pascal_label_map.pbtext with 824 classes to create TFRecord files from my JPEG dataset with Pascal VOC style annotations, using create_pascal_tf_record.py. The script seems to generate these TFRecords correctly (e.g. I checked that all classes from pascal_label_map.pbtext occur in the annotations and that each JPEG comes with the correct annotation). But when I start object_detection/model_main.py I see the following: WARNING:root:The following classes have no ground truth …
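The warning means some ids in the label map never occur in the training TFRecords. One way to double-check what the records actually contain, sketched for TF 1.x and assuming the standard 'image/object/class/label' feature written by create_pascal_tf_record.py (the file name below is illustrative):

```python
import collections
import tensorflow as tf

def count_labels(tfrecord_path):
    """Count how many boxes of each class id a TFRecord file contains."""
    counts = collections.Counter()
    for record in tf.python_io.tf_record_iterator(tfrecord_path):
        example = tf.train.Example()
        example.ParseFromString(record)
        counts.update(
            example.features.feature['image/object/class/label'].int64_list.value)
    return counts

print(count_labels('pascal_train.record'))  # ids missing here are the ones in the warning
```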

How to initialize weights for convolution layers in the TensorFlow Object Detection API?

Submitted by 被刻印的时光 ゝ on 2019-12-08 05:02:49
Question: I followed this tutorial for implementing the TensorFlow Object Detection API. The preferred way is to use pretrained models, but in some cases we need to train from scratch. For that we just need to comment out two lines in the configuration file: #fine_tune_checkpoint: "object_detection/data/mobilenet_v1_1.0_224/mobilenet_v1_1.0_224.ckpt" #from_detection_checkpoint: true If I want to initialize the weights with Xavier weight initialization, how can I do that? Answer 1: As you can see in the …
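The answer points at the pipeline configuration: the weight initializer lives in the hyperparams block of the config (e.g. conv_hyperparams), and Xavier/Glorot initialization corresponds to the variance_scaling_initializer with factor 1.0, FAN_AVG mode and uniform sampling. A sketch of the relevant fragment; the exact surrounding block depends on the chosen model:

```
conv_hyperparams {
  initializer {
    variance_scaling_initializer {
      factor: 1.0      # factor 1.0 + FAN_AVG + uniform reproduces Xavier/Glorot
      uniform: true
      mode: FAN_AVG
    }
  }
}
```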

TensorFlow Object Detection API (calculating car speeds)

Submitted by 最后都变了- on 2019-12-08 04:51:17
Question: I used the TensorFlow Object Detection API to count the number of cars detected. But now I want to calculate the speed of all the detected cars. My question is: is there any way to do this using the TensorFlow Object Detection API? Answer 1: You have to keep track of the location of the cars with respect to pixels over intervals of time. You can start and stop the recording of time using the 'time' library. Also, if you are planning to calculate car speeds, you have to take the relative speed with respect to the …
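The API itself only gives per-frame boxes, so the speed estimate is your own bookkeeping: remember where each car's box centre was, measure how far it moved, and divide by the elapsed time recorded with the 'time' library. A very rough sketch; the pixels-to-metres calibration and the matching of the same car across frames are assumptions you must supply for your own camera setup:

```python
METERS_PER_PIXEL = 0.05  # illustrative calibration for a fixed camera; measure for your scene

def estimate_speed_kmh(prev_center, prev_time, curr_center, curr_time):
    """Estimate speed in km/h from the centre of the same car's box in two frames."""
    dx = (curr_center[0] - prev_center[0]) * METERS_PER_PIXEL
    dy = (curr_center[1] - prev_center[1]) * METERS_PER_PIXEL
    distance_m = (dx ** 2 + dy ** 2) ** 0.5
    elapsed_s = curr_time - prev_time          # timestamps taken with time.time()
    return (distance_m / elapsed_s) * 3.6 if elapsed_s > 0 else 0.0

# Example with made-up values: the same car's box centre in two frames 40 ms apart
print(round(estimate_speed_kmh((120, 340), 0.00, (130, 355), 0.04), 1), 'km/h')
```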