YOLO object detection: how does the algorithm predict bounding boxes larger than a grid cell?

2021-02-15 14:54

I am trying to better understand how the YOLOv2 & v3 algorithms work. The network processes a series of convolutions until it gets down to a 13x13 grid. Then it classifies objects and predicts bounding boxes within each grid cell. How can it predict a bounding box larger than a grid cell, when everything outside the cell should be unknown to the neurons responsible for that cell?

3 Answers
  • 2021-02-15 15:07

    OK, this is not my first time seeing this question; I had the same problem, and in fact for all the YOLO 1 & 2 architectures I encountered, nowhere did the network diagrams imply that classification and localization kicked in at the first layer, the moment the image was fed in. The image first passes through a series of convolution, filter and pooling layers.

    • This implies that at the early stages of the network, information is represented differently, i.e. it flows from pixels to outlines, shapes, features, etc. before an object is correctly classified or localised, just as in any normal CNN.

      Since the tensor representing the bounding box predictions and classifications is located towards the end of the network (regression trained with backpropagation), I believe it is more appropriate to say that the network:

      1. divides the image into cells (strictly speaking, the author of the network did this on the training label data, not in the network itself);
      2. for each cell, tries to predict bounding boxes with confidence scores. The convolution layers and filters that follow the cell division are what let the network predict bounding boxes larger than a single cell: their receptive fields feed on more than one cell at a time, as you can see if you look at the complete YOLO architecture.

      So, to conclude, my take is that the network predicts larger bounding boxes for a cell, not that each cell does this in isolation. The network can be viewed as a normal CNN whose output has, for each cell, a fixed number of bounding boxes plus classification scores, and whose sole goal is to apply convolutions and feature maps to detect, classify and localise objects in a single forward pass.

    A forward pass implies that neighbouring cells in the division don't query other cells backwards or recursively; larger bounding boxes are predicted by later feature maps and convolutions whose receptive areas cover several of the earlier cell divisions. Also, the fact that the box is anchored at the object's centre is a function of the training data; if the labels were changed to use the top-left corner, it wouldn't be centroidal.
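    To make the "outputs per cell" view concrete, here is a minimal sketch (Python/NumPy; the shapes S=13, B=5, C=20 are YOLOv2-style stand-ins, not any exact configuration) of how the final feature map is reinterpreted as per-cell box predictions:

    ```python
    import numpy as np

    S, B, C = 13, 5, 20          # grid size, boxes per cell, classes (YOLOv2-style assumption)
    head = np.random.randn(S, S, B * (5 + C))  # stand-in for the network's final conv output

    # Reinterpret the flat channel dimension as B predictions per cell.
    # Each prediction holds (tx, ty, tw, th, confidence) + C class scores.
    preds = head.reshape(S, S, B, 5 + C)

    cell_y, cell_x, box = 6, 6, 0
    tx, ty, tw, th, conf = preds[cell_y, cell_x, box, :5]
    class_scores = preds[cell_y, cell_x, box, 5:]
    # Nothing here constrains (tw, th) to the cell: the decoded width and
    # height can span any fraction of the whole image.
    ```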

  • 2021-02-15 15:09

    "Everything outside of the grid cell should be unknown to the neurons predicting the bounding boxes for an object detected in that cell, right?"

    Not quite. The cells correspond to a partition of the image, and each output neuron has learned to respond when the center of an object is located within its cell.

    However, the receptive field of those output neurons is much larger than the cell and actually covers the entire image. Each neuron is therefore able to recognize, and draw a bounding box around, an object much larger than its assigned "center cell".

    So a cell is centered on the center of the receptive field of the output neuron, but covers a much smaller area. The partition is also somewhat arbitrary; one could imagine, for example, overlapping cells, in which case you would expect neighboring neurons to fire simultaneously when an object is centered in the overlapping zone of their cells.
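    A quick way to see that the output neurons see far more than one 32x32 cell is to accumulate the receptive field layer by layer. A minimal sketch in plain Python, using the standard receptive-field recurrence; the layer list is a simplified stand-in for a Darknet-like backbone, not the exact architecture:

    ```python
    def receptive_field(layers):
        """Receptive field of a stack of (kernel, stride) layers,
        via the recurrence: rf += (kernel - 1) * jump; jump *= stride."""
        rf, jump = 1, 1
        for kernel, stride in layers:
            rf += (kernel - 1) * jump
            jump *= stride
        return rf

    # Hypothetical backbone: 3x3 convs with five stride-2 downsamplings
    # (416 -> 13), plus a couple of trailing 3x3 convs.
    layers = [(3, 1), (3, 2)] * 5 + [(3, 1), (3, 1)]
    print(receptive_field(layers))  # 253 -- far larger than one 32px cell
    ```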

  • 2021-02-15 15:15

    YOLO predicts offsets to anchors. The anchors are initialised so that there are 13x13 sets of them, spread over the image to make sure objects in all parts of it are detected. (In YOLOv2 each set has k=5 anchors; YOLOv3 uses k=3 anchors at each of three scales; different YOLO versions use different k.)

    The anchors can have an arbitrary size and aspect ratio, unrelated to the grid size. If your dataset has mostly large foreground objects, then you should initialise your anchors to be large. YOLO learns better if it only has to make small adjustments to the anchors.
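    The decoding step makes the "unrelated to the grid size" point explicit. YOLOv2/v3 predict offsets (tx, ty, tw, th) that are turned into a box using the formulas from the papers: the centre is sigmoid-squashed to stay inside its cell, but the width and height scale the anchor exponentially, so the box can be much larger than the cell. A minimal sketch in Python (variable names and the stride value are mine):

    ```python
    import math

    def decode_box(tx, ty, tw, th, cx, cy, pw, ph, stride=32):
        """Decode one YOLOv2/v3 prediction into an image-space box.

        (cx, cy): grid cell indices; (pw, ph): anchor width/height in pixels.
        """
        sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
        bx = (sigmoid(tx) + cx) * stride   # centre constrained to its cell
        by = (sigmoid(ty) + cy) * stride
        bw = pw * math.exp(tw)             # size unconstrained by the cell:
        bh = ph * math.exp(th)             # anchor scaled by exp(tw), exp(th)
        return bx, by, bw, bh

    # Even with zero offsets, a 200x150px anchor yields a 200x150px box,
    # far bigger than one 32x32 cell.
    print(decode_box(0.0, 0.0, 0.0, 0.0, cx=6, cy=6, pw=200, ph=150))
    ```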

    Each prediction actually uses information from the whole image, and context from the rest of the image often helps: black pixels below a vehicle, for example, could be either tyres or shadow.

    The algorithm doesn't really "know" in which cell the centre of the object is located, but during training we have that information from the ground truth, and we can train the network to guess; with enough training it ends up pretty good at it. The way that works is that the anchor closest to the ground truth is assigned to the object, while the other anchors are assigned to other objects or to the background (see the sketch below). Anchors assigned to the background are supposed to output a low confidence, while anchors assigned to an object are assessed on the IoU of their bounding boxes. Training therefore reinforces one anchor to give a high confidence and an accurate bounding box, while the other anchors give a low confidence. The example in your question doesn't include any low-confidence predictions (probably to keep things simple), but in practice there are many more low-confidence predictions than high-confidence ones.
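    Here is a minimal sketch of that assignment rule, assuming axis-aligned boxes in (cx, cy, w, h) form; a real trainer also handles multiple scales and ignore thresholds, which this omits:

    ```python
    def iou(box_a, box_b):
        """IoU of two (cx, cy, w, h) boxes."""
        ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
        ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
        bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
        bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
        iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = iw * ih
        union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
        return inter / union if union > 0 else 0.0

    def assign_anchors(ground_truths, anchors):
        """For each ground-truth box, pick the anchor with the highest IoU.

        Anchors left unassigned are treated as background (target confidence 0).
        """
        assignment = {}  # anchor index -> ground-truth index
        for g_idx, gt in enumerate(ground_truths):
            best = max(range(len(anchors)), key=lambda a: iou(gt, anchors[a]))
            assignment[best] = g_idx
        return assignment
    ```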
