Question
This is an algorithm question about the detectMultiScale(...) function from the OpenCV library. I need help understanding what OpenCV's detectMultiScale() function exactly does.
I have understood from reading the C++ code that the source image is scaled to several sizes based on the scaleFactor and size() parameters. This is the outermost loop of detectMultiScale. What I did not get is how the inner loops are organised.
The LBP feature is always computed on a sub-rectangle of 20x20 (in my case). Now I wonder how the 20x20 rectangle is shifted over the source image. Is this really done for every source pixel? That would result in (960-20)*(540-20) evaluations of sub-rectangles. I don't think it is done this way...
Can anybody shed light on how this is done? Thanks.
Source: https://stackoverflow.com/questions/59404386/detectmultiscale-internal-principle