I am trying to implement difference of Gaussians (DoG) for a specific case of edge detection. As the name of the algorithm suggests, it is actually fairly straightforward:
I know this post is old, but the question is interesting and may interest future readers. As far as I know, a DoG filter is not separable, so there are two options left: 1) compute both convolutions by calling GaussianBlur() twice, then subtract the two images; 2) build a single kernel by taking the difference of the two Gaussian kernels, then convolve it with the image once.
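The two options give identical results because convolution is linear in the kernel. Here is a small NumPy sketch (a naive convolution for checking correctness, not OpenCV's optimised GaussianBlur()) showing that blurring twice and subtracting equals convolving once with the difference kernel:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Sampled, normalised 1-D Gaussian; outer product gives the 2-D kernel.
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2.0 * sigma**2))
    g /= g.sum()
    return np.outer(g, g)

def convolve2d(img, kernel):
    # Naive 'valid'-mode 2-D convolution, just enough for a correctness check.
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32))
k1 = gaussian_kernel(9, 1.0)   # sizes/sigmas chosen arbitrarily for the demo
k2 = gaussian_kernel(9, 2.0)

# Option 1: blur twice, subtract the results.
dog_a = convolve2d(img, k1) - convolve2d(img, k2)
# Option 2: subtract the kernels, convolve once.
dog_b = convolve2d(img, k1 - k2)

print(np.allclose(dog_a, dog_b))  # True: convolution is linear
```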
About which solution is faster: solution 2 seems faster at first sight because it convolves the image only once, but the DoG kernel is not separable. Solution 1, on the contrary, involves two separable filters and may end up faster. (I do not know how the OpenCV function GaussianBlur() is optimised and whether it uses separable filters, but it is likely.)
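On the separability claim: a 2-D filter is separable exactly when its kernel matrix has rank 1 (it is an outer product of two 1-D vectors). A quick NumPy check confirms that each Gaussian kernel is rank 1 while their difference is rank 2, so the DoG kernel indeed cannot be split into a pair of 1-D passes:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Sampled, normalised Gaussian built as an outer product, hence rank 1.
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2.0 * sigma**2))
    g /= g.sum()
    return np.outer(g, g)

k1 = gaussian_kernel(9, 1.0)
k2 = gaussian_kernel(9, 2.0)

# Rank 1 => separable; rank 2 => two 1-D passes cannot reproduce it.
print(np.linalg.matrix_rank(k1))       # 1
print(np.linalg.matrix_rank(k1 - k2))  # 2
```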
However, if one uses an FFT-based convolution, the second solution is surely faster, since a single kernel means a single multiplication in the frequency domain. If anyone has any advice to add or wishes to correct me, please do.
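For completeness, here is a minimal 1-D sketch of the FFT route: zero-pad to the full output length, multiply the spectra, and transform back. Convolving once with the combined DoG kernel matches the reference of two direct convolutions subtracted (the 2-D case works the same way with rfft2/irfft2):

```python
import numpy as np

def fft_convolve(a, b):
    # Linear convolution via FFT: pad both signals to the full output
    # length, multiply their spectra, and invert the transform.
    n = len(a) + len(b) - 1
    return np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)

rng = np.random.default_rng(1)
sig = rng.random(256)
# Two sampled, normalised Gaussians (arbitrary sigmas for the demo).
ax = np.arange(17) - 8
g1 = np.exp(-ax**2 / 2.0); g1 /= g1.sum()
g2 = np.exp(-ax**2 / 8.0); g2 /= g2.sum()

# One FFT convolution with the combined DoG kernel...
dog_fft = fft_convolve(sig, g1 - g2)
# ...matches the reference: two direct convolutions, subtracted.
dog_ref = np.convolve(sig, g1) - np.convolve(sig, g2)
print(np.allclose(dog_fft, dog_ref))  # True
```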