I am doing some image enhancement experiments, so I take photos with my cheap camera. The camera produces mosaic artifacts and all images look like a grid. I think a pillbox (out-of-focus) kernel and a Gaussian kernel would not be the best candidates. Any suggestions?
EDIT:
Sample
I suspect this cannot be done with a constant kernel, because the effect on each pixel is not the same (hence the "grid").
The effects are non-linear (and probably non-stationary), so you cannot simply invert the convolution to enhance the image -- if you could, the camera chip would do it on-board.
The best way to work out what the convolution is (or at least an approximation to it) might be to take photos of known patterns, then work in the 2D frequency (Laplace) domain and divide the resulting spectra to get a linear approximation to the filter.
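As a rough illustration of that spectral-division step, here is a minimal Python/NumPy sketch. It assumes grayscale images of equal size; the function name `estimate_kernel` and the regularisation constant `eps` are my own placeholders, not anything from the original answer.

```python
import numpy as np

def estimate_kernel(known_pattern, photo, eps=1e-3):
    """Estimate a linear blur kernel by dividing the 2D spectrum of the
    photographed test pattern by the spectrum of the known (ideal) pattern.

    `eps` regularises the division so near-zero frequencies in the pattern
    do not blow up the estimate (the value is an assumption).
    """
    P = np.fft.fft2(known_pattern)        # spectrum of the ideal pattern
    B = np.fft.fft2(photo)                # spectrum of the blurred photo
    # Regularised (Wiener-style) division: H ~= B * conj(P) / (|P|^2 + eps)
    H = B * np.conj(P) / (np.abs(P) ** 2 + eps)
    kernel = np.real(np.fft.ifft2(H))     # back to the spatial domain
    return np.fft.fftshift(kernel)        # centre the kernel for inspection
```

In practice you would average the estimate over several test patterns to reduce noise.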
I suspect that the convolution you discover by doing this will be very context-dependent -- so the best way to enhance an image might be to divide it into tiles, classify each region of the image as belonging to a different set (for each of which you could work out a different linear approximation to the convolution, based on test data), and then deconvolve each separately; a sketch of that tiling approach follows.
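A rough sketch of the tile-by-tile idea, under the same assumptions as above: `classify`, `kernels`, the tile size, and the regularisation constant are all hypothetical placeholders, and no blending between neighbouring tiles is attempted.

```python
import numpy as np

def deconvolve_tile(tile, kernel, eps=1e-2):
    """Wiener-style deconvolution of one tile with its estimated kernel."""
    K = np.fft.fft2(kernel, s=tile.shape)
    T = np.fft.fft2(tile)
    restored = T * np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft2(restored))

def enhance_by_tiles(image, classify, kernels, tile=64):
    """Split the image into tiles, classify each tile, and deconvolve it
    with the kernel estimated (from test data) for that class."""
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[y:y + tile, x:x + tile]
            k = kernels[classify(patch)]   # pick the per-class kernel
            out[y:y + tile, x:x + tile] = deconvolve_tile(patch, k)
    return out
```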
Source: https://stackoverflow.com/questions/5972952/the-blurring-kernel-of-a-low-quality-camera