I am performing feature detection in a video using MATLAB. The lighting condition varies in different parts of the video, leading to some parts getting ignored while transforming the RGB images to binary images.
The lighting condition in a particular portion of the video also changes over the course of the video.
Can you suggest the best method in MATLAB to balance the lighting across each frame and over the course of the video?
You have two options, depending on what features you want to detect and what you want to do with the video.
- Ignore the illumination of the images because (as you have concluded) this contains useless or even misleading information for your feature detection.
- Try to repair the illumination unevenness (which is what you ask for).
1) Is quite easy to do: convert your image to a colour space that separates illumination into its own channel, such as:
- HSV (ignore the V channel)
- Lab (ignore L)
- YUV (ignore Y)
and perform your feature detection on the two remaining channels. Of these, HSV is the best (as noted by Yves Daoust in the comments); YUV and Lab leave some illumination information in the UV / ab channels. In my experience the last two also work depending on your situation, but HSV is best.
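A minimal sketch of option 1 in MATLAB, assuming `frame` is one RGB frame you have already read from the video (e.g. with `VideoReader`/`readFrame`):

```matlab
% Move to HSV and keep only the channels that carry little illumination
hsv = rgb2hsv(im2double(frame));
H = hsv(:,:,1);   % hue
S = hsv(:,:,2);   % saturation
% run your feature detection on H and S; ignore hsv(:,:,3), the value channel
```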
2) Is harder. I'd start by converting the image to HSV. Then you do the reparation on just the V channel:
- Apply a Gaussian blur to the V channel with a very large value for sigma. This gives you a local average of the illumination. Compute the global average V value for the image (this is a single number). Then subtract the local average from the actual V value of each pixel and add the global average. You have now done a very crude illumination equalization. You can play around a bit with the value of sigma to find one that works best.
- If this fails, look into the options zenopy gives in his answer.
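The blur-subtract-add step above could look like this in MATLAB (a sketch, not a tuned solution: `frame` is an RGB frame of your video, `sigma = 50` is just a starting guess, and `imgaussfilt` requires the Image Processing Toolbox, R2015a or later):

```matlab
hsv = rgb2hsv(im2double(frame));
V = hsv(:,:,3);
sigma = 50;                          % large sigma; tune for your footage
localAvg  = imgaussfilt(V, sigma);   % local average illumination
globalAvg = mean(V(:));              % global average (one number)
Veq = V - localAvg + globalAvg;      % subtract local, restore global level
hsv(:,:,3) = min(max(Veq, 0), 1);    % clip back into [0,1]
out = hsv2rgb(hsv);
```

On older MATLAB versions you can replace `imgaussfilt` with `imfilter` and a kernel from `fspecial('gaussian', ...)`.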
Whichever method you choose, I advise you to concentrate on what you want to do (i.e. detect features) and choose intermediate steps such as this one that suffice for your needs. So quickly try something, see how it helps your feature detection, and iterate from there.
That is not a trivial task, but there are many ways to try to overcome it. I can recommend that you start by implementing the retinex algorithm, or use an existing implementation: http://www.cs.sfu.ca/~colour/publications/IST-2000/.
The basic idea is that the Luminance (observed image intensity) = Illumination (incident light) x Reflectance (percent reflected):
L(x,y) = I(x,y) x R(x,y)
And you are interested in the R part.
To work on colour images, for each frame first move to HSV colour space and run retinex on the V (value) channel.
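A rough single-scale retinex sketch, under the decomposition above: estimate the illumination I with a Gaussian blur and recover the reflectance R in the log domain (here `frame` is an RGB frame and the sigma of 30 is an arbitrary choice you would tune):

```matlab
hsv = rgb2hsv(im2double(frame));
V = hsv(:,:,3) + eps;                 % L(x,y); eps avoids log(0)
illum = imgaussfilt(V, 30) + eps;     % smoothed estimate of I(x,y)
R = log(V) - log(illum);              % log R = log L - log I
R = (R - min(R(:))) / (max(R(:)) - min(R(:)));  % rescale to [0,1] for display
hsv(:,:,3) = R;
out = hsv2rgb(hsv);
```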
Hope that makes sense.
Aside from illumination unevenness across individual images, which is addressed by Retinex or by highpass filtering, you can think of Automatic Gain Correction across the video.
The idea is to normalise the image intensities by applying a linear transform to the color components, in such a way that the average and standard deviations of all three channels combined become predefined values (average -> 128, standard deviation -> 64).
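That linear transform could be sketched per frame like this (the target values 128 and 64 are the ones suggested above; `frame` is assumed to be a `uint8` RGB frame):

```matlab
img = double(frame);
mu = mean(img(:));                    % mean over all three channels combined
sd = std(img(:));                     % combined standard deviation
targetMu = 128;  targetSd = 64;
g = targetSd / sd;                    % gain
out = uint8(g * (img - mu) + targetMu);  % same transform on every channel
```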
Histogram equalization will have a similar effect of "standardizing" the intensity levels.
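In MATLAB this is one call per frame; applying it to the V channel only avoids shifting the colours (a sketch, assuming `frame` is an RGB frame):

```matlab
hsv = rgb2hsv(im2double(frame));
hsv(:,:,3) = histeq(hsv(:,:,3));      % equalize only the value channel
out = hsv2rgb(hsv);
```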
Unfortunately, large scene changes will impact this process, so the intensities of the background won't remain as constant as you'd expect them to.
Source: https://stackoverflow.com/questions/9292726/how-to-correct-uneven-illumination-in-images-using-matlab