Using some criterion, there are some pixels in the image that I'm not interested in, so I would like to neglect them. I just want to ask whether the approach I have followed, setting those pixels to a negative value, is reasonable.
If your data type allows it, such as signed integer (CV_32S) or floating point (CV_32F or CV_64F), it makes perfect sense to use negative values, and it is a very common way to specify ignored pixels. In that case a negative value has no special meaning beyond your own interpretation of it.
On the other hand, if you use 8-bit unsigned images (CV_8U), this may lead to errors: the negative value may either be truncated to zero or wrapped into [0, 255] by modulo-256 arithmetic, depending on your version of OpenCV. In the worst case, if you access the pixel data in an unsafe way, it may even overflow into neighboring pixels and modify their values. So if you work with 8-bit images, you should use a mask to specify ignored pixels instead, as phyrox explained.
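As a rough sketch (assuming a single-channel CV_32F image; the variable names are made up for illustration), the negative-sentinel approach and the CV_8U pitfall look like this:

#include <opencv2/core/core.hpp>
using namespace cv;

Mat img(100, 100, CV_32F, Scalar(0.5f)); // floating-point image: negative values are fine
img.at<float>(10, 10) = -1.0f;           // mark this pixel as "ignored"

double sum = 0.0;
int count = 0;
for (int y = 0; y < img.rows; ++y)
    for (int x = 0; x < img.cols; ++x) {
        float v = img.at<float>(y, x);
        if (v < 0.0f) continue;          // skip ignored pixels
        sum += v;
        ++count;
    }

// With CV_8U the same trick fails: a negative value cannot be stored,
// e.g. saturate_cast<uchar>(-1) yields 0.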
It depends on how you use the image. A negative value in a pixel has no inherent meaning. But if you use the MATLAB function imshow(img, []), it will scale all the values, treating -1 as the lowest value, so it will show up in the output.
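A rough OpenCV counterpart of that MATLAB call (my own sketch, not part of the answer) is to rescale the values into the display range with cv::normalize before showing the image:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;

Mat img = Mat::ones(100, 100, CV_32F); // example floating-point image
img.at<float>(0, 0) = -1.0f;           // an "ignored" pixel
Mat display;
// Map [min(img), max(img)] onto [0, 255]; -1, being the minimum, becomes black
normalize(img, display, 0, 255, NORM_MINMAX, CV_8U);
imshow("scaled", display);
waitKey(0);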
It is preferable to use a mask. A mask is a binary array of the same size as the image that indicates whether a pixel is valid (1) or not (0).
For example, many OpenCV functions accept an optional mask, typically as their last argument (const CvArr* mask = NULL in the old C API).
Here is an example of how to use a mask in OpenCV:
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp> // SurfFeatureDetector lives in the nonfree module (OpenCV 2.4.x)
using namespace cv;

Mat srcImage; // RGB source image

// Create a mask. Here we select a rectangle:
Mat mask = Mat::zeros(srcImage.size(), CV_8U); // the mask is single-channel CV_8U
Mat roi(mask, cv::Rect(10, 10, 100, 100));
roi = Scalar(255); // non-zero = valid pixel

// Apply a function to srcImage ONLY at the points selected by the mask
SurfFeatureDetector detector; // note: "SurfFeatureDetector detector();" would declare a function, not an object
std::vector<KeyPoint> keypoints;
detector.detect(srcImage, keypoints, mask); // the mask is passed as the last parameter
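The same mask can be reused with other masked OpenCV calls; for instance (not from the original answer), cv::mean and Mat::copyTo simply skip pixels where the mask is zero:

Scalar avg = mean(srcImage, mask);   // average computed only over pixels where mask != 0
Mat selected;
srcImage.copyTo(selected, mask);     // copies only the masked pixels; the rest remain 0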