In OpenCV, is there a difference between CV_8U and CV_8UC1? Do they both refer to an 8-bit unsigned type with one channel? If so, why are there two names? If not, what's the difference?
They should be the same. Personally, I prefer CV_8UC1, since it makes explicit in my code how many channels I am working with.
Note, however, that the predefined shorthand macros only go up to four channels (CV_8UC1 through CV_8UC4); if you are dealing with a matrix that has more channels than that, you need to pass the count explicitly, e.g. CV_8UC(10) or CV_32FC(10).
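To convince yourself the two names expand to the same type code, you can print the macros directly. A minimal sketch, assuming an OpenCV 2.x-era install with the core header on your include path:

#include <opencv2/core/core.hpp>
#include <iostream>

int main() {
    // Both macros expand to the same integer type code, so this prints 1.
    std::cout << (CV_8U == CV_8UC1) << std::endl;
    // Beyond four channels there is no CV_8UC10 macro; CV_8UC(n) builds
    // the type code for any channel count up to CV_CN_MAX.
    std::cout << CV_MAT_CN(CV_8UC(10)) << std::endl;  // prints 10
    return 0;
}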
You may want to experiment with the number of channels using the code snippet below.
#include <opencv2/core/core_c.h>
#include <iostream>
using std::cout;
using std::endl;

// Access element (row, col) of a CvMat as a flat array of elemtype;
// col here is the channel-interleaved column index.
#define CV_MAT_ELEM_CN( mat, elemtype, row, col ) \
    (*(elemtype*)((mat).data.ptr + (size_t)(mat).step*(row) + sizeof(elemtype)*(col)))

int main() {
    // A 4x4 matrix of floats with 10 channels per element.
    CvMat *M = cvCreateMat(4, 4, CV_32FC(10));
    for (int ch = 0; ch < 10; ch++) {
        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 4; j++) {
                // Write channel ch of element (i, j), then read it back.
                CV_MAT_ELEM_CN(*M, float, i, j * CV_MAT_CN(M->type) + ch) = 0.0f;
                cout << CV_MAT_ELEM_CN(*M, float, i, j * CV_MAT_CN(M->type) + ch) << " ";
            }
        }
        cout << endl << endl;
    }
    cvReleaseMat(&M);
    return 0;
}
credit: http://note.sonots.com/OpenCV/MatrixOperations.html
As you can see from this answer, they evaluate to identical types.
As for why there are two names: if you look at how the #defines are structured (again, see the linked answer), a type in OpenCV has two parts, the depth and the number of channels. The system is flexible enough to let you define types with up to 512 channels. It just so happens that when you specify one channel, the channel component of the type is set to 0, which makes the result equivalent to simply using the depth CV_8U.
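A short sketch of that packing, assuming the OpenCV core header is available; CV_MAKETYPE builds a type from a depth and a channel count, and CV_MAT_DEPTH / CV_MAT_CN unpack it again:

#include <opencv2/core/core.hpp>
#include <iostream>

int main() {
    int t = CV_8UC3;                             // depth CV_8U, 3 channels
    std::cout << CV_MAT_DEPTH(t) << std::endl;   // 0, i.e. CV_8U
    std::cout << CV_MAT_CN(t) << std::endl;      // 3
    // With one channel the channel bits are all zero, so the packed
    // type collapses to the bare depth constant:
    std::cout << (CV_MAKETYPE(CV_8U, 1) == CV_8U) << std::endl;  // 1
    return 0;
}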