Why is the color space of the cv::Mat image wrong (GBR instead of RGB or BGR)?

Submitted by 时光总嘲笑我的痴心妄想 on 2021-01-28 04:19:17

Question


I have a Python module that sends an RGB image to C++, where it is consumed. The image, however, arrives in the wrong color space no matter what I do. That is, I tried converting it to RGB, assuming it was still in BGR (although on the Python side it is deliberately converted to RGB by doing return img[:,:,::-1], and visualized with matplotlib), and vice versa — the results look the same!

This is shown below:

Original image:

This is the output of the image without any tampering

This is the output when I use a color space conversion from BGR2RGB:

cv::Mat img2;
cv::cvtColor(img, img2, cv::COLOR_BGR2RGB);
cv::imshow(title + " type:" + image_type, img2);

And this is what I get when I try to convert using RGB2BGR:

cv::Mat img2;
cv::cvtColor(img, img2, cv::COLOR_RGB2BGR);
cv::imshow(title + " type:" + image_type, img2);

And as you can see, the last two are identical. So I tried to manually swap the channels to see if I could get it to work. The code snippet:

cv::Mat img2;
cv::cvtColor(img, img2, cv::COLOR_RGB2BGR);
//cv::cvtColor(img, img2, cv::COLOR_BGR2RGB);
// Manually permute the channels: out[2] = in[0], out[0] = in[1], out[1] = in[2]
for (int i = 0; i < img.rows; i++) {
    for (int j = 0; j < img.cols; j++) {
        img2.at<cv::Vec3b>(i, j)[2] = img.at<cv::Vec3b>(i, j)[0];
        img2.at<cv::Vec3b>(i, j)[0] = img.at<cv::Vec3b>(i, j)[1];
        img2.at<cv::Vec3b>(i, j)[1] = img.at<cv::Vec3b>(i, j)[2];
    }
}

cv::imshow(title + " type:" + image_type, img2);

Here is the result:

Clearly this is RGB: everything looks right, and it seems the image was in GBR order! I have no idea what is causing this.

I also noticed that I must do a conversion first, or else the for loop above gives me memory access violations, which is strange to me. Changing BGR2RGB to RGB2BGR in cvtColor has no effect; they act the same.

I would also like to know whether there is a better way to get this into the right color space — something that doesn't require a hand-written for loop like mine and can use hardware acceleration. This operation needs to be fast, and my current solution is not good at all.
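For what it's worth (a sketch on toy numpy data, not from the original post): the per-pixel loop above is a fixed channel permutation, so it can be expressed as a single vectorized index; on the C++ side, cv::mixChannels (or cv::cvtColor, where a named conversion exists) performs the same permutation without a hand-written loop.

```python
import numpy as np

img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)

# The per-pixel loop from the snippet above, reproduced on a toy array:
out_loop = np.empty_like(img)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        out_loop[i, j, 2] = img[i, j, 0]
        out_loop[i, j, 0] = img[i, j, 1]
        out_loop[i, j, 1] = img[i, j, 2]

# The whole loop is one channel permutation: output channel k is
# input channel [1, 2, 0][k].
out_vec = img[:, :, [1, 2, 0]]
assert np.array_equal(out_loop, out_vec)
```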

Edit

By the way, this is the Python function I was calling from my C++:

def detect_and_align_all(pil_img):
    """Takes the input image and returns all the faces, aligned, as a list of RGB images.
    """
    bboxes, landmarks = detect_faces(pil_img)
    # convert to ndarray and then RGB to BGR
    cv_img = np.array(pil_img)[:, :, ::-1].copy()
    img_list = []
    for landmark in landmarks:
        img = align_face(cv_img, landmark)
        img_list.append(img[:, :, ::-1])

    return img_list
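A note on that return value (a sketch with toy numpy data, not from the original post): each img[:, :, ::-1] appended here is a view into the aligned frame, whereas a .copy() would produce a standalone, C-contiguous buffer — the change Update 2 below credits with fixing the color issue.

```python
import numpy as np

aligned = np.zeros((4, 4, 3), dtype=np.uint8)  # stand-in for align_face output
as_view = aligned[:, :, ::-1]          # what the function returns now
as_copy = aligned[:, :, ::-1].copy()   # standalone, C-contiguous buffer

assert not as_view.flags['C_CONTIGUOUS']
assert as_copy.flags['C_CONTIGUOUS']
assert as_copy.base is None            # owns its own memory
```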

and align_face ultimately calls this:

return cv2.warpAffine(cv_img, tfm, (crop_size[0], crop_size[1]))

Update 1

These are the snippets (taken from here) that I use to send the image from C++ to Python using pybind11:

py::dtype determine_np_dtype(int depth)
{
    switch (depth) 
    {
    case CV_8U: return py::dtype::of<uint8_t>();
    case CV_8S: return py::dtype::of<int8_t>();
    case CV_16U: return py::dtype::of<uint16_t>();
    case CV_16S: return py::dtype::of<int16_t>();
    case CV_32S: return py::dtype::of<int32_t>();
    case CV_32F: return py::dtype::of<float>();
    case CV_64F: return py::dtype::of<double>();
    default:
        throw std::invalid_argument("Unsupported data type.");
    }
}

std::vector<std::size_t> determine_shape(cv::Mat& m)
{
    if (m.channels() == 1) {
        return {
            static_cast<size_t>(m.rows)
            , static_cast<size_t>(m.cols)
        };
    }

    return {
        static_cast<size_t>(m.rows)
        , static_cast<size_t>(m.cols)
        , static_cast<size_t>(m.channels())
    };
}

py::capsule make_capsule(cv::Mat& m)
{
    return py::capsule(new cv::Mat(m)
        , [](void* v) { delete reinterpret_cast<cv::Mat*>(v); }
    );
}

py::array mat_to_nparray(cv::Mat& m)
{
    if (!m.isContinuous()) {
        throw std::invalid_argument("Only continuous Mats supported.");
    }

    return py::array(determine_np_dtype(m.depth())
        , determine_shape(m)
        , m.data
        , make_capsule(m));
}

and used like this:

py::scoped_interpreter guard{};
auto module = py::module::import("MyPackage.align_faces");
auto aligner = module.attr("align_all");
auto pil_converter = module.attr("cv_to_pil");

auto img = cv::imread("image1.jpg");
auto img_pil = pil_converter(mat_to_nparray(img));
auto img_face_list = aligner(img_pil);

Update 2

Thanks to @DanMasek in the comments: by storing a copy of the images on the Python side just before sending them back to C++, the issue above is solved. However, as Ext3h pointed out in the comments, there is also an artifact in the image, which has not gone away even after this latest change.

I saved two images (in BGR mode) from both Python and C++ using imwrite. The artifact shown above does not appear in these saved images; however, there is a slight difference between the two: the C++ image is a bit zoomed in compared to the Python version (both use the very same function — in fact, C++ calls the very same Python module for this), and the Python version is also larger in file size:

C++ image (size: 5,492 bytes):

Python image (size: 5,587 bytes):

What's the issue here? The code I use does not alter strides or offsets of any kind, so what is causing this?
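One plausible mechanism, sketched with toy numpy data (an assumption, not a confirmed diagnosis): a sliced view's data pointer sits at an offset inside its base allocation, so any consumer that treats that pointer as the start of a fresh contiguous block reads shifted data with the wrong layout — which can show up as an apparent crop or zoom.

```python
import numpy as np

base = np.arange(24, dtype=np.uint8).reshape(2, 4, 3)
view = base[:, :, ::-1]  # shares base's allocation, negative channel stride

# The view's first element is base[0, 0, 2], so its data pointer is two
# bytes into base's buffer; reading it as a dense block starts there and
# then walks the wrong layout entirely.
offset = view.ctypes.data - base.ctypes.data
assert offset == 2
assert not view.flags['C_CONTIGUOUS']
```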

Source: https://stackoverflow.com/questions/63266610/why-is-the-color-space-of-the-cvmat-image-wrong-gbr-instead-of-rgb-or-bgr
