Color depth reduction with OpenCV and LUT


Question


I'd like to perform a color reduction via color depth scaling.

Like this example:

The first image is CGA resolution, the second is EGA, the third is HAM. I'd like to do it with cv::LUT because I think it is the best way to do it. I can do it with greyscale with this code:

Mat img = imread("test1.jpg", 0);   // load as greyscale
Mat reduced;
Mat lookUpTable(1, 256, CV_8U);
uchar* p = lookUpTable.data;
for (int i = 0; i < 256; ++i)
    p[i] = 16 * (i / 16);           // quantize to 16 levels
LUT(img, lookUpTable, reduced);

original:

color reduced:

But if I try to do it with color, I get a strange result with this code:

Mat imgColor = imread("test1.jpg");
Mat reducedColor;
Mat lut(1, 256, CV_8UC3);
int n = 16;
for (int i = 0; i < 256; i++) {
    uchar value = floor(i / n) * n;
    cout << (int)value << endl;
    lut.at<Vec3b>(i)[2] = (value >> 16) & 0xff;
    lut.at<Vec3b>(i)[1] = (value >> 8) & 0xff;
    lut.at<Vec3b>(i)[0] = value & 0xff;
}
LUT(imgColor, lut, reducedColor);

Answer 1:


You'll probably have moved on by now, but the root of the problem is that you are doing a 16-bit shift on value, a uchar, which is only 8 bits long. Even an 8-bit shift in this case is too much, as you'll erase all the bits in the uchar, so channels 1 and 2 of your lookup table end up filled with zeros. Then there is the fact that, per the cv::LUT documentation, a multi-channel lookup table is applied channel-wise, with each channel of the table transforming the corresponding channel of the input. The net result is that only the first channel of the color image (the Blue channel) is transformed by cv::LUT, while the Green and Red channels are zeroed out.
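
For illustration, here is a minimal sketch of a direct fix (assuming the imgColor from the question): since cv::LUT applies a single-channel table to every channel of a multi-channel input, the greyscale table can be reused unchanged:

cv::Mat lut(1, 256, CV_8U);           // single channel, shared by B, G and R
for (int i = 0; i < 256; ++i)
    lut.data[i] = 16 * (i / 16);      // same quantization as the greyscale code
cv::Mat reducedColor;
cv::LUT(imgColor, lut, reducedColor); // the one table is applied to every channel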

The best way to work around these limitations is to split the color image into its channels, transform each channel separately, and then merge the transformed channels back into a new color image. See the code below:

#include <opencv2/opencv.hpp>
#include <vector>

/*
Calculates a table of 256 assignments with the given number of distinct values.

Values are taken at equal intervals from the ranges [0, 128) and [128, 256),
such that both 0 and 255 are always included in the range.
*/
cv::Mat lookupTable(int levels) {
    int factor = 256 / levels;
    cv::Mat table(1, 256, CV_8U);
    uchar *p = table.data;

    for(int i = 0; i < 128; ++i) {
        p[i] = factor * (i / factor);
    }

    for(int i = 128; i < 256; ++i) {
        p[i] = factor * (1 + (i / factor)) - 1;
    }

    return table;
}

/*
Truncates channel levels in the given image to the given number of
equally-spaced values.

Arguments:

image
    Input multi-channel image. The specific color space is not
    important, as long as all channels are encoded from 0 to 255.

levels
    The number of distinct values for the channels of the output
    image. Output values are drawn from the range [0, 255] from
    the extremes inwards, resulting in a nearly equally-spaced scale
    where the smallest and largest values are always 0 and 255.

Returns:

Multi-channel images with values truncated to the specified number of
distinct levels.
*/
cv::Mat colorReduce(const cv::Mat &image, int levels) {
    cv::Mat table = lookupTable(levels);

    std::vector<cv::Mat> c;
    cv::split(image, c);
    for (std::vector<cv::Mat>::iterator i = c.begin(), n = c.end(); i != n; ++i) {
        cv::Mat &channel = *i;
        cv::LUT(channel.clone(), table, channel);
    }

    cv::Mat reduced;
    cv::merge(c, reduced);
    return reduced;
}
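
For example, a minimal usage sketch (the file names are hypothetical):

cv::Mat img = cv::imread("test1.jpg");
cv::Mat reduced = colorReduce(img, 4); // 4 levels per channel -> 64 possible colors
cv::imwrite("reduced.jpg", reduced);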



Answer 2:


Both i and n are integers, therefore i/n is an integer. Perhaps you want it converted to double ((double)i/n) before taking the floor and multiplying by n?
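
For illustration, a self-contained sketch of the difference, using 200/16 as an example:

#include <cmath>
#include <iostream>

int main() {
    int i = 200, n = 16;
    std::cout << i / n << "\n";                          // 12: integer division truncates
    std::cout << (double)i / n << "\n";                  // 12.5: cast before dividing
    std::cout << std::floor((double)i / n) * n << "\n";  // 192: the quantized value
    return 0;
}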



Source: https://stackoverflow.com/questions/14812397/color-depth-reduction-with-opencv-and-lut
