Question
I'm currently trying to build a 16-bit grayscale "gradient" image, but my output looks weird, so I'm clearly not understanding this correctly. I was hoping somebody could shed some light on my issue. I think the "bitmap" I write is wrong, but I'm not sure.
#include "CImg.h"
using namespace std;

unsigned short buffer[1250][1250];

void fill_buffer()
{
    unsigned short temp_data = 0;
    for (int i = 0; i < 1250; i++)
    {
        for (int j = 0; j < 1250; j++)
        {
            buffer[i][j] = temp_data;
        }
        temp_data += 20;
    }
}

int main()
{
    fill_buffer();
    auto hold_arr = (uint8_t *)&buffer[0][0];
    cimg_library::CImg<uint8_t> img(hold_arr, 1250, 1250);
    img.save_bmp("test.bmp");
    return 0;
}
Current Output: (screenshot of the banded result omitted)
Answer 1:
You cannot store 16-bit greyscale samples in a BMP; see the Wikipedia article on the BMP file format.
The 16-bit-per-pixel option in a BMP packs the three colour channels into a single word (commonly 5 bits each of red, green and blue, or 5-6-5 with bitfields), but it cannot hold 16 bits of greyscale.
The 24-bit format stores one byte each for red, green and blue, but again not 16 bits of greyscale.
The 32-bit BMP is a 24-bit BMP plus an alpha byte.
You will need to use PNG, NetPBM PGM, or TIFF format instead. PGM is great because CImg can write it without any external libraries, and you can always use ImageMagick to convert it to anything else, e.g.:
convert image.pgm image.png
or
convert image.pgm image.jpg
This works:
#define cimg_use_png
#define cimg_display 0
#include "CImg.h"
using namespace cimg_library;
using namespace std;

unsigned short buffer[1250][1250];

void fill_buffer()
{
    unsigned short temp_data = 0;
    for (int i = 0; i < 1250; i++)
    {
        for (int j = 0; j < 1250; j++)
        {
            buffer[i][j] = temp_data;
        }
        temp_data += 65535 / 1250;   // integer step of 52 per row
    }
}

int main()
{
    fill_buffer();
    auto hold_arr = (unsigned short *)&buffer[0][0];
    cimg_library::CImg<unsigned short> img(hold_arr, 1250, 1250);
    img.save_png("test.png");
    return 0;
}
Note that when asking CImg to write a PNG file, you will need to use a command like this (linking against libpng and zlib) to compile:
g++-7 -std=c++11 -O3 -march=native -Dcimg_display=0 -Dcimg_use_png -L /usr/local/lib -lm -lpthread -lpng -lz -o "main" "main.cpp"
Just by way of explanation:
- -std=c++11 just sets the C++ standard
- -O3 -march=native only speeds things up and is not strictly required
- -Dcimg_display=0 means the X11 headers are not parsed, so compilation is quicker; however, it also means you can't display images from your program, so you are "head-less"
- -Dcimg_use_png means you can read/write PNG images using libpng rather than needing ImageMagick installed
- -lz -lpng links the resulting code with the zlib and PNG libraries
Answer 2:
You've got an 8-bit vs 16-bit problem: you're writing 16-bit values, but the library is interpreting them as 8-bit. That explains the dark vertical bars that are visible: the library alternates between the low and high bytes of each value, treating them as two separate pixels.
And the reason for the "gradient venetian blind" effect is again due to only considering the low byte. That'll cycle from 0 to 240 in 12 steps, and then overflow back to 4 on the next step (240 + 20 = 260, and 260 mod 256 = 4), and so on.
I'm no cimg_library expert, but a good starting point would be to replace the uint8_t occurrences with uint16_t and see what effect that has.
Source: https://stackoverflow.com/questions/51434899/c-16-bit-grayscale-gradient-image-from-2d-array