I use this code to reduce the colour depth of an image:
public void ApplyDecreaseColourDepth(int offset)
{
    int A, R, G, B;
    Color pixelColor;
    for (int y = 0; y < bitmapImage.Height; y++)
    {
        for (int x = 0; x < bitmapImage.Width; x++)
        {
            pixelColor = bitmapImage.GetPixel(x, y);
            A = pixelColor.A;
            R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2)) % offset) - 1);
            if (R < 0)
            {
                R = 0;
            }
            G = ((pixelColor.G + (offset / 2)) - ((pixelColor.G + (offset / 2)) % offset) - 1);
            if (G < 0)
            {
                G = 0;
            }
            B = ((pixelColor.B + (offset / 2)) - ((pixelColor.B + (offset / 2)) % offset) - 1);
            if (B < 0)
            {
                B = 0;
            }
            bitmapImage.SetPixel(x, y, Color.FromArgb(A, R, G, B));
        }
    }
}
My first question is: the offset that I give the function is not the depth itself, is that right?
The second is that when I save the image after reducing its colour depth, I get a file the same size as the original image. Isn't it logical that I should get a smaller file, or am I wrong?
This is the code that I use to save the modified image:
private Bitmap bitmapImage;

public void SaveImage(string path)
{
    bitmapImage.Save(path);
}
Let's start by cleaning up the code a bit. The following pattern:
R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2)) % offset) - 1);
if (R < 0)
{
    R = 0;
}
Is equivalent to this, because in integer arithmetic x - (x % offset) is the same as (x / offset) * offset:

R = Math.Max(0, (pixelColor.R + offset / 2) / offset * offset - 1);
You can thus simplify your function to this:
public void ApplyDecreaseColourDepth(int offset)
{
    for (int y = 0; y < bitmapImage.Height; y++)
    {
        for (int x = 0; x < bitmapImage.Width; x++)
        {
            Color pixelColor = bitmapImage.GetPixel(x, y);
            int A = pixelColor.A;
            int R = Math.Max(0, (pixelColor.R + offset / 2) / offset * offset - 1);
            int G = Math.Max(0, (pixelColor.G + offset / 2) / offset * offset - 1);
            int B = Math.Max(0, (pixelColor.B + offset / 2) / offset * offset - 1);
            bitmapImage.SetPixel(x, y, Color.FromArgb(A, R, G, B));
        }
    }
}
To answer your questions:
- Correct; the offset is the size of the steps in the step function. The depth per color component is the original depth minus log2(offset). For example, if the original image has a depth of eight bits per component (bpc) and the offset is 16, then the depth of each component is 8 - log2(16) = 8 - 4 = 4 bpc. Note, however, that this only indicates how much entropy each output component can hold, not how many bits per component will actually be used to store the result.
- The size of the output file depends on the stored color depth and the compression used. Simply reducing the number of distinct values each component can have won't automatically result in fewer bits being used per component, so an uncompressed image won't shrink unless you explicitly choose an encoding that uses fewer bits per component. If you are saving a compressed format such as PNG, you might see an improvement with the transformed image, or you might not; it depends on the content of the image. Images with a lot of flat untextured areas, such as line art drawings, will see negligible improvement, whereas photos will probably benefit noticeably from the transform (albeit at the expense of perceptual quality).
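To see the second point in practice, here is a minimal sketch (assuming bitmapImage is the field from the question; the file names are placeholders) that saves the same bitmap once as uncompressed BMP and once as PNG and compares the file sizes:

using System;
using System.Drawing.Imaging;
using System.IO;

// The BMP size is fixed by the pixel dimensions and pixel format;
// the PNG size depends on how compressible the (quantised) content is.
bitmapImage.Save("out.bmp", ImageFormat.Bmp);
bitmapImage.Save("out.png", ImageFormat.Png);
Console.WriteLine($"BMP: {new FileInfo("out.bmp").Length} bytes");
Console.WriteLine($"PNG: {new FileInfo("out.png").Length} bytes");

Running this before and after ApplyDecreaseColourDepth should leave the BMP size unchanged, while the PNG size may or may not drop, depending on the image content.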
You are just setting the pixel values to lower levels.
For example, if a pixel is represented by 3 channels with 16 bits per channel, and you reduce each channel's colour value to fit in 8 bits, the image size will not shrink, because the allocated pixels still have a fixed depth of 16 bits per channel.
Try saving the new values to a new image with a maximum depth of 8 bits per channel.
Then you will get an image that is smaller in bytes, though not in overall size: the X and Y dimensions of the image will remain intact. What you are doing now only reduces image quality.
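With System.Drawing, one way to actually re-store the pixels at a narrower depth is Bitmap.Clone with a different PixelFormat. A minimal sketch, assuming the default GDI+ palette quantisation is acceptable (the output file name is a placeholder):

using System.Drawing;
using System.Drawing.Imaging;

// Re-store the bitmap at 8 bits per pixel (indexed);
// Clone() converts the pixel data and GDI+ chooses the palette.
Bitmap eightBpp = bitmapImage.Clone(
    new Rectangle(0, 0, bitmapImage.Width, bitmapImage.Height),
    PixelFormat.Format8bppIndexed);
eightBpp.Save("out8bpp.png", ImageFormat.Png);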
First, I would like to ask you one simple question. :)
int i = 10;
and now i = i - 1;
Does that affect the size of i? The answer is no.
You are doing the same thing.
Indexed images are represented by two matrices: one for the colour mapping and one for the image mapping.
You are just changing the values of the elements, not deleting them, so it will not affect the size of the image.
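To illustrate that point, here is a toy sketch of an indexed representation (purely illustrative; this is not the System.Drawing API):

using System.Drawing;

// Toy model of an indexed image: a palette plus a matrix of palette indices.
Color[] palette = { Color.Black, Color.White, Color.Red };
byte[,] indices = new byte[100, 100]; // 100x100 pixels, one palette index each

// Changing a palette entry changes the rendered colours,
// but the storage (palette + index matrix) stays exactly the same size.
palette[2] = Color.DarkRed;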
You can't decrease color depth with Get/SetPixel. Those methods only change the color.
It seems you can't easily save an image to a certain pixel format, but I did find some code to change the pixel format in memory. You can try saving the result, and it might work, depending on what format you save to.
From this answer: https://stackoverflow.com/a/2379838/785745
He gives this code to change color depth:
public static Bitmap ConvertTo16bpp(Image img)
{
    // Create a new bitmap with the target pixel format...
    var bmp = new Bitmap(img.Width, img.Height, System.Drawing.Imaging.PixelFormat.Format16bppRgb555);
    // ...and redraw the source onto it, letting GDI+ convert the pixel data.
    using (var gr = Graphics.FromImage(bmp))
    {
        gr.DrawImage(img, new Rectangle(0, 0, img.Width, img.Height));
    }
    return bmp;
}
You can change the PixelFormat in the code to whatever you need.
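For example, a minimal usage sketch (the file name is a placeholder, and whether the encoder actually preserves 16bpp depends on the format you pick):

// Convert the in-memory image, then save the converted copy.
Bitmap reduced = ConvertTo16bpp(bitmapImage);
reduced.Save("reduced.png", System.Drawing.Imaging.ImageFormat.Png);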
A bitmap of a given pixel count and pixel format is always the same size, because the bitmap format applies no compression. If you compress the image with an algorithm (e.g. JPEG), then the 'reduced' image should be smaller.
R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2))
Doesn't this always return 0?
If you want to reduce the size of your image, you can specify a different compression format when calling Image.Save().
The GIF file format is probably a good candidate, since it works best with contiguous runs of identically colored pixels (which happen more often when your color depth is low).
JPEG works great with photos, but you won't see significant results if you convert a 24-bit image into a 16-bit one and then compress it using JPEG, because of the way the algorithm works (you're better off saving the 24-bit picture as JPEG directly).
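For instance, a minimal sketch of picking the encoder at save time (file names are placeholders):

using System.Drawing.Imaging;

// The ImageFormat argument selects the encoder used by Save().
bitmapImage.Save("output.gif", ImageFormat.Gif);  // palette-based; suits flat colors
bitmapImage.Save("output.jpg", ImageFormat.Jpeg); // lossy; suits photos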
And as others have explained, your code won't reduce the size used by the Image object unless you actually copy the resulting data into another Bitmap object with a different PixelFormat, such as Format16bppRgb555.
Source: https://stackoverflow.com/questions/10140322/reducing-color-depth-in-an-image-is-not-reducin-the-file-size