I am writing an application that requires me to take a proprietary bitmap format (an MVTec Halcon HImage) and convert it into a System.Drawing.Bitmap in C#.
The only
Correct code:
public static void GetStride(int width, PixelFormat format, ref int stride, ref int bytesPerPixel)
{
    // The bits per pixel are also encoded in the upper byte of the PixelFormat value:
    //int bitsPerPixel = ((int)format & 0xff00) >> 8;
    int bitsPerPixel = System.Drawing.Image.GetPixelFormatSize(format);
    // Round up to whole bytes, then pad each scan line to a multiple of 4 bytes.
    bytesPerPixel = (bitsPerPixel + 7) / 8;
    stride = 4 * ((width * bytesPerPixel + 3) / 4);
}
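A quick usage sketch (the width of 111 is just an example value): for a 32-bit format the helper above gives 4 bytes per pixel and a stride that is already a multiple of 4.
int stride = 0, bytesPerPixel = 0;
GetStride(111, PixelFormat.Format32bppArgb, ref stride, ref bytesPerPixel);
// bytesPerPixel == 4, stride == 444 (111 * 4, already a multiple of 4)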
As Jake stated before, you calculate the stride by finding the bytes per pixel (2 for 16-bit, 4 for 32-bit) and multiplying it by the width. So if you have a width of 111 and a 32-bit image, you get 444, which is a multiple of 4.
However, let's say for a minute that you have a 24-bit image. 24 bits equal 3 bytes, so with a 111-pixel width you would have 333 as your stride. This is, obviously, not a multiple of 4, so you round up to 336 (the next highest multiple of 4). Even though you end up with a bit of extra space, it is not significant enough to make much of a difference in most applications.
Unfortunately, there is no way around this restriction (unless you always use 32-bit or 64-bit images, whose strides are always multiples of 4).
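As a quick worked check of that rounding (just a sketch of the arithmetic, not tied to any particular API):
int width = 111;
int bytesPerPixel = 24 / 8;              // 3 bytes per pixel for a 24-bit image
int unpadded = width * bytesPerPixel;    // 333, not a multiple of 4
int stride = (unpadded + 3) / 4 * 4;     // 336, rounded up to the next multiple of 4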
A much easier way is to just create the image with the (width, height, pixelFormat) constructor; then it takes care of the stride itself. After that, you can use LockBits to copy your image data into it, line by line, without bothering with the stride calculations yourself; you can literally just request the stride from the BitmapData object. For the actual copy operation, for each scan line, you increase the target pointer by the stride and the source pointer by your line data width.
Here's an example where the image data is in a byte array. If that's completely compact data, your input stride is normally just the image width multiplied by the number of bytes per pixel; if it's 8-bit paletted data, it's simply the width. If the image data was extracted from an image object, you should have stored the original stride from that extraction in exactly the same way, by getting it out of the BitmapData object.
// Requires: using System; using System.Drawing; using System.Drawing.Imaging; using System.Runtime.InteropServices;
/// <summary>
/// Creates a bitmap based on data, width, height, stride and pixel format.
/// </summary>
/// <param name="sourceData">Byte array of raw source data</param>
/// <param name="width">Width of the image</param>
/// <param name="height">Height of the image</param>
/// <param name="stride">Scanline length inside the data</param>
/// <param name="pixelFormat">Pixel format</param>
/// <param name="palette">Color palette</param>
/// <param name="defaultColor">Default color to fill in on the palette if the given colors don't fully fill it.</param>
/// <returns>The new image</returns>
public static Bitmap BuildImage(Byte[] sourceData, Int32 width, Int32 height, Int32 stride, PixelFormat pixelFormat, Color[] palette, Color? defaultColor)
{
    Bitmap newImage = new Bitmap(width, height, pixelFormat);
    BitmapData targetData = newImage.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.WriteOnly, newImage.PixelFormat);
    Int32 newDataWidth = ((Image.GetPixelFormatSize(pixelFormat) * width) + 7) / 8;
    // Compensate for possible negative stride on BMP format.
    Boolean isFlipped = stride < 0;
    stride = Math.Abs(stride);
    // Cache these to avoid unnecessary getter calls.
    Int32 targetStride = targetData.Stride;
    Int64 scan0 = targetData.Scan0.ToInt64();
    // Copy one scan line at a time: advance the source by the input stride
    // and the target by the stride GDI+ chose for the new bitmap.
    for (Int32 y = 0; y < height; y++)
        Marshal.Copy(sourceData, y * stride, new IntPtr(scan0 + y * targetStride), newDataWidth);
    newImage.UnlockBits(targetData);
    // Fix negative stride on BMP format.
    if (isFlipped)
        newImage.RotateFlip(RotateFlipType.Rotate180FlipX);
    // For indexed images, set the palette.
    if ((pixelFormat & PixelFormat.Indexed) != 0 && palette != null)
    {
        ColorPalette pal = newImage.Palette;
        for (Int32 i = 0; i < pal.Entries.Length; i++)
        {
            if (i < palette.Length)
                pal.Entries[i] = palette[i];
            else if (defaultColor.HasValue)
                pal.Entries[i] = defaultColor.Value;
            else
                break;
        }
        newImage.Palette = pal;
    }
    return newImage;
}
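As a usage sketch, a call for completely compact 8-bit grayscale data might look like the following. The dimensions, buffer and grayscale palette are just assumptions for the example; as noted above, the stride of compact 8-bit data is simply the width.
// Hypothetical input: 'rawBytes' holds width * height compact 8-bit pixels.
Int32 width = 640;
Int32 height = 480;
Byte[] rawBytes = new Byte[width * height];
// Build a simple grayscale palette for the indexed format.
Color[] grayPalette = new Color[256];
for (Int32 i = 0; i < 256; i++)
    grayPalette[i] = Color.FromArgb(i, i, i);
Bitmap grayImage = BuildImage(rawBytes, width, height, width, PixelFormat.Format8bppIndexed, grayPalette, Color.Black);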
Remember that stride is different from width. You can have an image that has 111 (8-bit) pixels per line, but each line is stored in memory as 112 bytes.
This is done to make efficient use of memory and, as @Ian said, because the data is stored in int32.
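You can see that padding directly by asking LockBits for the stride. A minimal sketch, assuming System.Drawing is available:
using (Bitmap bmp = new Bitmap(111, 10, PixelFormat.Format8bppIndexed))
{
    BitmapData data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadOnly, bmp.PixelFormat);
    Console.WriteLine(data.Stride);  // 112: 111 pixel bytes plus 1 byte of padding per line
    bmp.UnlockBits(data);
}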
This goes back to early CPU designs. The fastest way to crunch through the bits of the bitmap is by reading them 32-bits at a time, starting at the start of a scan line. That works best when the first byte of the scan line is aligned on a 32-bit address boundary. In other words, an address that's a multiple of 4. On early CPUs, having that first byte mis-aligned would cost extra CPU cycles to read two 32-bit words from RAM and shuffle the bytes to create the 32-bit value. Ensuring each scan line starts at an aligned address (automatic if the stride is a multiple of 4) avoids that.
This isn't a real concern anymore on modern CPUs; now alignment to the cache-line boundary is much more important. Nevertheless, the multiple-of-4 requirement for stride stuck around for appcompat reasons.
Btw, you can easily calculate the stride from the format and width with this:
int bitsPerPixel = ((int)format & 0xff00) >> 8;
int bytesPerPixel = (bitsPerPixel + 7) / 8;
int stride = 4 * ((width * bytesPerPixel + 3) / 4);
Because it's using an int32 to store each pixel, and sizeof(int32) = 4.
But don't worry: when the image is saved from memory to a file, it will use the most efficient storage possible. Internally it uses 24 bits per pixel (8 bits red, 8 green and 8 blue) and leaves the remaining 8 bits unused.
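To illustrate, assuming the format in question is a 32-bit RGB format such as Format32bppRgb:
// Each pixel occupies a full 32-bit slot, even though only 24 bits carry color data.
int bitsPerPixel = Image.GetPixelFormatSize(PixelFormat.Format32bppRgb);  // 32
int bytesPerPixel = bitsPerPixel / 8;                                     // 4, i.e. sizeof(int)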