How do I do high-quality scaling of an image?

Backend | Unresolved | 11 answers | 1000 views
温柔的废话 2021-01-30 15:19

I'm writing some code to scale a 32-bit RGBA image in C/C++. I have written a few attempts that have been somewhat successful, but they're slow and, most importantly, the quality is poor.

11 Answers
  • 2021-01-30 15:35

    Now that I see your original image, I think that OpenGL is using a nearest neighbor algorithm. Not only is it the simplest possible way to resize, but it's also the quickest. The only downside is that it looks very rough if there's any detail in your original image.

    The idea is to take evenly spaced samples from your original image; in your case, 55 out of 256, or one out of every 4.6545 pixels. Just round that number to pick which source pixel to use.
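
    The sampling described above can be sketched as follows. This is a minimal illustration, not library code: the buffer layout (tightly packed, one `uint32_t` per RGBA pixel) and the function name are assumptions.

    ```c
    #include <stdint.h>

    /* Nearest-neighbor resize of a packed 32-bit RGBA image.
       Sketch only: assumes tightly packed rows, one uint32_t per pixel. */
    static void resize_nearest(const uint32_t *src, int sw, int sh,
                               uint32_t *dst, int dw, int dh)
    {
        for (int y = 0; y < dh; y++) {
            /* Map the destination row to the closest source row. */
            int sy = (int)((y + 0.5) * sh / dh);
            if (sy >= sh) sy = sh - 1;
            for (int x = 0; x < dw; x++) {
                int sx = (int)((x + 0.5) * sw / dw);
                if (sx >= sw) sx = sw - 1;
                dst[y * dw + x] = src[sy * sw + sx];
            }
        }
    }
    ```

    Since each output pixel is a straight copy of one input pixel, this is as fast as a resize can get, but it produces the rough, blocky look described above whenever the image has fine detail.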

  • 2021-01-30 15:37

    A generic article from our beloved host: Better Image Resizing, discussing the relative qualities of various algorithms (and it links to another CodeProject article).

  • 2021-01-30 15:38

    I've found the wxWidgets implementation fairly straightforward to modify as required. It is all C++ so no problems with portability there. The only difference is that their implementation works with unsigned char arrays (which I find to be the easiest way to deal with images anyhow) with a byte order of RGB and the alpha component in a separate array.

    If you refer to the "src/common/image.cpp" file in the wxWidgets source tree there is a down-sampler function which uses a box sampling method "wxImage::ResampleBox" and an up-scaler function called "wxImage::ResampleBicubic".
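
    The box-sampling idea behind such a down-sampler can be sketched like this. This is an independent illustration, not wxWidgets code; it works on a single 8-bit channel and assumes the destination is an exact integer fraction of the source size.

    ```c
    #include <stdint.h>

    /* Box-filter downsample of one 8-bit channel: each output pixel is
       the average of a factor-by-factor block of source pixels.
       Sketch only: assumes sw and sh are exact multiples of factor. */
    static void box_downsample(const uint8_t *src, int sw, int sh,
                               uint8_t *dst, int factor)
    {
        int dw = sw / factor, dh = sh / factor;
        for (int y = 0; y < dh; y++) {
            for (int x = 0; x < dw; x++) {
                unsigned sum = 0;
                for (int by = 0; by < factor; by++)
                    for (int bx = 0; bx < factor; bx++)
                        sum += src[(y * factor + by) * sw
                                   + (x * factor + bx)];
                dst[y * dw + x] = (uint8_t)(sum / (factor * factor));
            }
        }
    }
    ```

    Averaging every source pixel that falls inside the destination pixel is what makes box sampling a good choice for shrinking: no input pixel is skipped, so detail is blended rather than dropped.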

  • 2021-01-30 15:45

    Is it possible that OpenGL is doing the scaling in the vector domain? If so, there is no way that any pixel-based scaling is going to be near it in quality. This is the big advantage of vector based images.

    The bicubic algorithm can be tuned for sharpness vs. artifacts - I'm trying to find a link, I'll edit it in when I do.

    Edit: It was the Mitchell-Netravali work that I was thinking of, which is referenced at the bottom of this link:

    http://www.cg.tuwien.ac.at/~theussl/DA/node11.html

    You might also look into Lanczos resampling as an alternative to bicubic.
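
    For reference, the two kernels mentioned above are short to write down. The Mitchell-Netravali cubic is parameterised by B and C (the tuning knob for sharpness vs. artifacts); B = C = 1/3 is the trade-off the authors recommend. This is just the weight functions, under the usual definitions; the resampling loop that applies them is not shown.

    ```c
    #include <math.h>

    /* Mitchell-Netravali cubic kernel with parameters B and C. */
    static double mitchell(double x, double B, double C)
    {
        x = fabs(x);
        if (x < 1.0)
            return ((12 - 9*B - 6*C) * x*x*x
                  + (-18 + 12*B + 6*C) * x*x
                  + (6 - 2*B)) / 6.0;
        if (x < 2.0)
            return ((-B - 6*C) * x*x*x
                  + (6*B + 30*C) * x*x
                  + (-12*B - 48*C) * x
                  + (8*B + 24*C)) / 6.0;
        return 0.0;
    }

    /* Lanczos kernel with a lobes (a = 3 is a common choice). */
    static double lanczos(double x, int a)
    {
        const double PI = 3.14159265358979323846;
        if (x == 0.0) return 1.0;
        if (fabs(x) >= a) return 0.0;
        double px = PI * x;
        return a * sin(px) * sin(px / a) / (px * px);
    }
    ```

    Larger C (or more Lanczos lobes) gives a sharper result at the cost of more ringing near hard edges, which is exactly the sharpness-vs-artifacts tuning discussed above.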

  • 2021-01-30 15:45

    Try using the Adobe Generic Image Library ( http://opensource.adobe.com/wiki/display/gil/Downloads ) if you want something ready and not only an algorithm.


    Extract from: http://www.catenary.com/howto/enlarge.html#c

    Enlarge or Reduce: this C source code requires the Victor Image Processing Library for 32-bit Windows v5.3 or higher.

    
    int enlarge_or_reduce(imgdes *image1)
    {
       imgdes timage;
       int dx, dy, rcode, pct = 83; // 83% of the original size

       // Allocate space for the new image
       dx = (int)(((long)(image1->endx - image1->stx + 1)) * pct / 100);
       dy = (int)(((long)(image1->endy - image1->sty + 1)) * pct / 100);
       if((rcode = allocimage(&timage, dx, dy,
             image1->bmh->biBitCount)) == NO_ERROR) {
          // Resize image1 into timage
          if((rcode = resizeex(image1, &timage, 1)) == NO_ERROR) {
             // Success, free the source image
             freeimage(image1);
             // Assign timage to image1
             copyimgdes(&timage, image1);
          }
          else // Error in resizing, release timage memory
             freeimage(&timage);
       }
       return(rcode);
    }
    

    This example resizes an image area and replaces the original image with the new image.
