Question
I am new to Python, and trying to use PIL to perform a parsing task I need for an Arduino project. This question pertains to the Image.convert() method and the options for color palettes, dithering, etc.
I've got some hardware capable of displaying images with only 16 colors at a time (but they can be specified as RGB triplets). So, I'd like to automate the task of taking an arbitrary true-color PNG image, choosing an "optimum" 16-color palette to represent it, and converting the image to a palettized one containing ONLY 16 colors.
I want to use dithering. The problem is, the Image.convert() method seems to be acting a bit funky. Its arguments aren't completely documented (see the PIL documentation for Image.convert()), so I don't know if it's my fault or if the method is buggy.
A simple version of my code follows:
import Image
MyImageTrueColor = Image.new('RGB', (100, 100)) # or whatever dimension...
# I paste some images from several other PNG files in using MyImageTrueColor.paste()
MyImageDithered = MyImageTrueColor.convert(mode='P',
colors=16,
dither=1
)
Based on some searches I did (e.g.: How to reduce color palette with PIL) I would think this method should do what I want, but no luck. It dithers the image, but yields an image with more than 16 colors.
Just to make sure, I removed the "dither" argument. Same output.
I re-added the "dither=1" argument and threw in the Image.ADAPTIVE argument (as shown in the link above) just to see what happened. This resulted in an image that contained 16 colors, but NO dithering.
Am I missing something here? Is PIL buggy? The solution I came up with was to perform 2 steps, but that seems sloppy and unnecessary. I want to figure out how to do this right :-) For completeness, here's the version of my code that yields the correct result - but it does it in a sloppy way. (The first step results in a dithered image with >16 colors, and the second results in an image containing only 16 colors.)
MyImage_intermediate = MyImageTrueColor.convert(mode='P',
colors=16
)
MyImageDithered = MyImage_intermediate.convert(mode='P',
colors=16,
dither=1,
palette=Image.ADAPTIVE
)
Thanks!
Answer 1:
Well, you're not calling things properly, so it shouldn't be working… but even if you were calling things right, I'm not sure it would work.
First, the "official" free version of the PIL Handbook is both incomplete and out of date; the draft version at http://effbot.org/imagingbook/image.htm is less incomplete and out of date.
im.convert(“P”, **options) ⇒ image
Same, but provides better control when converting an “RGB” image to an 8-bit palette image. Available options are:
dither=. Controls dithering. The default is FLOYDSTEINBERG, which distributes errors to neighboring pixels. To disable dithering, use NONE.
palette=. Controls palette generation. The default is WEB, which is the standard 216-color “web palette”. To use an optimized palette, use ADAPTIVE.
colors=. Controls the number of colors used for the palette when palette is ADAPTIVE. Defaults to the maximum value, 256 colors.
So, first, you can't use colors without ADAPTIVE, for the obvious reason that the only other choice is WEB, which only handles a fixed 216-color palette.
And second, you can't pass 1 to dither. That might work if it happened to be the value of FLOYDSTEINBERG, but that's 3. So, you're passing an undocumented value; who knows what that will do? Especially since, skimming the dither constants in Image.py, the only one whose value is 1 is ORDERED, which the source marks as not yet implemented.
So, you could try changing it to dither=Image.FLOYDSTEINBERG (along with palette=Image.ADAPTIVE) and see if that makes a difference.
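Concretely, reusing the names from the question, that suggestion would look something like this:
MyImageDithered = MyImageTrueColor.convert(mode='P',
                                           dither=Image.FLOYDSTEINBERG,
                                           palette=Image.ADAPTIVE,
                                           colors=16)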
But, looking at the code, it looks like this isn't going to do any good:
if mode == "P" and palette == ADAPTIVE:
im = self.im.quantize(colors)
return self._new(im)
This happens before we get to the dithering code. So it's exactly the same as calling the (now deprecated/private) method quantize.
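In other words, with palette=Image.ADAPTIVE the call effectively reduces to something like this (no dithering happens on this path):
MyImageDithered = MyImageTrueColor.quantize(16)  # pick 16 colors adaptively, but never dither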
Multiple threads suggest that the high-level convert function was only intended to expose "dither to web palette" or "map to nearest N colors". That seems to have changed slightly with 1.1.6 and beyond, but the documentation and implementation are both still incomplete. At http://comments.gmane.org/gmane.comp.python.image/2947 one of the devs recommends reading the PIL/Image.py source.
So, it looks like that's what you need to do. Whatever Image.convert does in Image.WEB mode, you want to do that, but with the palette that would be generated by Image.quantize(colors), not the web palette.
Of course most of the guts of that happens in the C code (under self.im.quantize, self.im.convert, etc.), but you may be able to do something like this pseudocode:
dummy = img.convert(mode='P', palette=Image.ADAPTIVE, colors=16)
intermediate = img.copy()
intermediate.setpalette(dummy.palette)  # pseudocode: PIL's actual method is putpalette(), which takes a flat list of R,G,B values
dithered = intermediate._new(intermediate.im.convert('P', Image.FLOYDSTEINBERG))
Then again, you may not. You may need to look at the C headers or even source to find out. Or maybe ask on the PIL mailing list.
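For what it's worth, the private C-level convert appears to accept an optional palette image as a third argument, which would give a more direct route. The sketch below relies on that assumption (the im.convert(mode, dither, palette.im) signature is undocumented, and the helper name is made up for illustration):
import Image

def dither_to_adaptive_palette(img, num_colors=16):
    # Step 1: let PIL choose an adaptive palette; this step does no dithering.
    palette_img = img.convert('P', palette=Image.ADAPTIVE, colors=num_colors)
    # Step 2: remap the original pixels onto that palette at the C level
    # with the dither flag on (Floyd-Steinberg error diffusion).
    img.load()          # make sure img.im exists
    palette_img.load()  # make sure palette_img.im exists
    im = img.im.convert('P', 1, palette_img.im)  # assumed signature: (mode, dither, palette image)
    return img._new(im)

MyImageDithered = dither_to_adaptive_palette(MyImageTrueColor, 16)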
PS, if you're not familiar with PIL's guts, img.im is the C imaging object underneath the PIL Image object img. From my past experience, this isn't clear the first 3 times you skim through PIL code, and then suddenly everything makes a lot more sense.
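As a small illustration of that (the file name here is just a placeholder): for an image opened from disk, img.im isn't populated until the pixel data has been decoded, which is why the sketch above calls load() before touching it.
img = Image.open('example.png')  # hypothetical file
img.load()                       # decode the pixels; img.im is now the C-level ImagingCore object
print(type(img.im))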
Source: https://stackoverflow.com/questions/12645492/pil-dithering-desired-but-restricting-color-palette-causes-problems