Question
I have an OpenGL texture with a UV map on it. I've read about using the alpha channel to store some other value, which saves loading an extra map from somewhere. For example, you could store specular info (shininess) or an emission map in the alpha, since you only need a float for that and the alpha isn't otherwise being used.
So I tried it. Writing the shader isn't the problem; I have all that part worked out. The problem is just getting all 4 channels into the texture the way I want.
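For context, a minimal sketch of the kind of fragment shader this packing implies (not the asker's actual shader; names like uBaseMap and vTexCoord are placeholders), with the alpha read as an emission amount:
// Rough sketch only: GLSL ES fragment shader embedded as an Objective-C string,
// reading rgb as the base color and a as the emission amount.
static NSString * const kPackedFragShader =
    @"precision mediump float;\n"
    @"varying vec2 vTexCoord;\n"
    @"uniform sampler2D uBaseMap;\n"
    @"void main() {\n"
    @"    vec4 texel = texture2D(uBaseMap, vTexCoord);\n"
    @"    vec3 emissive = texel.rgb * texel.a; // alpha drives emission\n"
    @"    gl_FragColor = vec4(texel.rgb + emissive, 1.0);\n"
    @"}\n";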
I have all the maps, so in Photoshop I put the base map in the RGB channels and the emission map in the alpha. But when you save as PNG, the alpha either doesn't save (if you add it as a new channel) or it trashes the RGB by premultiplying the transparency into the RGB (if you apply the map as a mask).
Apparently PNG files support transparency but not alpha channels per se. So there doesn't appear to be a way to control all 4 channels.
But I have read about doing this. So what format can I save it in from Photoshop that I can load with my image loader on the iPhone?
NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:type];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
Does this method accept other file formats, like TIFF, which would allow me to control all 4 channels?
I could use texturetool to make a PVR, but from the docs it appears that it also takes a PNG as input.
EDIT:
First, to be clear, this is on the iPhone.
It might be Photoshop's fault. Like I said, there are two ways I can find to set up the document in my version of Photoshop (CC 14.2, Mac). One is to manually add a new channel and paste the map in there; it shows up as a red overlay. The second is to add a mask, option-click it, and paste the alpha in there. In that case it shows the alpha as transparency, with the checkerboard in the alpha-zero areas. When I save as PNG, the alpha option greys out.
And when I load the PNG back into Photoshop it appears to be premultiplied. I can't get back to my full RGB data in Photoshop.
Is there a different tool I can use to merge the two maps into a PNG that will store it as PNG-32?
TIFF won't work because it doesn't store alpha either. Maybe I was thinking of TGA.
I also noticed this in my loader...
GLuint width  = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc(height * width * 4);

// Core Graphics context backed by imageData; note the premultiplied-alpha format.
CGContextRef thisContext = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
if (flipImage)
{
    CGContextTranslateCTM(thisContext, 0, height);
    CGContextScaleCTM(thisContext, 1.0, -1.0);
}
CGColorSpaceRelease(colorSpace);
CGContextClearRect(thisContext, CGRectMake(0, 0, width, height));
CGContextDrawImage(thisContext, CGRectMake(0, 0, width, height), image.CGImage);

// Bind the target texture before uploading the pixel data to it.
glBindTexture(GL_TEXTURE_2D, textureInfo[texIndex].texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
When I create that context the option is kCGImageAlphaPremultipliedLast.
Maybe I do need to try the GLKit loader, but it appears that my PNG is already premultiplied.
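For reference, a rough sketch of what the GLKit path might look like (not the asker's code; the file name is a placeholder, and if the PNG on disk has already been premultiplied this option cannot undo it):
#import <GLKit/GLKit.h>

// Sketch: load a PNG with GLKTextureLoader instead of the manual Core Graphics path.
NSString *pngPath = [[NSBundle mainBundle] pathForResource:@"baseWithEmission" ofType:@"png"];
NSDictionary *options = @{ GLKTextureLoaderApplyPremultiplication : @NO,   // leave the channels alone
                           GLKTextureLoaderOriginBottomLeft       : @YES }; // flip to GL's origin
NSError *error = nil;
GLKTextureInfo *tex = [GLKTextureLoader textureWithContentsOfFile:pngPath options:options error:&error];
if (tex) {
    glBindTexture(tex.target, tex.name);
} else {
    NSLog(@"Texture load failed: %@", error);
}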
Answer 1:
It is possible to create a PNG with an alpha channel, but you will not be able to read that PNG using the built-in iOS APIs without premultiplication. The core issue is that Core Graphics only supports premultiplied alpha, for performance reasons. You also have to be careful to disable Xcode's optimization of PNGs attached to the project, because it does the premultiplication at compile time.

What you could do is compile and link in your own copy of libpng after turning off the Xcode PNG processing, and then read the file directly with libpng at the C level. But, honestly, this is kind of a waste of time.

Just save one image with the RGB values and another grayscale image with the alpha values as 0-255 gray levels. Those grayscale values can mean anything you want, and you will not have to worry about premultiplication messing things up. Your OpenGL code will just need to read from multiple textures, which is not a big deal.
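A rough sketch of the two-texture setup the answer suggests (illustrative only; loadTexture is a hypothetical helper standing in for whatever loader you already use, and the uniform names and texture units are placeholders):
// Two separate maps: an RGB base map and a grayscale emission map.
GLuint baseTex     = loadTexture(@"base_rgb.png");      // RGB base map
GLuint emissionTex = loadTexture(@"emission_gray.png"); // grayscale emission map

glUseProgram(program);

// Point each sampler uniform at its texture unit.
glUniform1i(glGetUniformLocation(program, "uBaseMap"),     0);
glUniform1i(glGetUniformLocation(program, "uEmissionMap"), 1);

// Bind the base color map to unit 0 and the emission map to unit 1.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, baseTex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, emissionTex);

// In the fragment shader the emission amount is then
// texture2D(uEmissionMap, vTexCoord).r instead of the packed alpha.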
Source: https://stackoverflow.com/questions/21752582/how-to-encode-emission-or-specular-info-in-the-alpha-of-a-open-gl-texture