I'm having trouble getting a rendered video's colors to match the source content's colors. I'm rendering images into a CGContext, converting the backing data into a CVPixelBuffer.
This is quite a confusing subject, and the Apple docs do not help all that much. I am going to describe the solution I have settled on, based on the BT.709 colorspace. I am sure someone will object on grounds of colorimetric correctness and the weirdness of the various video standards, but this is a complex topic.

First off, don't use kCVPixelFormatType_32ARGB as the pixel type. Always pass kCVPixelFormatType_32BGRA instead: BGRA is the native pixel layout on both macOS and iPhone hardware, so it is simply faster.

Next, when you create a CGBitmapContext to render into, use the BT.709 colorspace (kCGColorSpaceITUR_709). Also, don't render into a malloc() buffer; render directly into the CoreVideo pixel buffer by creating the bitmap context over the same memory. CoreGraphics will handle the colorspace and gamma conversion from whatever your input image is to BT.709 and its associated gamma.

Then you need to tell AVFoundation the colorspace of the pixel buffer. Do that by making an ICC profile copy and setting the kCVImageBufferICCProfileKey attachment on the CoreVideo pixel buffer. That takes care of your issues 1 and 2; with this approach you do not need the input images to be in that same colorspace.

Now, this is of course complex, and actual working source code (yes, actually working) is hard to come by. Here is a link to a small GitHub project, MetalBT709Decoder, that does these exact steps; the code is BSD licensed, so feel free to use it. Note specifically the H264Encoder class, which wraps all this horror up into a reusable module. You can find calling code in encode_h264.m, a little macOS command line util that encodes PNG to M4V. Three key Apple docs related to this subject are also linked: 1, 2, 3.
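For concreteness, here is a minimal sketch of the steps above, assuming a CGImage as input. This is not the project's exact code: the helper name makeBT709PixelBufferFromImage is mine, and error handling is trimmed to the essentials.

```objc
#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>
#import <CoreGraphics/CoreGraphics.h>

// Hypothetical helper: render a CGImage into a BGRA CVPixelBuffer through a
// BT.709 CGBitmapContext, then tag the buffer with an ICC profile so that
// AVFoundation knows its colorspace.
static CVPixelBufferRef makeBT709PixelBufferFromImage(CGImageRef image) {
  size_t width  = CGImageGetWidth(image);
  size_t height = CGImageGetHeight(image);

  // 1. BGRA, not ARGB: the native pixel layout on macOS and iPhone hardware.
  NSDictionary *attrs = @{
    (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
    (id)kCVPixelBufferCGImageCompatibilityKey : @YES,
  };
  CVPixelBufferRef pixelBuffer = NULL;
  CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_32BGRA,
                                     (__bridge CFDictionaryRef)attrs,
                                     &pixelBuffer);
  if (ret != kCVReturnSuccess) return NULL;

  // 2. Render directly into the pixel buffer's memory, not a malloc() buffer,
  //    using a BT.709 bitmap context created over that same memory.
  CVPixelBufferLockBaseAddress(pixelBuffer, 0);
  CGColorSpaceRef bt709 = CGColorSpaceCreateWithName(kCGColorSpaceITUR_709);
  CGContextRef context = CGBitmapContextCreate(
      CVPixelBufferGetBaseAddress(pixelBuffer),
      width, height, 8,
      CVPixelBufferGetBytesPerRow(pixelBuffer),
      bt709,
      // 32-bit little-endian with alpha first == BGRA byte order in memory.
      kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
  if (context != NULL) {
    // CoreGraphics converts from the image's own colorspace and gamma to
    // BT.709 and its associated gamma during this draw.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);
  }
  CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

  // 3. Tell AVFoundation the colorspace: attach an ICC profile copy.
  //    (On OS versions before 10.12 / iOS 10 the call was
  //    CGColorSpaceCopyICCProfile rather than CGColorSpaceCopyICCData.)
  CFDataRef iccData = CGColorSpaceCopyICCData(bt709);
  if (iccData != NULL) {
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferICCProfileKey,
                          iccData, kCVAttachmentMode_ShouldPropagate);
    CFRelease(iccData);
  }
  CGColorSpaceRelease(bt709);

  return pixelBuffer; // caller releases with CVPixelBufferRelease()
}
```

From there you append the tagged buffer to your writer as usual (for example via an AVAssetWriterInputPixelBufferAdaptor); the ICC attachment is what lets AVFoundation treat the pixels as BT.709 rather than guessing.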