Drawing CGImageRef in YUV

Submitted by 偶尔善良 on 2019-12-23 03:33:08

Question


I am using the code here to convert a CGImageRef into a CVPixelBufferRef on OS X:

Convert UIImage to CVImageBufferRef

However, I need the image to be drawn in YUV (kCVPixelFormatType_420YpCbCr8Planar) instead of RGB as it is now.

Is there any way to directly draw a CGImage in a YUV colorspace? And if not, does anyone have an example of the best way to convert the CVPixelBufferRef from RGB to YUV?

I understand the formulas for the conversion but doing it on the CPU is painfully slow.


Answer 1:


Figured it out using:

CVPixelBufferRef converted_frame;

CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_420YpCbCr8Planar, NULL, &converted_frame);

VTPixelTransferSessionTransferImage(_vtpt_ref, imageBuffer, converted_frame);

Where imageBuffer is the source CVPixelBufferRef and _vtpt_ref is a VTPixelTransferSessionRef created earlier with VTPixelTransferSessionCreate().
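A fuller sketch of this VideoToolbox route, since the snippet above omits the session setup (the `width`/`height` parameters and the error-handling shape here are assumptions, not part of the original answer):

```c
#include <VideoToolbox/VideoToolbox.h>

// Sketch: convert a source CVPixelBufferRef into a planar YUV buffer
// via VTPixelTransferSession. The caller supplies sourceBuffer, width,
// and height, and is responsible for releasing the returned buffer.
static CVPixelBufferRef CopyAsPlanarYUV(CVPixelBufferRef sourceBuffer,
                                        size_t width, size_t height)
{
    VTPixelTransferSessionRef session = NULL;
    if (VTPixelTransferSessionCreate(kCFAllocatorDefault, &session) != noErr) {
        return NULL;
    }

    CVPixelBufferRef converted = NULL;
    CVReturn cvRet = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                         kCVPixelFormatType_420YpCbCr8Planar,
                                         NULL, &converted);
    if (cvRet == kCVReturnSuccess) {
        // The session performs the pixel-format (and any colorspace)
        // conversion on the GPU/hardware path where available.
        if (VTPixelTransferSessionTransferImage(session, sourceBuffer,
                                                converted) != noErr) {
            CVPixelBufferRelease(converted);
            converted = NULL;
        }
    }

    VTPixelTransferSessionInvalidate(session);
    CFRelease(session);
    return converted;
}
```

In a real app you would keep the session alive across frames rather than creating one per conversion.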




Answer 2:


This is a late answer and not intended to get votes or anything. There is an Accelerate framework method to convert from RGB to YUV. The code is a bit complex, but it works, and it is included here since working examples are difficult to find. It is wrapped up as a class that extends CIFilter, but it would be easy to adapt if you want to do something different. The code contains no memory leaks and should perform well with repeated invocations.

The really useful thing to note is that the implementation creates a CVPixelBufferRef and then sets all the properties needed so that a later call to vImageCVImageFormat_CreateWithCVPixelBuffer() works properly. The code renders the RGB data into a CoreVideo buffer, then wraps the YUV result image and returns it like any other CoreImage filter.

//
//  CoreImageToYUVConverter.h
//
//  Make use of CoreImage to convert an RGB input image into YUV data where
//  UV is subsampled and Y has the same dimensions as the original data.

#import <Foundation/Foundation.h>

#import <CoreImage/CoreImage.h>

@interface CoreImageToYUVConverter : CIFilter

@property (nonatomic, retain) CIImage *inputImage;

// If there is an error while processing the filter, this value is
// set to non-nil. Otherwise it is set to nil.

@property (nonatomic, retain) NSError *error;

// Dimensions of the output image. Note that Y is 2x
// the dimensions of the U and V buffers, so the Y image
// must have even width and height.

@property (nonatomic, assign) CGSize size;

@end

// CoreImageToYUVConverter.m

#import "CoreImageToYUVConverter.h"

@import Accelerate;

@interface CoreImageToYUVConverter ()

@property (nonatomic, retain) CIContext *coreImageContext;

@property (nonatomic, copy) NSNumber *inputWidth;

@property (nonatomic, copy) NSNumber *inputAspectRatio;

@property (nonatomic, assign) CVPixelBufferRef pixelBuffer;

@end

@implementation CoreImageToYUVConverter

@synthesize coreImageContext = m_coreImageContext;
@synthesize pixelBuffer = m_pixelBuffer;


- (void) dealloc
{
  // Setting to NULL releases the held CoreVideo buffer via the setter below
  self.pixelBuffer = NULL;
}

// Setter for self.pixelBuffer; this logic holds a retain on the CoreVideo buffer

- (void) setPixelBuffer:(CVImageBufferRef)cvBufferRef
{
  if (cvBufferRef) {
    CFRetain(cvBufferRef);
  }
  if (self->m_pixelBuffer) {
    CFRelease(self->m_pixelBuffer);
  }
  self->m_pixelBuffer = cvBufferRef;
}

- (CIImage *)outputImage
{
  self.error = nil;

  NSParameterAssert(self.inputImage != nil && [self.inputImage isKindOfClass:[CIImage class]]);

  CIImage *inputImage = self.inputImage;

  [self renderIntoYUVBuffer:inputImage];

  CIImage *outCIImage = [CIImage imageWithCVImageBuffer:self.pixelBuffer];

  return outCIImage;
}

- (NSDictionary *)customAttributes
{
  return @{
           kCIInputWidthKey : @{kCIAttributeDefault : @(0), kCIAttributeType : kCIAttributeTypeScalar},
           kCIInputAspectRatioKey : @{kCIAttributeDefault : @(0), kCIAttributeType : kCIAttributeTypeScalar},
           };
}

- (void) renderIntoYUVBuffer:(CIImage*)inputImage
{
  CGRect imageExtent = inputImage.extent;
  int width = (int) imageExtent.size.width;
  int height = (int) imageExtent.size.height;

  // Extract a CGImageRef from CIImage, this will flatten pixels possibly from
  // multiple steps of a CoreImage chain.

  if (self.coreImageContext == nil) {
    CIContext *context = [CIContext contextWithOptions:nil];
    NSAssert(context != nil, @"CIContext contextWithOptions failed");
    self.coreImageContext = context;
  }

  CGImageRef inCGImageRef = [self.coreImageContext createCGImage:inputImage fromRect:imageExtent];

  NSDictionary *pixelAttributes = @{
                                    (__bridge NSString*)kCVPixelBufferIOSurfacePropertiesKey : @{},
                                    (__bridge NSString*)kCVPixelBufferOpenGLESCompatibilityKey : @(YES),
                                    (__bridge NSString*)kCVPixelBufferCGImageCompatibilityKey : @(YES),
                                    (__bridge NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey : @(YES),
                                    };

  CVPixelBufferRef cvPixelBuffer = NULL;

  uint32_t yuvImageFormatType;
  //yuvImageFormatType = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange; // luma (0, 255)
  yuvImageFormatType = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange; // luma (16, 235)

  CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                        width,
                                        height,
                                        yuvImageFormatType,
                                        (__bridge CFDictionaryRef)(pixelAttributes),
                                        &cvPixelBuffer);

  NSAssert(result == kCVReturnSuccess, @"CVPixelBufferCreate failed");

  // FIXME: UHDTV : HEVC uses kCGColorSpaceITUR_2020

  CGColorSpaceRef yuvColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceITUR_709);

  {
    // Attach colorspace info to pixel buffer

    //CFDataRef colorProfileData = CGColorSpaceCopyICCProfile(yuvColorSpace); // deprecated
    CFDataRef colorProfileData = CGColorSpaceCopyICCData(yuvColorSpace);

    NSDictionary *pbAttachments = @{
      (__bridge NSString*)kCVImageBufferYCbCrMatrixKey: (__bridge NSString*)kCVImageBufferYCbCrMatrix_ITU_R_709_2,
      (__bridge NSString*)kCVImageBufferColorPrimariesKey: (__bridge NSString*)kCVImageBufferColorPrimaries_ITU_R_709_2,
      (__bridge NSString*)kCVImageBufferTransferFunctionKey: (__bridge NSString*)kCVImageBufferTransferFunction_ITU_R_709_2,

      (__bridge NSString*)kCVImageBufferICCProfileKey: (__bridge NSData *)colorProfileData,

      (__bridge NSString*)kCVImageBufferChromaLocationTopFieldKey: (__bridge NSString*)kCVImageBufferChromaLocation_Center,
      (__bridge NSString*)kCVImageBufferAlphaChannelIsOpaque: (id)kCFBooleanTrue,
    };

    CVBufferRef pixelBuffer = cvPixelBuffer;

    CVBufferSetAttachments(pixelBuffer, (__bridge CFDictionaryRef)pbAttachments, kCVAttachmentMode_ShouldPropagate);

    // Drop the NSDictionary ref so that, after the release below, the only
    // remaining retain on colorProfileData is the one held by the attachments.
    pbAttachments = nil;
    CFRelease(colorProfileData);
  }

  // Note that this setter will implicitly release an earlier held ref to a pixel buffer
  self.pixelBuffer = cvPixelBuffer;

  vImageCVImageFormatRef cvImgFormatRef;

  cvImgFormatRef = vImageCVImageFormat_CreateWithCVPixelBuffer(cvPixelBuffer);

  // vImage_CGImageFormat for input RGB

  // FIXME: Need to select sRGB if running under MacOSX
  //CGColorSpaceRef defaultColorspaceRef = CGColorSpaceCreateDeviceRGB();

  // Default to sRGB on both MacOSX and iOS
  CGColorSpaceRef defaultColorspaceRef = NULL;

  vImage_CGImageFormat rgbCGImgFormat = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 32,
    .bitmapInfo = (CGBitmapInfo)(kCGBitmapByteOrder32Host | kCGImageAlphaNoneSkipFirst),
    .colorSpace = defaultColorspaceRef,
  };

  // Copy input CoreGraphic image into a CoreVideo buffer

  vImage_Buffer sourceBuffer;

  const CGFloat backgroundColor = 0.0f;

  vImage_Flags flags = 0;
  flags = kvImagePrintDiagnosticsToConsole;

  vImage_Error err;

  err = vImageBuffer_InitWithCGImage(&sourceBuffer, &rgbCGImgFormat, &backgroundColor, inCGImageRef, flags);

  NSAssert(err == kvImageNoError, @"vImageBuffer_InitWithCGImage failed");

  err = vImageBuffer_CopyToCVPixelBuffer(&sourceBuffer, &rgbCGImgFormat, cvPixelBuffer, cvImgFormatRef, &backgroundColor, flags);

  NSAssert(err == kvImageNoError, @"error in vImageBuffer_CopyToCVPixelBuffer %d", (int)err);

  // Manually free() the allocated buffer

  free(sourceBuffer.data);

  vImageCVImageFormat_Release(cvImgFormatRef);
  CVPixelBufferRelease(cvPixelBuffer);
  CGColorSpaceRelease(yuvColorSpace);
  CGColorSpaceRelease(defaultColorspaceRef);
  CGImageRelease(inCGImageRef);
}

@end
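A minimal usage sketch of the class above (hypothetical: `cgImage` and the 256x256 size stand in for your own input):

```objc
CoreImageToYUVConverter *converter = [[CoreImageToYUVConverter alloc] init];
converter.size = CGSizeMake(256, 256); // must be even in both dimensions
converter.inputImage = [CIImage imageWithCGImage:cgImage];

// outputImage renders into the internal YUV CVPixelBufferRef and wraps it
CIImage *yuvImage = converter.outputImage;
if (converter.error != nil) {
  NSLog(@"conversion failed: %@", converter.error);
}
```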


Source: https://stackoverflow.com/questions/34672459/drawing-cgimageref-in-yuv
