How to turn a CVPixelBuffer into a UIImage?

梦毁少年i 2020-11-28 21:55

I'm having some problems getting a UIImage from a CVPixelBuffer. This is what I am trying:

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);


        
6 Answers
  • 2020-11-28 22:17

    A modern solution would be

    let image = UIImage(ciImage: CIImage(cvPixelBuffer: YOUR_BUFFER))
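
    Note that a UIImage created this way is backed by the CIImage recipe rather than by pixels (its cgImage property is nil), so code paths that expect a CGImage may not handle it. A minimal sketch of rendering it eagerly instead (pixelBackedImage is an illustrative name):

    import UIKit
    import CoreImage

    // Sketch: render the CIImage into a CGImage so the resulting UIImage
    // is pixel-backed.
    func pixelBackedImage(from pixelBuffer: CVPixelBuffer) -> UIImage? {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()  // expensive to create; cache and reuse
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }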
    
  • 2020-11-28 22:18

    Try this one in Swift.

    Swift 4.2:

    import VideoToolbox
    
    extension UIImage {
        public convenience init?(pixelBuffer: CVPixelBuffer) {
            var cgImage: CGImage?
            VTCreateCGImageFromCVPixelBuffer(pixelBuffer, nil, &cgImage)
    
            guard let cgImage = cgImage else {
                return nil
            }
    
            self.init(cgImage: cgImage)
        }
    }
    

    Swift 5:

    import VideoToolbox
    
    extension UIImage {
        public convenience init?(pixelBuffer: CVPixelBuffer) {
            var cgImage: CGImage?
            VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
    
            guard let cgImage = cgImage else {
                return nil
            } 
    
            self.init(cgImage: cgImage)
        }
    }
    

    Note: This only works for RGB pixel buffers, not for grayscale.
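
    A quick usage sketch (somePixelBuffer and imageView are placeholder names):

    // somePixelBuffer: a CVPixelBuffer obtained from, e.g., a capture output
    if let image = UIImage(pixelBuffer: somePixelBuffer) {
        imageView.image = image
    }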

  • 2020-11-28 22:19

    First of all, the obvious stuff that doesn't relate directly to your question: AVCaptureVideoPreviewLayer is the cheapest way to pipe video from either of the cameras into an independent view, if that's where the data is coming from and you have no immediate plans to modify it. You don't have to do any pushing yourself; the preview layer is directly connected to the AVCaptureSession and updates itself.

    I have to admit to lacking confidence about the central question. There's a semantic difference between a CIImage and the other two types of image: a CIImage is a recipe for an image and is not necessarily backed by pixels. It can be something like "take the pixels from here, transform like this, apply this filter, transform like this, merge with this other image, apply this filter". The system doesn't know what a CIImage looks like until you choose to render it. It also doesn't inherently know the appropriate bounds in which to rasterise it.

    UIImage purports merely to wrap a CIImage. It doesn't convert it to pixels. Presumably UIImageView should achieve that, but if so then I can't seem to find where you'd supply the appropriate output rectangle.

    I've had success just dodging around the issue with:

    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
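    // (A CIContext is expensive to create; cache one and reuse it across
    // frames rather than allocating a new context per conversion.)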
    CGImageRef videoImage = [temporaryContext
                       createCGImage:ciImage
                       fromRect:CGRectMake(0, 0, 
                              CVPixelBufferGetWidth(pixelBuffer),
                              CVPixelBufferGetHeight(pixelBuffer))];
    
    UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
    CGImageRelease(videoImage);
    

    Which gives an obvious opportunity to specify the output rectangle. I'm sure there's a route through without using a CGImage as an intermediary, so please don't assume this solution is best practice.

  • 2020-11-28 22:20

    Another way to get a UIImage. It performs roughly 10 times faster, at least in my case:

    int w = (int)CVPixelBufferGetWidth(pixelBuffer);
    int h = (int)CVPixelBufferGetHeight(pixelBuffer);
    int r = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);
    int bytesPerPixel = r / w;
    
    // The base address is only valid while the buffer is locked.
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);
    
    UIGraphicsBeginImageContext(CGSizeMake(w, h));
    
    CGContextRef c = UIGraphicsGetCurrentContext();
    
    unsigned char *data = CGBitmapContextGetData(c);
    if (data != NULL) {
        // Assumes source and destination share the same 4-bytes-per-pixel
        // layout and that neither has row padding.
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int offset = bytesPerPixel * ((w * y) + x);
                data[offset]     = buffer[offset];     // R
                data[offset + 1] = buffer[offset + 1]; // G
                data[offset + 2] = buffer[offset + 2]; // B
                data[offset + 3] = buffer[offset + 3]; // A
            }
        }
    }
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    
    UIGraphicsEndImageContext();
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    
  • 2020-11-28 22:27

    Unless your image data is in some different format that requires swizzling or conversion, I would recommend not incrementing through anything... just smack the data into your context's memory area with memcpy, as in:

    //not here... unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);
    
    UIGraphicsBeginImageContext(CGSizeMake(w, h));
    
    CGContextRef c = UIGraphicsGetCurrentContext();
    
    void *ctxData = CGBitmapContextGetData(c);
    
    // The pixel buffer MUST be locked before its base address is touched!
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *pxData = CVPixelBufferGetBaseAddress(pixelBuffer);
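    // Note: 4 * w * h assumes 32-bit pixels with no row padding; if
    // CVPixelBufferGetBytesPerRow() != 4 * w, copy row by row instead
    // (see the sketch below).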
    memcpy(ctxData, pxData, 4 * w * h);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    
    ... and so on...
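
    If the buffer does have row padding (bytes-per-row greater than 4 * w), a minimal Swift sketch of the row-by-row variant, assuming you already have a bitmap CGContext to copy into (copyRows is an illustrative name):

    import UIKit
    import CoreVideo

    // Sketch: copy row by row so differing bytes-per-row values on the
    // source buffer and the destination context are handled safely.
    func copyRows(from pixelBuffer: CVPixelBuffer, into ctx: CGContext) {
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

        guard let src = CVPixelBufferGetBaseAddress(pixelBuffer),
              let dst = ctx.data else { return }

        let srcBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
        let dstBytesPerRow = ctx.bytesPerRow
        let rowBytes = min(srcBytesPerRow, dstBytesPerRow)

        for y in 0..<CVPixelBufferGetHeight(pixelBuffer) {
            memcpy(dst + y * dstBytesPerRow, src + y * srcBytesPerRow, rowBytes)
        }
    }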
    
  • 2020-11-28 22:43

    The previous methods led to a CG Raster Data leak for me. This method of conversion did not leak:

    @autoreleasepool {
    
        CGImageRef cgImage = NULL;
        OSStatus res = CreateCGImageFromCVPixelBuffer(pixelBuffer, &cgImage);
        if (res == noErr) {
            UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationUp];
            // ... use image here ...
        }
        CGImageRelease(cgImage);
    }
    
    
        static OSStatus CreateCGImageFromCVPixelBuffer(CVPixelBufferRef pixelBuffer, CGImageRef *imageOut)
        {
            OSStatus err = noErr;
            OSType sourcePixelFormat;
            size_t width, height, sourceRowBytes;
            void *sourceBaseAddr = NULL;
            CGBitmapInfo bitmapInfo;
            CGColorSpaceRef colorspace = NULL;
            CGDataProviderRef provider = NULL;
            CGImageRef image = NULL;
    
            sourcePixelFormat = CVPixelBufferGetPixelFormatType( pixelBuffer );
            if ( kCVPixelFormatType_32ARGB == sourcePixelFormat )
                bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst;
            else if ( kCVPixelFormatType_32BGRA == sourcePixelFormat )
                bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
            else
                return -95014; // only uncompressed pixel formats
    
            sourceRowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
            width = CVPixelBufferGetWidth( pixelBuffer );
            height = CVPixelBufferGetHeight( pixelBuffer );
    
            CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
            sourceBaseAddr = CVPixelBufferGetBaseAddress( pixelBuffer );
    
            colorspace = CGColorSpaceCreateDeviceRGB();
    
            CVPixelBufferRetain( pixelBuffer );
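            // The data provider wraps the buffer's memory without copying it;
            // the retain above plus the ReleaseCVPixelBuffer callback below
            // keep the buffer alive and locked until CoreGraphics is done.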
            provider = CGDataProviderCreateWithData( (void *)pixelBuffer, sourceBaseAddr, sourceRowBytes * height, ReleaseCVPixelBuffer);
            image = CGImageCreate(width, height, 8, 32, sourceRowBytes, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);
    
            if ( err && image ) {
                CGImageRelease( image );
                image = NULL;
            }
            if ( provider ) CGDataProviderRelease( provider );
            if ( colorspace ) CGColorSpaceRelease( colorspace );
            *imageOut = image;
            return err;
        }
    
        static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size)
        {
            CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)pixel;
            CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
            CVPixelBufferRelease( pixelBuffer );
        }
    