How to convert a CVImageBufferRef to UIImage

别跟我提以往 2020-11-28 21:52

I am trying to capture video from a camera. I have gotten the captureOutput:didOutputSampleBuffer: callback to trigger, and it gives me a sample buffer that I then convert to a CVImageBufferRef. I then want to convert that CVImageBufferRef to a UIImage.

4 answers
  • 2020-11-28 22:13

    You can directly call:

    self.yourImageView.image = [[UIImage alloc] initWithCIImage:[CIImage imageWithCVPixelBuffer:imageBuffer]];
    
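    The imageBuffer here is typically pulled out of the CMSampleBuffer delivered to the captureOutput:didOutputSampleBuffer:fromConnection: callback mentioned in the question. A minimal sketch of that, assuming yourImageView is an existing UIImageView and pushing UIKit work back to the main queue (this is my illustration, not part of the original answer):

    - (void)captureOutput:(AVCaptureOutput *)output
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        // The image buffer backing this sample buffer (a CVPixelBufferRef in practice).
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        if (imageBuffer == NULL) {
            return;
        }

        UIImage *image = [[UIImage alloc] initWithCIImage:
                             [CIImage imageWithCVPixelBuffer:imageBuffer]];

        // The delegate callback runs on a background queue, so hop to the main queue for UIKit.
        dispatch_async(dispatch_get_main_queue(), ^{
            self.yourImageView.image = image;
        });
    }
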
  • 2020-11-28 22:15

    If you simply need to convert a CVImageBufferRef to UIImage, it seems to be much more difficult than it should be. Essentially you need to convert to CIImage, then CGImage, THEN UIImage. I wish I could tell you why. Who knows.

    -(void) screenshotOfVideoStream:(CVImageBufferRef)imageBuffer
    {
        // Wrap the pixel buffer in a CIImage (no pixels are copied yet).
        CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];

        // Render the CIImage into a CGImage; this is where the actual drawing happens.
        CIContext *temporaryContext = [CIContext contextWithOptions:nil];
        CGImageRef videoImage = [temporaryContext
                                 createCGImage:ciImage
                                 fromRect:CGRectMake(0, 0,
                                                     CVPixelBufferGetWidth(imageBuffer),
                                                     CVPixelBufferGetHeight(imageBuffer))];

        // Finally wrap the CGImage in a UIImage and hand it off.
        UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
        [self doSomethingWithOurUIImage:image];
        CGImageRelease(videoImage);
    }
    

    This particular method worked for me when I was converting H.264 video using the VTDecompressionSession callback to get the CVImageBufferRef (but it should work for any CVImageBufferRef). I was using iOS 8.1 and Xcode 6.2.
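
    For context, a VTDecompressionSession delivers each decoded frame through a C callback of type VTDecompressionOutputCallback. The sketch below is my own illustration of where the method above could be called from, not code from this answer; MyDecoder is a hypothetical class that owns the session and implements screenshotOfVideoStream:.

    static void decompressionOutputCallback(void *decompressionOutputRefCon,
                                            void *sourceFrameRefCon,
                                            OSStatus status,
                                            VTDecodeInfoFlags infoFlags,
                                            CVImageBufferRef imageBuffer,
                                            CMTime presentationTimeStamp,
                                            CMTime presentationDuration)
    {
        if (status != noErr || imageBuffer == NULL) {
            return;
        }
        // The refCon is whatever pointer was supplied when the session was created;
        // here it is assumed to be the (hypothetical) decoder object itself.
        MyDecoder *decoder = (__bridge MyDecoder *)decompressionOutputRefCon;
        [decoder screenshotOfVideoStream:imageBuffer];
    }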

  • 2020-11-28 22:18

    Benjamin Loulier wrote a really good post on getting image output from a CVImageBufferRef, comparing several approaches with an eye toward speed.

    You can also find a working example on GitHub ;)

    The original post has since gone offline, but an archived copy is available: http://web.archive.org/web/20140426162537/http://www.benjaminloulier.com/posts/ios4-and-direct-access-to-the-camera

  • 2020-11-28 22:25

    The way that you are passing on the baseAddress presumes that the image data is in the form

    ACCC

    (where A is alpha and each C is a color component: R, G, or B).

    If you've set up your AVCaptureSession to capture the video frames in their native format, more than likely you're getting the video data back in planar YUV420 format. To do what you're attempting here, probably the easiest thing would be to specify that you want the video frames captured in kCVPixelFormatType_32RGBA. Apple recommends kCVPixelFormatType_32BGRA if you capture in a non-planar format at all; the reasoning is not stated, but I can reasonably assume it is due to performance considerations. (A minimal configuration sketch follows after the caveat below.)

    Caveat: I've not done this, and am assuming that accessing the CVPixelBufferRef contents like this is a reasonable way to build the image. I can't vouch for this actually working, but I /can/ tell you that the way you are doing things right now reliably will not work due to the pixel format that you are (probably) capturing the video frames as.
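
    For reference, here is a minimal sketch of that pixel-format configuration (my own illustration, not code from this answer; captureSession and captureQueue are assumed to exist already):

    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];

    // Ask for packed 32-bit BGRA frames instead of the default planar YUV420,
    // so that baseAddress points at one pixel per 4 bytes rather than separate planes.
    videoOutput.videoSettings = @{
        (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
    };

    [videoOutput setSampleBufferDelegate:self queue:captureQueue];
    if ([captureSession canAddOutput:videoOutput]) {
        [captureSession addOutput:videoOutput];
    }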
