How to convert from YUV to CIImage for iOS

Asked by 小蘑菇 on 2020-12-30 17:47 · backend · 2 answers · 1905 views

I am trying to convert a YUV image to a CIImage and ultimately a UIImage. I am fairly new to these APIs and trying to figure out an easy way to do it. From what I have learnt, since iOS 6 YUV data can be used to create a CIImage directly.

2 Answers
  • 2020-12-30 18:03

    I also faced this problem. I was trying to display YUV (NV12) data on the screen, and this solution works in my project...

    //YUV(NV12) --> CIImage --> UIImage conversion
    NSDictionary *pixelAttributes = @{(NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBuffer = NULL;
    
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          640,
                                          480,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                          (__bridge CFDictionaryRef)pixelAttributes,
                                          &pixelBuffer);
    
    // Check the result before using the buffer, not after.
    if (result != kCVReturnSuccess) {
        NSLog(@"Unable to create CVPixelBuffer %d", result);
        return;
    }
    
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    
    // Here y_ch0 is the Y plane of the YUV(NV12) data.
    // (This assumes the destination plane is not padded; if
    // CVPixelBufferGetBytesPerRowOfPlane() is larger than the width,
    // copy row by row instead.)
    unsigned char *yDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    memcpy(yDestPlane, y_ch0, 640 * 480);
    
    // Here y_ch1 is the interleaved UV plane of the YUV(NV12) data.
    unsigned char *uvDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    memcpy(uvDestPlane, y_ch1, 640 * 480 / 2);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    
    // CIImage conversion
    CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext createCGImage:coreImage
                                                   fromRect:CGRectMake(0, 0, 640, 480)];
    
    // UIImage conversion
    UIImage *uiImage = [[UIImage alloc] initWithCGImage:videoImage
                                                  scale:1.0
                                            orientation:UIImageOrientationRight];
    
    CVPixelBufferRelease(pixelBuffer);
    CGImageRelease(videoImage);
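
    To put the converted frame on screen, assign it to an image view on the main thread (a minimal usage sketch; self.imageView is an assumed UIImageView outlet, not part of the original answer):

    // Hypothetical display step; UIKit must be touched on the main queue.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = uiImage;
    });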
    

    Here is how the YUV(NV12) data is structured and how to get the Y plane (y_ch0) and UV plane (y_ch1) that are used to create the CVPixelBufferRef. For a frame of Width × Height pixels:

    • Total frame size = Width * Height * 3/2
    • Y-plane size = Frame size * 2/3 (Width * Height bytes)
    • UV-plane size = Frame size * 1/3 (Width * Height / 2 bytes)
    • Data stored in the Y plane: {Y1, Y2, Y3, Y4, Y5, ...}
    • Data stored in the interleaved UV plane: {U1, V1, U2, V2, U3, V3, ...} (see the pointer sketch below)
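
    For example, when a 640 × 480 NV12 frame sits contiguously in memory, the two plane pointers follow from simple pointer arithmetic (a minimal sketch; nv12Data is a hypothetical pointer to the start of the frame):

    int width = 640, height = 480;
    size_t frameSize = width * height * 3 / 2;        // 460800 bytes in total
    unsigned char *y_ch0 = nv12Data;                  // Y plane: 640 * 480 = 307200 bytes
    unsigned char *y_ch1 = nv12Data + width * height; // UV plane: 640 * 480 / 2 = 153600 bytes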

    I hope this helps. :) Have fun with iOS development! :D

  • 2020-12-30 18:14

    If you have a video frame object that looks like this:

    int width;
    int height;
    unsigned long long time_stamp;
    unsigned char *yData;
    unsigned char *uData;
    unsigned char *vData;
    int yStride;
    int uStride;
    int vStride;
    

    You can use the following to fill up a pixelBuffer:

    NSDictionary *pixelAttributes = @{(NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          width,
                                          height,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,   //  NV12
                                          (__bridge CFDictionaryRef)pixelAttributes,
                                          &pixelBuffer);
    if (result != kCVReturnSuccess) {
        NSLog(@"Unable to create CVPixelBuffer %d", result);
        return;
    }
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    
    // Copy the Y plane row by row, honouring both the source stride and
    // the destination bytes-per-row (the destination rows may be padded).
    unsigned char *yDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    size_t yDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    for (int i = 0; i < height; i++) {
        memcpy(yDestPlane + i * yDestStride, yData + i * yStride, width);
    }
    
    // Interleave the separate U and V planes (I420) into the single UV
    // plane that the NV12 layout expects.
    unsigned char *uvDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    size_t uvDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    for (int i = 0; i < height / 2; i++) {
        unsigned char *uvRow = uvDestPlane + i * uvDestStride;
        for (int j = 0; j < width / 2; j++) {
            uvRow[2 * j]     = uData[j + i * uStride];
            uvRow[2 * j + 1] = vData[j + i * vStride];
        }
    }
    
    // Don't forget to unlock once the planes are filled.
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
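
    As a side note, if you convert every incoming frame this way, allocating from a CVPixelBufferPool is cheaper than calling CVPixelBufferCreate each time. A minimal sketch (the attributes mirror the buffer created above):

    NSDictionary *bufferAttributes = @{(NSString *)kCVPixelBufferWidthKey : @(width),
                                       (NSString *)kCVPixelBufferHeightKey : @(height),
                                       (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange),
                                       (NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferPoolRef bufferPool = NULL;
    CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL,
                            (__bridge CFDictionaryRef)bufferAttributes, &bufferPool);
    // Buffers drawn from the pool are recycled when released.
    CVPixelBufferRef pooledBuffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, bufferPool, &pooledBuffer);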
    

    Now you can convert it to CIImage:

    CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIContext *tempContext = [CIContext contextWithOptions:nil];
    CGImageRef coreImageRef = [tempContext createCGImage:coreImage
                                            fromRect:CGRectMake(0, 0, width, height)];
    

    And a UIImage if you need one (the image orientation can vary depending on your input):

    UIImage *myUIImage = [[UIImage alloc] initWithCGImage:coreImageRef
                                        scale:1.0
                                        orientation:UIImageOrientationUp];
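
    If you convert frames repeatedly, note that creating a CIContext is expensive; create it once and reuse it across frames (an illustrative sketch using dispatch_once):

    // Create the CIContext once and reuse it for every frame.
    static CIContext *sharedContext = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedContext = [CIContext contextWithOptions:nil];
    });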
    

    Don't forget to release the variables:

    CVPixelBufferRelease(pixelBuffer);
    CGImageRelease(coreImageRef);
    