iOS Accelerate Framework vImage - Performance improvement?

Asked by 忘了有多久 on 2021-02-10 16:05

I've been working with OpenCV and Apple's Accelerate framework and find the performance of Accelerate to be slow and Apple's documentation limited. Let's take for example:

3 Answers
  •  难免孤独
    2021-02-10 16:35

    To get 30 frames per second using the equalizeHistogram function, you must deinterleave the image (convert it from ARGBxxxx to PlanarX) and equalize only the R(ed), G(reen), and B(lue) planes; if you also equalize A(lpha), the frame rate drops to 24 fps or lower.

    Here is the code that does exactly what you want, as fast as you want:

    - (CVPixelBufferRef)copyRenderedPixelBuffer:(CVPixelBufferRef)pixelBuffer {

        CVPixelBufferLockBaseAddress( pixelBuffer, 0 );

        unsigned char *base = (unsigned char *)CVPixelBufferGetBaseAddress( pixelBuffer );
        size_t width  = CVPixelBufferGetWidth( pixelBuffer );
        size_t height = CVPixelBufferGetHeight( pixelBuffer );
        size_t stride = CVPixelBufferGetBytesPerRow( pixelBuffer );

        // Wrap the pixel buffer's memory in a vImage_Buffer (no copy is made).
        vImage_Buffer _img = {
            .data     = base,
            .height   = height,
            .width    = width,
            .rowBytes = stride
        };

        vImage_Error err;
        vImage_Buffer _dstA, _dstR, _dstG, _dstB;

        // Allocate one Planar8 (8 bits per pixel) buffer per channel: A, R, G, B.
        err = vImageBuffer_Init( &_dstA, height, width, 8 * sizeof( uint8_t ), kvImageNoFlags );
        if (err != kvImageNoError)
            NSLog(@"vImageBuffer_Init (alpha) error: %ld", err);

        err = vImageBuffer_Init( &_dstR, height, width, 8 * sizeof( uint8_t ), kvImageNoFlags );
        if (err != kvImageNoError)
            NSLog(@"vImageBuffer_Init (red) error: %ld", err);

        err = vImageBuffer_Init( &_dstG, height, width, 8 * sizeof( uint8_t ), kvImageNoFlags );
        if (err != kvImageNoError)
            NSLog(@"vImageBuffer_Init (green) error: %ld", err);

        err = vImageBuffer_Init( &_dstB, height, width, 8 * sizeof( uint8_t ), kvImageNoFlags );
        if (err != kvImageNoError)
            NSLog(@"vImageBuffer_Init (blue) error: %ld", err);

        // Deinterleave the ARGB8888 image into the four planar buffers.
        err = vImageConvert_ARGB8888toPlanar8( &_img, &_dstA, &_dstR, &_dstG, &_dstB, kvImageNoFlags );
        if (err != kvImageNoError)
            NSLog(@"vImageConvert_ARGB8888toPlanar8 error: %ld", err);

        // Equalize only the color planes; the alpha plane is left untouched.
        err = vImageEqualization_Planar8( &_dstR, &_dstR, kvImageNoFlags );
        if (err != kvImageNoError)
            NSLog(@"vImageEqualization_Planar8 (red) error: %ld", err);

        err = vImageEqualization_Planar8( &_dstG, &_dstG, kvImageNoFlags );
        if (err != kvImageNoError)
            NSLog(@"vImageEqualization_Planar8 (green) error: %ld", err);

        err = vImageEqualization_Planar8( &_dstB, &_dstB, kvImageNoFlags );
        if (err != kvImageNoError)
            NSLog(@"vImageEqualization_Planar8 (blue) error: %ld", err);

        // Reinterleave the planes back into the original ARGB8888 buffer.
        err = vImageConvert_Planar8toARGB8888( &_dstA, &_dstR, &_dstG, &_dstB, &_img, kvImageNoFlags );
        if (err != kvImageNoError)
            NSLog(@"vImageConvert_Planar8toARGB8888 error: %ld", err);

        // Contrast-stretch the interleaved image in place.
        err = vImageContrastStretch_ARGB8888( &_img, &_img, kvImageNoFlags );
        if (err != kvImageNoError)
            NSLog(@"vImageContrastStretch_ARGB8888 error: %ld", err);

        // Release the temporary planar buffers allocated by vImageBuffer_Init.
        free(_dstA.data);
        free(_dstR.data);
        free(_dstG.data);
        free(_dstB.data);

        CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );

        // Return the same pixel buffer, retained, per the "copy" naming convention.
        return (CVPixelBufferRef)CFRetain( pixelBuffer );
    }

    Notice that I allocate the alpha plane even though I never operate on it; converting back and forth between ARGB8888 and Planar8 requires a buffer for all four channels. The performance and quality gains are the same either way.
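
    For context, here is a minimal sketch of how a method like this might be driven from an AVCaptureVideoDataOutput delegate callback. The delegate wiring, the assumption that the output is configured for a 32-bit ARGB-ordered pixel format, and the placeholder display step are mine, not part of the answer above:

    // Requires <AVFoundation/AVFoundation.h>; the class is assumed to adopt
    // AVCaptureVideoDataOutputSampleBufferDelegate.
    - (void)captureOutput:(AVCaptureOutput *)output
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {

        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
        if (pixelBuffer == NULL)
            return;

        // Equalize and stretch in place; the returned buffer is retained by
        // copyRenderedPixelBuffer:, so it must be released here.
        CVPixelBufferRef processed = [self copyRenderedPixelBuffer:pixelBuffer];

        // ... hand `processed` to a preview layer or encoder here ...

        CVPixelBufferRelease( processed );
    }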

    Also note that I apply contrast stretching after converting the Planar8 buffers back into a single ARGB8888 buffer. That is faster than applying the function channel by channel, as I did for histogram equalization, and it produces the same result, because contrast stretching does not distort the alpha channel the way histogram equalization does.
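
    For comparison, the slower channel-by-channel path described above would look roughly like the sketch below, applied to the planar buffers before reinterleaving. vImageContrastStretch_Planar8 is the per-plane counterpart of the interleaved function used in the answer; this is an illustration of the alternative, not a recommendation:

    // Per-channel contrast stretch (slower than stretching the interleaved
    // ARGB8888 buffer once, per the explanation above).
    err = vImageContrastStretch_Planar8( &_dstR, &_dstR, kvImageNoFlags );
    if (err != kvImageNoError)
        NSLog(@"vImageContrastStretch_Planar8 (red) error: %ld", err);

    err = vImageContrastStretch_Planar8( &_dstG, &_dstG, kvImageNoFlags );
    if (err != kvImageNoError)
        NSLog(@"vImageContrastStretch_Planar8 (green) error: %ld", err);

    err = vImageContrastStretch_Planar8( &_dstB, &_dstB, kvImageNoFlags );
    if (err != kvImageNoError)
        NSLog(@"vImageContrastStretch_Planar8 (blue) error: %ld", err);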
