How to directly rotate CVImageBuffer image in IOS 4 without converting to UIImage?

Submitted by 旧巷老猫 on 2019-12-17 10:29:26

Question


I am using OpenCV 2.2 on the iPhone to detect faces. I'm using iOS 4's AVCaptureSession to get access to the camera stream, as seen in the code that follows.

My challenge is that the video frames come in as CVBufferRef (pointers to CVImageBuffer) objects, oriented in landscape, 480 px wide by 300 px high. That's fine if you're holding the phone sideways, but when the phone is held upright I want to rotate these frames 90 degrees clockwise so that OpenCV can find the faces correctly.

I could convert the CVBufferRef to a CGImage, then to a UIImage, and then rotate it, as this person is doing: Rotate CGImage taken from video frame

However, that wastes a lot of CPU. I'm looking for a faster way to rotate the incoming images, ideally using the GPU for the processing if possible.

Any ideas?

Ian

Code Sample:

 -(void) startCameraCapture {
  // Start up the face detector

  faceDetector = [[FaceDetector alloc] initWithCascade:@"haarcascade_frontalface_alt2" withFileExtension:@"xml"];

  // Create the AVCapture Session
  session = [[AVCaptureSession alloc] init];

  // create a preview layer to show the output from the camera
  AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
  previewLayer.frame = previewView.frame;
  previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

  [previewView.layer addSublayer:previewLayer];

  // Get the default camera device
  AVCaptureDevice* camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

  // Create a AVCaptureInput with the camera device
  NSError *error=nil;
  AVCaptureInput* cameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:camera error:&error];
  if (cameraInput == nil) {
   NSLog(@"Error creating camera capture: %@", error);
  }

  // Set the output
  AVCaptureVideoDataOutput* videoOutput = [[AVCaptureVideoDataOutput alloc] init];
  videoOutput.alwaysDiscardsLateVideoFrames = YES;

  // create a queue besides the main thread queue to run the capture on
  dispatch_queue_t captureQueue = dispatch_queue_create("captureQueue", NULL);

  // setup our delegate
  [videoOutput setSampleBufferDelegate:self queue:captureQueue];

  // Release our reference to the queue. The video output retains the queue
  // passed to -setSampleBufferDelegate:queue:, so it stays alive for capture.
  dispatch_release(captureQueue);

  // configure the pixel format
  videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
          [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], 
          (id)kCVPixelBufferPixelFormatTypeKey,
          nil];

  // and the size of the frames we want
  // try AVCaptureSessionPresetLow if this is too slow...
  [session setSessionPreset:AVCaptureSessionPresetMedium];

  // If you wish to cap the frame rate to a known value, such as 10 fps, set 
  // minFrameDuration.
  videoOutput.minFrameDuration = CMTimeMake(1, 10);

  // Add the input and output
  [session addInput:cameraInput];
  [session addOutput:videoOutput];

  // Start the session
  [session startRunning];  
 }

 - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
  // only run if we're not already processing an image
  if (!faceDetector.imageNeedsProcessing) {

   // Get CVImage from sample buffer
   CVImageBufferRef cvImage = CMSampleBufferGetImageBuffer(sampleBuffer);

   // Send the CVImage to the FaceDetector for later processing
   [faceDetector setImageFromCVPixelBufferRef:cvImage];

   // Trigger the image processing on the main thread
   [self performSelectorOnMainThread:@selector(processImage) withObject:nil waitUntilDone:NO];
  }
 }

Answer 1:


vImage is a pretty fast way to do it. It requires iOS 5, though. The call says ARGB, but it works for the BGRA you get from the buffer.

This also has the advantage that you can cut out part of the buffer and rotate just that. See my answer here.

- (unsigned char*) rotateBuffer: (CMSampleBufferRef) sampleBuffer
{
 CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
 CVPixelBufferLockBaseAddress(imageBuffer,0);

 size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
 size_t width = CVPixelBufferGetWidth(imageBuffer);
 size_t height = CVPixelBufferGetHeight(imageBuffer);
 size_t currSize = bytesPerRow*height*sizeof(unsigned char); 
 size_t bytesPerRowOut = 4*height*sizeof(unsigned char); 

 void *srcBuff = CVPixelBufferGetBaseAddress(imageBuffer); 
 unsigned char *outBuff = (unsigned char*)malloc(currSize);  

 vImage_Buffer ibuff = { srcBuff, height, width, bytesPerRow};
 vImage_Buffer ubuff = { outBuff, width, height, bytesPerRowOut};

 uint8_t rotConst = 1;   // 0, 1, 2, 3 is equal to 0, 90, 180, 270 degrees rotation

 Pixel_8888 backColor = {0, 0, 0, 0};
 vImage_Error err = vImageRotate90_ARGB8888(&ibuff, &ubuff, rotConst, backColor, 0);
 if (err != kvImageNoError) NSLog(@"%ld", err);

 CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

 // The caller is responsible for free()ing the returned buffer.
 return outBuff;
}



Answer 2:


If you rotate in 90-degree steps, you can do it entirely in memory. Here is example code that simply copies the data to a new pixel buffer; a brute-force rotation on top of it should be straightforward.

- (CVPixelBufferRef) rotateBuffer: (CMSampleBufferRef) sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer,0);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];

    CVPixelBufferRef pxbuffer = NULL;
    //CVReturn status = CVPixelBufferPoolCreatePixelBuffer (NULL, _pixelWriter.pixelBufferPool, &pxbuffer);
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width,
                                          height, kCVPixelFormatType_32BGRA, (CFDictionaryRef) options, 
                                          &pxbuffer);

    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *dest_buff = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(dest_buff != NULL);

    int *src = (int*) src_buff ;
    int *dest= (int*) dest_buff ;
    size_t count = (bytesPerRow * height) / 4 ;
    while (count--) {
        *dest++ = *src++;
    }

    //Test straight copy.
    //memcpy(pxdata, baseAddress, width * height * 4) ;
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return pxbuffer;
}

You can then use an AVAssetWriterInputPixelBufferAdaptor if you are writing this back out to an AVAssetWriterInput.

The above is not optimized. You may want to look for a more efficient copy algorithm; a good place to start is an in-place matrix transpose. You would also want to use a pixel buffer pool rather than creating a new buffer each time.
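For reference, the in-place transpose mentioned above can be sketched in plain C for a square matrix of 32-bit pixels. (This is illustrative only: a camera frame is rectangular, so rotating it still needs an out-of-place copy.)

```c
#include <stdint.h>
#include <stddef.h>

/* In-place transpose of a square n x n matrix of 32-bit pixels:
   swap each element above the diagonal with its mirror below it. */
static void transpose_square(uint32_t *px, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        for (size_t j = i + 1; j < n; j++) {
            uint32_t tmp = px[i * n + j];
            px[i * n + j] = px[j * n + i];
            px[j * n + i] = tmp;
        }
    }
}
```

A full 90° rotation is a transpose followed by reversing each row (CW) or each column (CCW).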

Edit: You could use the GPU to do this, though it sounds like a lot of data to push around. CVPixelBuffer has the key kCVPixelBufferOpenGLCompatibilityKey, so I assume you could create an OpenGL-compatible texture from the CVImageBufferRef (which is just a pixel buffer ref) and push it through a shader. Again, overkill IMO. You could also check whether BLAS or LAPACK has an 'out of place' transpose routine; if so, you can be assured it is highly optimized.

90° CW, where new_width = height and new_height = width of the source. This will give you a portrait-oriented image.

for (int i = 0; i < new_height; i++) {
    for (int j = new_width - 1; j >= 0; j--) {
        *dest++ = *(src + (j * width) + i);
    }
}
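As a sanity check on the index math, here is the same brute-force 90° CW rotation written as a self-contained C function (function and parameter names are mine, not from the answer). It assumes tightly packed rows; a real CVPixelBuffer may pad rows, so there you must step by bytesPerRow instead of width * 4.

```c
#include <stdint.h>
#include <stddef.h>

/* Rotate a width x height buffer of 32-bit pixels 90 degrees clockwise
   into a separate height x width buffer. Destination pixel (row r, col c)
   comes from source pixel (row height-1-c, col r). */
static void rotate90cw(const uint32_t *src, uint32_t *dst,
                       size_t width, size_t height)
{
    for (size_t r = 0; r < width; r++) {        /* new height == old width  */
        for (size_t c = 0; c < height; c++) {   /* new width  == old height */
            dst[r * height + c] = src[(height - 1 - c) * width + r];
        }
    }
}
```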



Answer 3:


It may be easier to just set the video orientation the way you want it:

connection.videoOrientation = AVCaptureVideoOrientationPortrait

This way you don't need to do the rotation yourself at all.




Answer 4:


I know this is quite an old question, but I've been solving a similar problem recently, and maybe someone will find my solution useful.

I needed to extract the raw image data from the YCbCr-format image buffer delivered by the iPhone camera (obtained from [AVCaptureVideoDataOutput.availableVideoCVPixelFormatTypes firstObject]), dropping headers, metadata, etc., in order to pass it on for further processing.

Also, I needed to extract only a small area in the center of the captured video frame, so some cropping was needed.

My conditions allowed capturing video only in landscape orientation, but when the device is held in the landscape-left orientation the image is delivered upside down, so I needed to flip it along both axes. My idea was to copy the data from the source image buffer in reverse row order and reverse the bytes within each copied row, which flips the image along both axes. This works, and since I need to copy the data out of the source buffer anyway, reading from the end instead of the start adds little overhead (of course, a bigger image means longer processing, but I deal with really small sizes).

I'd like to know what others think about this solution, and of course I'd welcome hints on how to improve the code:

/// Lock pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);

/// Address where image buffer starts
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

/// Read image parameters
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);

/// See whether image is flipped upside down
BOOL isFlipped = (_previewLayer.connection.videoOrientation == AVCaptureVideoOrientationLandscapeLeft);

/// Calculate cropping frame. Crop to scanAreaSize (defined as CGSize constant elsewhere) from the center of an image
CGRect cropFrame = CGRectZero;
cropFrame.size = scanAreaSize;
cropFrame.origin.x = (width / 2.0f) - (scanAreaSize.width / 2.0f);
cropFrame.origin.y = (height / 2.0f) - (scanAreaSize.height / 2.0f);

/// Update proportions to cropped size
width = (size_t)cropFrame.size.width;
height = (size_t)cropFrame.size.height;

/// Allocate memory for output image data. W*H for Y component, W*H/2 for CbCr component
size_t bytes = width * height + (width * height / 2);

uint8_t *outputDataBaseAddress = (uint8_t *)malloc(bytes);

if(outputDataBaseAddress == NULL) {

    /// Memory allocation failed, unlock buffer and give up
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return NULL;
}

/// Get parameters of YCbCr pixel format
CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;

NSUInteger bytesPerRowY = EndianU32_BtoN(bufferInfo->componentInfoY.rowBytes);
NSUInteger offsetY = EndianU32_BtoN(bufferInfo->componentInfoY.offset);

NSUInteger bytesPerRowCbCr = EndianU32_BtoN(bufferInfo->componentInfoCbCr.rowBytes);
NSUInteger offsetCbCr = EndianU32_BtoN(bufferInfo->componentInfoCbCr.offset);

/// Copy image data only, skipping headers and metadata. Create single buffer which will contain Y component data
/// followed by CbCr component data.

/// Process Y component
/// Pointer to the source buffer
uint8_t *src;

/// Pointer to the destination buffer
uint8_t *destAddress;

/// Calculate crop rect offset. Crop offset is number of rows (y * bytesPerRow) + x offset.
/// If image is flipped, then read buffer from the end to flip image vertically. End address is height-1!
int flipOffset = (isFlipped) ? (int)((height - 1) * bytesPerRowY) : 0;

int cropOffset = (int)((cropFrame.origin.y * bytesPerRowY) + flipOffset + cropFrame.origin.x);

/// Set source pointer to Y component buffer start address plus crop rect offset
src = baseAddress + offsetY + cropOffset;

for(int y = 0; y < height; y++) {

    /// Copy one row of pixel data from source into the output buffer.
    destAddress = (outputDataBaseAddress + y * width);

    memcpy(destAddress, src, width);

    if(isFlipped) {

        /// Reverse bytes in row to flip image horizontally
        [self reverseBytes:destAddress bytesSize:(int)width];

        /// Move one row up
        src -= bytesPerRowY;
    }
    else {

        /// Move to the next row
        src += bytesPerRowY;
    }
}

/// Calculate crop offset for CbCr component. The CbCr plane has half the
/// rows of the Y plane, so the vertical crop origin is halved as well.
flipOffset = (isFlipped) ? (int)(((height - 1) / 2) * bytesPerRowCbCr) : 0;
cropOffset = (int)(((cropFrame.origin.y / 2.0f) * bytesPerRowCbCr) + flipOffset + cropFrame.origin.x);

/// Set source pointer to the CbCr component offset + crop offset
src = (baseAddress + offsetCbCr + cropOffset);

for(int y = 0; y < (height / 2); y++) {

    /// Copy one row of pixel data from source into the output buffer.
    destAddress = (outputDataBaseAddress + (width * height) + y * width);

    memcpy(destAddress, src, width);

    if(isFlipped) {

        /// Reverse bytes in row to flip the image horizontally
        /// (note: this plane interleaves Cb and Cr, so a plain byte reversal
        /// also swaps the two channels within each pair)
        [self reverseBytes:destAddress bytesSize:(int)width];

        /// Move one row up
        src -= bytesPerRowCbCr;
    }
    else {

        src += bytesPerRowCbCr;
    }
}

/// Unlock pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

/// Continue with image data in outputDataBaseAddress;


Source: https://stackoverflow.com/questions/4528879/how-to-directly-rotate-cvimagebuffer-image-in-ios-4-without-converting-to-uiimag
