NSImage to cv::Mat and vice versa


Question


While working with OpenCV I need to convert an NSImage to an OpenCV multi-channel 2D matrix (cv::Mat) and vice versa.

What's the best way to do it?

Greets,
Dom


Answer 1:


Here's what I ended up with; it works pretty well.

NSImage+OpenCV.h:

//
//  NSImage+OpenCV.h
//

#import <AppKit/AppKit.h>

@interface NSImage (NSImage_OpenCV) {

}

+(NSImage*)imageWithCVMat:(const cv::Mat&)cvMat;
-(id)initWithCVMat:(const cv::Mat&)cvMat;

@property(nonatomic, readonly) cv::Mat CVMat;
@property(nonatomic, readonly) cv::Mat CVGrayscaleMat;

@end

NSImage+OpenCV.mm:

//
//  NSImage+OpenCV.mm
//

#import "NSImage+OpenCV.h"

static void ProviderReleaseDataNOP(void *info, const void *data, size_t size)
{
    return;
}


@implementation NSImage (NSImage_OpenCV)

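// Renders the receiver into an offscreen bitmap context and returns a new
// CGImageRef. The result is retained (+1); callers (e.g. -CVMat below) are
// responsible for releasing it with CGImageRelease().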
-(CGImageRef)CGImage
{
    CGContextRef bitmapCtx = CGBitmapContextCreate(NULL/*data - pass NULL to let CG allocate the memory*/, 
                                                   [self size].width,  
                                                   [self size].height, 
                                                   8 /*bitsPerComponent*/, 
                                                   0 /*bytesPerRow - CG will calculate it for you if it's allocating the data.  This might get padded out a bit for better alignment*/, 
                                                   [[NSColorSpace genericRGBColorSpace] CGColorSpace], 
                                                   kCGBitmapByteOrder32Host|kCGImageAlphaPremultipliedFirst);

    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:bitmapCtx flipped:NO]];
    [self drawInRect:NSMakeRect(0,0, [self size].width, [self size].height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    CGImageRef cgImage = CGBitmapContextCreateImage(bitmapCtx);
    CGContextRelease(bitmapCtx);

    return cgImage;
}


-(cv::Mat)CVMat
{
    CGImageRef imageRef = [self CGImage];
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to backing data
                                                    cols,                      // Width of bitmap
                                                    rows,                     // Height of bitmap
                                                    8,                          // Bits per component
                                                    cvMat.step[0],              // Bytes per row
                                                    colorSpace,                 // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), imageRef);
    CGContextRelease(contextRef);
    CGImageRelease(imageRef);
    return cvMat;
}

-(cv::Mat)CVGrayscaleMat
{
    CGImageRef imageRef = [self CGImage];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;
    cv::Mat cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to backing data
                                                    cols,                      // Width of bitmap
                                                    rows,                     // Height of bitmap
                                                    8,                          // Bits per component
                                                    cvMat.step[0],              // Bytes per row
                                                    colorSpace,                 // Colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), imageRef);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(imageRef);
    return cvMat;
}

+ (NSImage *)imageWithCVMat:(const cv::Mat&)cvMat
{
    return [[[NSImage alloc] initWithCVMat:cvMat] autorelease];
}

- (id)initWithCVMat:(const cv::Mat&)cvMat
{
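    // dataWithBytes:length: copies the pixel buffer, so the resulting NSImage
    // does not depend on the cv::Mat outliving this call.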
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;

    if (cvMat.elemSize() == 1)
    {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else
    {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                     // Width
                                        cvMat.rows,                                     // Height
                                        8,                                              // Bits per component
                                        8 * cvMat.elemSize(),                           // Bits per pixel
                                        cvMat.step[0],                                  // Bytes per row
                                        colorSpace,                                     // Colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,  // Bitmap info flags
                                        provider,                                       // CGDataProviderRef
                                        NULL,                                           // Decode
                                        false,                                          // Should interpolate
                                        kCGRenderingIntentDefault);                     // Intent   


    NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCGImage:imageRef];
    NSImage *image = [[NSImage alloc] init];
    [image addRepresentation:bitmapRep];
    [bitmapRep release]; // balance the alloc above (the rest of this code is MRC-style)

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return image;
}

@end

Example usage:

Just import it like this:

#import "NSImage+OpenCV.h"

And use it like this:

cv::Mat cvMat_test;
NSImage *image = [NSImage imageNamed:@"test.jpg"];
cvMat_test = [image CVMat];
[myImageView setImage:[NSImage imageWithCVMat:cvMat_test]];
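One thing to keep in mind: -CVMat above draws into an RGB colorspace with kCGImageAlphaNoneSkipLast, so the 4-channel matrix comes out in RGBA order, whereas most OpenCV routines assume BGR. A minimal conversion sketch, assuming a reasonably recent OpenCV (older versions spell these constants CV_RGBA2BGR / CV_BGR2RGBA):

cv::Mat rgba = [image CVMat];                   // 4-channel, RGBA order as produced by -CVMat
cv::Mat bgr;
cv::cvtColor(rgba, bgr, cv::COLOR_RGBA2BGR);    // drop alpha and swap R/B for OpenCV's BGR convention
// ... run OpenCV processing on `bgr` ...
cv::cvtColor(bgr, rgba, cv::COLOR_BGR2RGBA);    // back to RGBA before handing it to imageWithCVMat:
[myImageView setImage:[NSImage imageWithCVMat:rgba]];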



Answer 2:


In -(id)initWithCVMat:(const cv::Mat&)cvMat, shouldn't you be adding the representation to self, rather than a new NSImage?

-(id)initWithCVMat:(const cv::Mat *)iMat
{
    if(self = [super init]) {
        NSData *tData = [NSData dataWithBytes:iMat->data length:iMat->elemSize() * iMat->total()];

        CGColorSpaceRef tColorSpace;

        if(iMat->elemSize() == 1) {
            tColorSpace = CGColorSpaceCreateDeviceGray();
        } else {
            tColorSpace = CGColorSpaceCreateDeviceRGB();
        }

        CGDataProviderRef tProvider = CGDataProviderCreateWithCFData((CFDataRef) tData);

        CGImageRef tImage = CGImageCreate(
            iMat->cols,
            iMat->rows,
            8,
            8 * iMat->elemSize(),
            iMat->step[0],
            tColorSpace,
            kCGImageAlphaNone | kCGBitmapByteOrderDefault,
            tProvider,
            NULL,
            false,
            kCGRenderingIntentDefault);

        NSBitmapImageRep *tBitmap = [[NSBitmapImageRep alloc] initWithCGImage:tImage];
        [self addRepresentation:tBitmap];
        [tBitmap release];

        CGImageRelease(tImage);
        CGDataProviderRelease(tProvider);
        CGColorSpaceRelease(tColorSpace);
    }
    return self;
}
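Note that this variant takes a pointer rather than a reference, so the declaration in NSImage+OpenCV.h would need to change to match, and call sites pass the matrix's address. A minimal usage sketch, assuming an NSImageView outlet named myImageView as in the earlier example:

cv::Mat cvMat(480, 640, CV_8UC4, cv::Scalar(0, 0, 255, 255));          // placeholder 4-channel matrix
NSImage *converted = [[[NSImage alloc] initWithCVMat:&cvMat] autorelease];
[myImageView setImage:converted];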


Source: https://stackoverflow.com/questions/8563356/nsimage-to-cvmat-and-vice-versa
