Get pixels and colours from NSImage

Backend · unresolved · 6 answers · 1240 views
南方客 · asked 2021-02-06 03:10

I have created an NSImage object, and ideally I would like to determine how many pixels of each colour it contains. Is this possible?

6 Answers
  • 2021-02-06 03:40

    Get an NSBitmapImageRep from your NSImage. Then you can get access to the pixels.

    NSImage* img = ...;
    NSBitmapImageRep* raw_img = [NSBitmapImageRep imageRepWithData:[img TIFFRepresentation]];
    NSColor* color = [raw_img colorAtX:0 y:0];
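
    If the goal is to count how many pixels of each colour the image contains, a rough sketch building on the snippet above is to walk every pixel and tally the NSColor values in an NSCountedSet (slow for large images, since -colorAtX:y: allocates an NSColor per pixel):

    NSCountedSet* colourCounts = [NSCountedSet set];

    for (NSInteger y = 0; y < raw_img.pixelsHigh; y++) {
        for (NSInteger x = 0; x < raw_img.pixelsWide; x++) {
            // Tally this pixel's colour; NSCountedSet keeps one entry per distinct NSColor
            [colourCounts addObject:[raw_img colorAtX:x y:y]];
        }
    }

    for (NSColor* colour in colourCounts) {
        NSLog(@"%@ -> %lu pixels", colour, (unsigned long)[colourCounts countForObject:colour]);
    }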
    
  • 2021-02-06 03:44

    I suggest creating your own bitmap context, wrapping it in a graphics context and setting that as the current context, telling the image to draw itself, and then accessing the pixel data behind the bitmap context directly.

    This will be more code, but will save you both a trip through a TIFF representation and the creation of thousands or millions of NSColor objects. If you're working with images of any appreciable size, these expenses will add up quickly.
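
    A condensed sketch of those steps, assuming image is your existing NSImage (the 03:50 answer below works through the same approach in full):

    size_t width  = (size_t)image.size.width;
    size_t height = (size_t)image.size.height;

    CGColorSpaceRef space = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, space,
                                             kCGImageAlphaPremultipliedLast);

    // Wrap the bitmap context in a graphics context and draw the image into it
    NSGraphicsContext* gctx = [NSGraphicsContext graphicsContextWithCGContext:ctx flipped:NO];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:gctx];
    [image drawInRect:NSMakeRect(0, 0, width, height)];
    [NSGraphicsContext restoreGraphicsState];

    // data points at premultiplied RGBA bytes; count colours here, before releasing ctx
    uint8_t* data = (uint8_t*)CGBitmapContextGetData(ctx);
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(ctx);

    // ... walk height rows of bytesPerRow bytes each ...

    CGContextRelease(ctx);
    CGColorSpaceRelease(space);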

  • 2021-02-06 03:48

    This may be a more streamlined approach for some, and it avoids dropping down into manual memory management.

    https://github.com/koher/EasyImagy

    Code sample https://github.com/koher/EasyImagyCameraSample

    import AppKit
    import EasyImagy
    
    let image = Image<RGBA<UInt8>>(nsImage: NSImage(named: "test.png")!) // N.B. init takes an NSImage, not a file name
    
    print(image[x, y])
    image[x, y] = RGBA(red: 255, green: 0, blue: 0, alpha: 127)
    image[x, y] = RGBA(0xFF00007F) // red: 255, green: 0, blue: 0, alpha: 127
    
    // Iterates over all pixels
    for pixel in image {
        // ...
    }
    
    
    
    // Gets a pixel by subscripts
    let pixel = image[x, y]
    // Sets a pixel by subscripts
    image[x, y] = RGBA(0xFF0000FF)
    image[x, y].alpha = 127
    // Safe get for a pixel
    if let pixel = image.pixelAt(x: x, y: y) {
        print(pixel.red)
        print(pixel.green)
        print(pixel.blue)
        print(pixel.alpha)
    
        print(pixel.gray) // (red + green + blue) / 3
        print(pixel) // formatted like "#FF0000FF"
    } else {
        // `pixel` is safe: `nil` is returned when out of bounds
        print("Out of bounds")
    }
    
  • 2021-02-06 03:50

    This code renders the NSImage into a CGBitmapContext:

    - (void)updateImageData {
    
        if (!_image)
            return;
    
        // Dimensions - source image determines context size
    
        NSSize imageSize = _image.size;
        NSRect imageRect = NSMakeRect(0, 0, imageSize.width, imageSize.height);
    
        // Create a context to hold the image data
    
        CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    
        CGContextRef ctx = CGBitmapContextCreate(NULL,
                                                 imageSize.width,
                                                 imageSize.height,
                                                 8,
                                                 0,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    
        // Wrap graphics context
    
        NSGraphicsContext* gctx = [NSGraphicsContext graphicsContextWithCGContext:ctx flipped:NO];
    
        // Make our bitmap context current and render the NSImage into it
    
        [NSGraphicsContext setCurrentContext:gctx];
        [_image drawInRect:imageRect];
    
        // Calculate the histogram
    
        [self computeHistogramFromBitmap:ctx];
    
        // Clean up
    
        [NSGraphicsContext setCurrentContext:nil];
        CGContextRelease(ctx);
        CGColorSpaceRelease(colorSpace);
    }
    

    Given a bitmap context, we can access the raw image data directly, and compute the histograms for each colour channel:

    - (void)computeHistogramFromBitmap:(CGContextRef)bitmap {
    
        // NB: Assumes RGBA 8bpp
    
        size_t width = CGBitmapContextGetWidth(bitmap);
        size_t height = CGBitmapContextGetHeight(bitmap);
    
        uint8_t* data = (uint8_t*)CGBitmapContextGetData(bitmap);
        size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bitmap);
    
        for (unsigned y = 0; y < height; y++)
        {
            // Respect the context's row stride, which may be padded beyond width * 4
            uint32_t* pixel = (uint32_t*)(data + y * bytesPerRow);
    
            for (unsigned x = 0; x < width; x++)
            {
                uint32_t rgba = *pixel;
    
                // Extract colour components (RGBA byte order on a little-endian host)
                uint8_t red   = (rgba & 0x000000ff) >> 0;
                uint8_t green = (rgba & 0x0000ff00) >> 8;
                uint8_t blue  = (rgba & 0x00ff0000) >> 16;
    
                // Accumulate each colour
                _histogram[kRedChannel][red]++;
                _histogram[kGreenChannel][green]++;
                _histogram[kBlueChannel][blue]++;
    
                // Next pixel!
                pixel++;
            }
        }
    }
    
    @end
    

    I've published a complete project, a Cocoa sample app, which includes the above.

    • https://github.com/gavinb/CocoaImageHistogram.git
  • 2021-02-06 03:58

    Look for "histogram" in the Core Image documentation.
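
    For example, the CIAreaHistogram filter computes a per-channel histogram. A rough sketch, assuming nsImage is your existing NSImage (the bin count of 256 and the float read-back are illustrative; the scaling of the returned values depends on inputScale):

    #import <CoreImage/CoreImage.h>

    CIImage* input = [[CIImage alloc] initWithData:[nsImage TIFFRepresentation]];

    CIFilter* filter = [CIFilter filterWithName:@"CIAreaHistogram"];
    [filter setValue:input forKey:kCIInputImageKey];
    [filter setValue:[CIVector vectorWithCGRect:input.extent] forKey:@"inputExtent"];
    [filter setValue:@256 forKey:@"inputCount"];   // number of bins
    [filter setValue:@1.0 forKey:@"inputScale"];

    // The output is a 256x1 image whose RGBA values hold the per-bin histogram data;
    // render it into a float buffer to read the values back.
    float bins[256 * 4];
    CIContext* context = [CIContext contextWithOptions:nil];
    [context render:filter.outputImage
           toBitmap:bins
           rowBytes:sizeof(bins)
             bounds:CGRectMake(0, 0, 256, 1)
             format:kCIFormatRGBAf
         colorSpace:nil];

    // bins[4*i + 0..3] now hold the red/green/blue/alpha values for bin i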

  • 2021-02-06 04:01

    Using colorAtX: with NSBitmapImageRep does not always return the exact color.

    I managed to get the correct color with this simple code:

    [yourImage lockFocus]; // yourImage is just your NSImage variable
    NSColor *pixelColor = NSReadPixel(NSMakePoint(1, 1)); // Or another point
    [yourImage unlockFocus];
    