Getting the area occupied by a certain color onscreen - iOS


Question


I'm trying to do something similar to what is asked in this question, but I don't really understand the answer given there, and I'm not sure it's what I need.

What I need is simple, though I'm not so sure it's easy. I want to count the number of pixels on the screen that are a certain color. I understand that each 'pixel' we see is actually a combination of subpixels of different colors that together appear to be, say, green. So what I need is that composite color: the one the user actually sees.

For example, if I created a UIView, set its background color to [UIColor greenColor], and set its dimensions to half the area of the screen (we can assume the status bar is hidden for simplicity and that we are on an iPhone), I would expect this 'magic method' to return 320 * 240, or 76,800: half the area of a 320 x 480 screen.

I don't expect anyone to write out this 'magic method,' but I'd like to know:

a) If this is possible

b) If so, whether it can be done in near-realtime

c) And if so, where to start. I've heard this can be done with OpenGL, but I have no experience in that area.
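
One step the code below takes as given is getting a UIImage of the view in the first place. As a starting point, here is a minimal capture sketch, assuming UIGraphicsImageRenderer is available (iOS 10+); the snapshot(of:) helper name is illustrative, not part of the original question:

import UIKit

// Render a view's layer into a UIImage so its pixels can be inspected
// offscreen. UIGraphicsImageRenderer handles scale and color management.
func snapshot(of view: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { context in
        view.layer.render(in: context.cgContext)
    }
}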

Here is my solution, thanks to Radif Sharafullin:

int pixelsFromImage(UIImage *inImage) {
    CGSize s = inImage.size;   // note: size is in points, not device pixels
    const int width = s.width;
    const int height = s.height;

    // One byte per pixel: an alpha-only bitmap is enough to test whether
    // each pixel is "on".
    unsigned char *pixelData = malloc(width * height);

    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 width,
                                                 height,
                                                 8,      // bits per component
                                                 width,  // bytes per row
                                                 NULL,   // no color space needed for alpha-only
                                                 kCGImageAlphaOnly);
    CGContextClearRect(context, CGRectMake(0, 0, width, height));
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), inImage.CGImage);
    CGContextRelease(context);

    // Count every pixel with a nonzero alpha value.
    int pixels = 0;
    for (int idx = 0; idx < width * height; ++idx) {
        if (pixelData[idx]) {
            ++pixels;
        }
    }

    free(pixelData);

    return pixels;
}

Answer 1:


It is possible. I've done something similar to calculate the percentage of transparent pixels, but since I only needed a rough estimate, I was not looking at each pixel but at every tenth one (the step variable in the code below).

BOOL isImageErased(UIImage *inImage, int step, int forgivenessCount) {
    CGSize s = inImage.size;
    int width = s.width;
    int height = s.height;
    unsigned char *pixelData = malloc(width * height);
    int forgivenessCounter = 0;

    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 width,
                                                 height,
                                                 8,
                                                 width,
                                                 NULL,
                                                 kCGImageAlphaOnly);
    CGContextClearRect(context, CGRectMake(0, 0, width, height));
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), inImage.CGImage);
    CGContextRelease(context);

    // Sample every `step`-th pixel in both directions; bail out as soon as
    // `forgivenessCount` non-transparent pixels have been seen. (`step` is
    // an int here; a fractional step makes no sense for pixel indices.)
    for (int x = 0; x < width; x += step) {
        for (int y = 0; y < height; y += step) {
            if (pixelData[y * width + x]) {
                forgivenessCounter++;
                if (forgivenessCounter == forgivenessCount) {
                    free(pixelData);
                    return FALSE;
                }
            }
        }
    }

    free(pixelData);

    return TRUE;
}

I believe this code can be used for your purpose if you pass in a preprocessed, grayscale image, or if you modify the kCGImageAlphaOnly setting of the API.
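
For the "certain color" part of the question, a hedged sketch of that adaptation: draw into an RGBA bitmap instead of an alpha-only one, and compare each pixel's components against the target color. The function name and tolerance parameter are illustrative assumptions, not part of the original answer, and the comparison assumes opaque pixels (with premultiplied alpha, translucent pixels would need their components un-premultiplied first):

import UIKit

// Count pixels whose RGB components are within `tolerance` of `target`.
func pixelCount(matching target: (r: UInt8, g: UInt8, b: UInt8),
                in image: UIImage,
                tolerance: Int = 8) -> Int {
    guard let cg = image.cgImage else { return 0 }
    let width = cg.width, height = cg.height
    let bytesPerRow = width * 4
    var data = [UInt8](repeating: 0, count: bytesPerRow * height)

    // Draw the image into an RGBA (premultiplied-last) bitmap we own.
    data.withUnsafeMutableBytes { buffer in
        guard let ctx = CGContext(data: buffer.baseAddress,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: bytesPerRow,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return }
        ctx.draw(cg, in: CGRect(x: 0, y: 0, width: width, height: height))
    }

    var count = 0
    for i in stride(from: 0, to: data.count, by: 4) {
        // Byte layout per pixel: R, G, B, A.
        if abs(Int(data[i])     - Int(target.r)) <= tolerance,
           abs(Int(data[i + 1]) - Int(target.g)) <= tolerance,
           abs(Int(data[i + 2]) - Int(target.b)) <= tolerance {
            count += 1
        }
    }
    return count
}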

Hope that helps




Answer 2:


eric.mitchell's solution, rewritten in Swift 3:

import UIKit

func pixelsFromImage(inImage: UIImage) -> Int {

    let width = Int(inImage.size.width)
    let height = Int(inImage.size.height)

    let bitmapBytesPerRow = width          // one byte per pixel (alpha only)
    let bitmapByteCount = bitmapBytesPerRow * height

    let pixelData = UnsafeMutablePointer<UInt8>.allocate(capacity: bitmapByteCount)

    let colorSpace = CGColorSpaceCreateDeviceGray()

    let context = CGContext(data: pixelData,
                            width: width,
                            height: height,
                            bitsPerComponent: 8,
                            bytesPerRow: bitmapBytesPerRow,
                            space: colorSpace,
                            bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue)!

    let rect = CGRect(x: 0, y: 0, width: width, height: height)
    context.clear(rect)
    context.draw(inImage.cgImage!, in: rect)

    // Count every pixel with a nonzero alpha value. Note the half-open
    // ranges: `0...width` would read one byte past the end of the buffer.
    var pixels = 0
    for x in 0..<width {
        for y in 0..<height {
            if pixelData[y * width + x] > 0 {
                pixels += 1
            }
        }
    }

    // Pair allocate(capacity:) with deallocate, not free().
    pixelData.deallocate(capacity: bitmapByteCount)   // `deallocate()` in Swift 4+

    return pixels
}
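
To tie the pieces together, a hypothetical call site (overlayView is illustrative, and snapshot(of:) is the capture helper sketched in the question). Note that UIImage.size is in points, so on a Retina display both counts would scale by image.scale squared:

// What fraction of the snapshot is non-transparent?
let image = snapshot(of: overlayView)
let covered = pixelsFromImage(inImage: image)
let total = Int(image.size.width) * Int(image.size.height)
print("coverage: \(100.0 * Double(covered) / Double(total))%")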


Source: https://stackoverflow.com/questions/11636679/getting-the-area-occupied-by-a-certain-color-onscreen-ios
