Detect black pixel in image iOS


Question


As of now I am searching every pixel one by one, checking its color to see whether it's black; if it isn't, I move on to the next pixel. This is taking forever because I can only check about 100 pixels per second (speeding up my NSTimer freezes the app because it can't check fast enough). So is there any way I can have Xcode return only the pixels that are black and ignore everything else, so that I only have to check those pixels instead of every pixel? I am trying to detect the black pixel furthest to the left in my image.

Here is my current code.

- (void)viewDidLoad {
    [super viewDidLoad];
    timer = [NSTimer scheduledTimerWithTimeInterval: 0.01
                                             target: self
                                           selector:@selector(onTick:)
                                           userInfo: nil repeats:YES];
    y1 = 0;
    x1 = 0;
    initialImage = 0;
    height1 = 0;
    width1 = 0;
}

-(void)onTick:(NSTimer *)timer {
    if (initialImage != 1) {
        /*
        IMAGE INITIALLY GETS SET HERE... "image2.image = [blah blah blah];" took this out for non disclosure reasons
        */
        initialImage = 1;
    }
    //image2 is the image I'm checking the pixels of.
    width1 = (int)image2.size.width;
    height1 = (int)image2.size.height;
    CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image2.CGImage));
    const UInt32 *pixels = (const UInt32*)CFDataGetBytePtr(imageData);
    if ( (pixels[(x1+(y1*width1))]) == 0x000000) { //0x000000 is black right?
        NSLog(@"black!");
        NSLog(@"x = %i", x1);
        NSLog(@"y = %i", y1);
    }else {
        NSLog(@"val: %lu", (pixels[(x1+(y1*width1))]));
        NSLog(@"x = %i", x1);
        NSLog(@"y = %i", y1);
        x1 ++;
        if (x1 >= width1) {
            y1 ++;
            x1 = 0;
        }
    }
    if (y1 > height1) {
        /*
        MY UPDATE IMAGE CODE GOES HERE (IMAGE CHANGES EVERY TIME ALL PIXELS HAVE BEEN CHECKED
        */
        y1 = 0;
        x1 = 0;
    }
}

Also, what if a pixel is really close to black but not perfectly black... Can I add a margin of error somewhere so it will still detect pixels that are, say, 95% black? Thanks!


Answer 1:


Why are you using a timer at all? Why not just use a double for loop in your function that iterates over all possible x- and y-coordinates in the image? That would be far faster than checking at most 100 pixels per second. You would want the x (width) coordinates in the outer loop and the y (height) coordinates in the inner loop, so that you effectively scan one column of pixels at a time from left to right, since you are trying to find the leftmost black pixel.

Also, are you sure that each pixel in your image has a 4-byte (UInt32) representation? A standard bitmap would have 3 bytes per pixel. To check whether a pixel is close to black, just examine each byte of the pixel separately and make sure each one is below some threshold.

EDIT: OK, since you are using UIGetScreenImage, I'm going to assume it is 4 bytes per pixel.

const UInt8 *pixels = CFDataGetBytePtr(imageData);
UInt8 blackThreshold = 10; // or some value close to 0
int bytesPerPixel = 4;
for(int x = 0; x < width1; x++) {
  for(int y = 0; y < height1; y++) {
    int pixelStartIndex = (x + (y * width1)) * bytesPerPixel;
    UInt8 alphaVal = pixels[pixelStartIndex]; // can probably ignore this value
    UInt8 redVal = pixels[pixelStartIndex + 1];
    UInt8 greenVal = pixels[pixelStartIndex + 2];
    UInt8 blueVal = pixels[pixelStartIndex + 3];
    if(redVal < blackThreshold && blueVal < blackThreshold && greenVal < blackThreshold) {
      //This pixel is close to black...do something with it
    }
  }
}

If it turns out that bytesPerPixel is 3, then change that value accordingly, remove the alphaVal from the for loop, and subtract 1 from the indices of the red, green, and blue values.
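
For illustration, a minimal sketch of that 3-bytes-per-pixel variant might look like the following. The exact byte order and any per-row padding depend on how the CGImage was created, so treat the layout as an assumption and verify it with CGImageGetBitsPerPixel and CGImageGetBytesPerRow.

// Sketch assuming a tightly packed RGB layout (3 bytes per pixel, no alpha).
// Real CGImage rows can be padded; use CGImageGetBytesPerRow for the true stride.
int bytesPerPixel = 3;
for(int x = 0; x < width1; x++) {
  for(int y = 0; y < height1; y++) {
    int pixelStartIndex = (x + (y * width1)) * bytesPerPixel;
    UInt8 redVal = pixels[pixelStartIndex];
    UInt8 greenVal = pixels[pixelStartIndex + 1];
    UInt8 blueVal = pixels[pixelStartIndex + 2];
    if(redVal < blackThreshold && greenVal < blackThreshold && blueVal < blackThreshold) {
      //This pixel is close to black...do something with it
    }
  }
}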

Also, my current understanding is that UIGetScreenImage is a private function, so Apple may reject your app for using it.




Answer 2:


I'm not an expert on pixel-level image processing, but my first thought is: why are you using a timer to do this? That incurs lots of overhead and makes the code less clear to read. (I think it also renders it thread-unsafe.) The overhead is not just from the timer itself but because you are doing all the data setup each time through.

How about using a loop instead to iterate over the pixels?

Also, you are leaking imageData (since you create it with a "Copy" function and never release it). You are currently doing this once per timer fire, and imageData is probably quite large for anything but the tiniest images, so you are likely leaking a lot of memory.
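
For example, a minimal sketch of the fix is to balance the Copy with a CFRelease once you are done reading the pixel data:

CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image2.CGImage));
const UInt8 *pixels = CFDataGetBytePtr(imageData);
// ... scan the pixel data here ...
CFRelease(imageData); // balances CGDataProviderCopyData and avoids the leak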




Answer 3:


There is no way you should be doing this with a timer (or no reason I can think of anyway!).

How big are your images? It should be viable to process the entire image in a single loop reasonably quickly.
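
For what it's worth, here is a minimal sketch of how the whole search might be done in one pass. The helper name is hypothetical, and it assumes 4 bytes per pixel with the alpha byte first (as in the first answer), so verify the layout for your particular image.

// Hypothetical helper: returns the leftmost near-black pixel, or (-1, -1) if none is found.
- (CGPoint)leftmostBlackPixelInImage:(UIImage *)image {
    CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8 *pixels = CFDataGetBytePtr(imageData);
    int width = (int)CGImageGetWidth(image.CGImage);
    int height = (int)CGImageGetHeight(image.CGImage);
    int bytesPerPixel = 4;        // assumption; check CGImageGetBitsPerPixel(image.CGImage) / 8
    UInt8 blackThreshold = 10;    // "close to black" tolerance
    CGPoint result = CGPointMake(-1, -1);
    for (int x = 0; x < width && result.x < 0; x++) {   // scan columns left to right
        for (int y = 0; y < height; y++) {
            int i = (x + y * width) * bytesPerPixel;
            if (pixels[i + 1] < blackThreshold &&        // red (assuming alpha comes first)
                pixels[i + 2] < blackThreshold &&        // green
                pixels[i + 3] < blackThreshold) {        // blue
                result = CGPointMake(x, y);
                break;
            }
        }
    }
    CFRelease(imageData);
    return result;
}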



Source: https://stackoverflow.com/questions/8955110/detect-black-pixel-in-image-ios
