iOS how to calculate number of pixels/area enclosed by a curve?


Question


I have an arbitrarily shaped curve enclosing some area. I would like to approximate the number of pixels that the curve encloses on an iPhone/iPad screen. How can I do so?

  • The curve is defined as a succession of x/y point coordinates.
  • The curve is closed.
  • The curve is drawn by the user's touches (in the touchesMoved method), and I have no advance knowledge of what it looks like (collecting the points is sketched just below).
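
For reference, the points are gathered roughly like this (a simplified sketch; curvePoints is an NSMutableArray property on the view holding boxed CGPoints):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Start a fresh curve when the finger goes down.
    self.curvePoints = [NSMutableArray array];
    CGPoint p = [[touches anyObject] locationInView:self];
    [self.curvePoints addObject:[NSValue valueWithCGPoint:p]];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // Append each successive touch location as the finger moves.
    CGPoint p = [[touches anyObject] locationInView:self];
    [self.curvePoints addObject:[NSValue valueWithCGPoint:p]];
}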

I was thinking of somehow filling the closed curve with color, then counting the pixels of that color in a screenshot of the screen. This means I need to know how to programmatically fill a closed curve with color.

Is there some other way that I'm not thinking of?

Thank you!


Answer 1:


Let's do this by creating a Quartz path enclosing your curve. Then we'll create a bitmap context and fill the path in that context. Then we can examine the bitmap and count the pixels that were filled. We'll wrap this all in a convenient function:

static double areaOfCurveWithPoints(const CGPoint *points, size_t count) {

First we need to create the path:

    CGPathRef path = createClosedPathWithPoints(points, count);

Then we need to get the bounding box of the path. CGPoint coordinates don't have to be integers, but a bitmap has to have integer dimensions, so we'll get an integral bounding box at least as big as the path's bounding box:

    CGRect frame = integralFrameForPath(path);

We also need to decide how wide (in bytes) to make the bitmap:

    size_t bytesPerRow = bytesPerRowForWidth(frame.size.width);

Now we can create the bitmap:

    CGContextRef gc = createBitmapContextWithFrame(frame, bytesPerRow);

The bitmap is filled with black when it's created. We'll fill the path with white:

    CGContextSetFillColorWithColor(gc, [UIColor whiteColor].CGColor);
    CGContextAddPath(gc, path);
    CGContextFillPath(gc);

Now we're done with the path so we can release it:

    CGPathRelease(path);

Next we'll compute the area that was filled:

    double area = areaFilledInBitmapContext(gc);

Now we're done with the bitmap context, so we can release it:

    CGContextRelease(gc);

Finally, we can return the area we computed:

    return area;
}

Well, that was easy! But we have to write all those helper functions. Let's start at the top. Creating the path is trivial:

static CGPathRef createClosedPathWithPoints(const CGPoint *points, size_t count) {
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddLines(path, NULL, points, count);
    CGPathCloseSubpath(path);
    return path;
}

Getting the integral bounding box of the path is also trivial:

static CGRect integralFrameForPath(CGPathRef path) {
    CGRect frame = CGPathGetBoundingBox(path);
    return CGRectIntegral(frame);
}

To choose the bytes per row of the bitmap, we could just use the width of the path's bounding box. But I think Quartz likes bitmaps whose rows are a multiple of a nice power of two. I haven't done any testing on this, so you might want to experiment. For now, we'll round the width up to the nearest multiple of 64 (for example, a width of 100 rounds up to 128):

static size_t bytesPerRowForWidth(CGFloat width) {
    static const size_t kFactor = 64;
    // Round up to a multiple of kFactor, which must be a power of 2.
    return ((size_t)width + (kFactor - 1)) & ~(kFactor - 1);
}

We create the bitmap context with the computed sizes. We also need to translate the origin of the coordinate system. Why? Because the origin of the path's bounding box might not be at (0, 0).

static CGContextRef createBitmapContextWithFrame(CGRect frame, size_t bytesPerRow) {
    CGColorSpaceRef grayscale = CGColorSpaceCreateDeviceGray();
    CGContextRef gc = CGBitmapContextCreate(NULL, frame.size.width, frame.size.height, 8, bytesPerRow, grayscale, kCGImageAlphaNone);
    CGColorSpaceRelease(grayscale);
    CGContextTranslateCTM(gc, -frame.origin.x, -frame.origin.y);
    return gc;
}

Finally, we need to write the helper that actually counts the filled pixels. We have to decide how we want to count pixels. Each pixel is represented by one unsigned 8-bit integer. A black pixel is 0. A white pixel is 255. The numbers in between are shades of gray. When Quartz fills the path, it anti-aliases the edges using gray pixels. So we have to decide how to count those gray pixels.

One way is to define a threshold, like 128. Any pixel at or above the threshold counts as filled; the rest count as unfilled.

Another way is to count the gray pixels as partially filled, and add up that partial filling. So two exactly half-filled pixels get combined and count as a single, entirely-filled pixel. Let's do it that way:

static double areaFilledInBitmapContext(CGContextRef gc) {
    size_t width = CGBitmapContextGetWidth(gc);
    size_t height = CGBitmapContextGetHeight(gc);
    size_t stride = CGBitmapContextGetBytesPerRow(gc);
    uint8_t *pixels = CGBitmapContextGetData(gc);
    uint64_t coverage = 0;
    for (size_t y = 0; y < height; ++y) {
        for (size_t x = 0; x < width; ++x) {
            coverage += pixels[y * stride + x];
        }
    }
    return (double)coverage / UINT8_MAX;
}
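
If you'd rather use the threshold approach described above, a sketch might look like this (128 is just the example cutoff mentioned earlier; adjust it to taste):

static double areaFilledInBitmapContextUsingThreshold(CGContextRef gc) {
    static const uint8_t kThreshold = 128; // pixels at or above this count as filled
    size_t width = CGBitmapContextGetWidth(gc);
    size_t height = CGBitmapContextGetHeight(gc);
    size_t stride = CGBitmapContextGetBytesPerRow(gc);
    const uint8_t *pixels = CGBitmapContextGetData(gc);
    uint64_t filledCount = 0;
    for (size_t y = 0; y < height; ++y) {
        for (size_t x = 0; x < width; ++x) {
            if (pixels[y * stride + x] >= kThreshold) {
                ++filledCount;
            }
        }
    }
    return (double)filledCount;
}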

You can find all of the code bundled up in this gist.
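
As a quick sanity check, you could call the function with the corners of a 100×100 square; the reported area should be 10,000 pixels, or very close to it once anti-aliasing along the edges is accounted for:

    CGPoint square[] = { {10, 10}, {110, 10}, {110, 110}, {10, 110} };
    double area = areaOfCurveWithPoints(square, 4);
    NSLog(@"enclosed area = %f pixels", area); // expect roughly 10000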




Answer 2:


I would grab the drawing as a CGImage ...

CGImageRef image = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());

Then, as recommended above, use a "flood fill" approach to count the pixels. (Google "flood fill".)
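
A stack-based flood fill that counts interior pixels might be sketched like this. It assumes you have a grayscale pixel buffer in which the curve's pixels are nonzero, a seed point known to lie inside the curve, and a zero-initialized visited buffer of the same size:

#include <stdint.h>
#include <stdlib.h>

// Counts the background (zero) pixels reachable from the seed point without
// crossing the curve's (nonzero) pixels. `visited` must be a zeroed
// width*height buffer supplied by the caller.
static uint64_t floodFillCount(const uint8_t *pixels, size_t width, size_t height,
                               size_t stride, size_t seedX, size_t seedY,
                               uint8_t *visited) {
    if (seedX >= width || seedY >= height || pixels[seedY * stride + seedX] != 0) {
        return 0; // the seed must be an interior (background) pixel
    }
    // Each pixel is pushed at most once, so width*height (x, y) pairs suffice.
    size_t *stack = malloc(width * height * 2 * sizeof *stack);
    if (stack == NULL) {
        return 0;
    }
    size_t top = 0;
    stack[top++] = seedX;
    stack[top++] = seedY;
    visited[seedY * width + seedX] = 1;
    uint64_t count = 0;
    while (top > 0) {
        size_t y = stack[--top];
        size_t x = stack[--top];
        ++count;
        // Look at the four neighbours (4-connected fill) and queue any
        // background pixel we have not seen yet, marking it visited as we go.
        const long dx[4] = { -1, 1, 0, 0 };
        const long dy[4] = { 0, 0, -1, 1 };
        for (int i = 0; i < 4; ++i) {
            long nx = (long)x + dx[i];
            long ny = (long)y + dy[i];
            if (nx < 0 || ny < 0 || nx >= (long)width || ny >= (long)height) continue;
            if (visited[ny * width + nx]) continue;      // already queued or counted
            if (pixels[ny * stride + nx] != 0) continue; // hit the curve itself
            visited[ny * width + nx] = 1;
            stack[top++] = (size_t)nx;
            stack[top++] = (size_t)ny;
        }
    }
    free(stack);
    return count;
}

Note that this counts only the pixels strictly inside the curve (the curve's own pixels are excluded), so the result will differ slightly from the coverage-based figure in the first answer.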



Source: https://stackoverflow.com/questions/14220719/ios-how-to-calculate-number-of-pixels-area-enclosed-by-a-curve
