Alpha Detection in Layer OK on Simulator, not iPhone


Question


First, check out this very handy extension to CALayer from elsewhere on SO. It helps you determine if a point in a layer's contents-assigned CGImageRef is or isn't transparent.

n.b.: There is no guarantee that a layer's contents can be represented as, or will respond like, a CGImageRef. (This has implications for broader use of the extension referenced above, granted.) In my case, however, I know that the layers I'm testing have contents that were assigned a CGImageRef. (Hopefully this can't change out from under me after assignment! Plus I notice that contents is retained.)
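
If you did want to guard against that, one small check (my own addition, not part of the linked extension) is to compare the contents object's CFTypeID against CGImageGetTypeID() before casting:

// Hypothetical guard, not part of the original extension: 'layer' is whichever
// CALayer you are about to hit-test. CALayer's contents is typed id, so we can
// verify it really is a CGImage before casting.
id contents = layer.contents;
if (contents != nil && CFGetTypeID((CFTypeRef)contents) == CGImageGetTypeID()) {
    CGImageRef image = (CGImageRef)contents;
    // Safe to run the alpha test against 'image' here.
}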

OK, back to the problem at hand. Here's how I'm using the extension. For starters, I've renamed the selector from containsPoint: to containsNonTransparentPoint:, since I need to keep the original method around.
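
For reference, here's roughly what the renamed method looks like before the Retina trouble enters the picture (my reconstruction from the linked answer, so treat it as a sketch rather than the canonical code):

// Reconstructed sketch of the pre-fix extension. It draws the layer's image
// into a 1x1, alpha-only bitmap context, offset so the point of interest lands
// on that single pixel, then reads the alpha value back out.
@implementation CALayer (Extensions)

- (BOOL)containsNonTransparentPoint:(CGPoint)point {
    if (!CGRectContainsPoint(self.bounds, point))
        return NO;

    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1,
        NULL, kCGImageAlphaOnly);
    CGImageRef image = (CGImageRef)self.contents;
    CGContextDrawImage(context, CGRectMake(-point.x, -point.y,
        CGImageGetWidth(image), CGImageGetHeight(image)), image);
    CGContextRelease(context);

    return (pixel[0] / 255.0 > 0.01);
}

@end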

Now, I have a UIImageView subclass that uses seven CALayer objects. These are used for opacity-based animations (pulsing/glowing effects and on/off states). Each of those seven layers has a known CGImageRef in its contents that effectively "covers" (air quotes) one part of the entire view with its own swath of color. The rest of each image in its respective layer is transparent.

In the subclass, I register for single tap gestures. When one arrives, I walk through my layers to see which one was effectively tapped (that is, which one has a non-transparent point where I tapped, first one found wins) and then I can do whatever needs doing.

Here's how I handle the gesture:

- (IBAction)handleSingleTap:(UIGestureRecognizer *)sender {
    CGPoint tapPoint = [sender locationInView:sender.view];

    // Flip y so 0,0 is at lower left. (Required by layer method below.)
    tapPoint.y = sender.view.bounds.size.height - tapPoint.y;

    // Figure out which layer was effectively tapped. First match wins.
    for (CALayer *layer in myLayers) {
        if ([layer containsNonTransparentPoint:tapPoint]) {
            NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);

            // We got our layer! Do something useful with it.
            return;
        }
    }
}

The good news? All of this works beautifully on the iPhone Simulator with iOS 4.3.2. (FWIW, I'm on Lion running Xcode 4.1.)

However, on my iPhone 4 (with iOS 4.3.3), it doesn't even come close! None of my taps seem to match up with any of the layers I'd expect them to.

Even if I try the suggestion to use CGContextSetBlendMode when drawing into the 1x1 pixel context, no dice.

I am hoping it's pilot error, but I have yet to figure out what the disparity is. The mismatched taps seem to follow some pattern, but not one I can discern yet.

Perhaps there's a data boundary issue. Perhaps I have to do something other than flip the y coordinate to the lower-left of the image. Just not sure yet.

If anyone can please shed some light on what might be amiss, I would be most appreciative!

UPDATE, 22 September 2011: First ah-ha moment acquired! The problem isn't Simulator-vs-iPhone. It's Retina vs. Non-Retina! The same symptoms occur in the Simulator when using the Retina version. Perhaps the solution centers around scaling (CTM?) in some way/shape/form. The Quartz 2D Programming Guide also advises that "iOS applications should use UIGraphicsBeginImageContextWithOptions." I feel like I'm very close to the solution here!


Answer 1:


OK! First, the problem wasn't Simulator-vs-iPhone. Rather, it was Retina vs. Non-Retina. The same symptoms occur in the Simulator when using the Retina version. Right away, one starts to think the solution has to do with scaling.

A very helpful post over on the Apple Dev Quartz 2D forum (along similar "be mindful of scaling" lines) steered me toward a solution. Now, I'm the first to admit, this solution is NOT pretty, but it does work for Retina and Non-Retina cases.

With that, here's the revised code for the aforementioned CALayer extension:

//
// Checks image at a point (and at a particular scale factor) for transparency.
// The point must be expressed with its origin at the lower left.
//
BOOL ImagePointIsTransparent(CGImageRef image, CGFloat scale, CGPoint point) {
    unsigned char pixel[1] = {0};

    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1,
        NULL, kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(-point.x, -point.y,
        CGImageGetWidth(image)/scale, CGImageGetHeight(image)/scale), image);

    CGContextRelease(context);
    CGFloat alpha = pixel[0]/255.0;
    return (alpha < 0.01);
}

@implementation CALayer (Extensions)

- (BOOL)containsNonTransparentPoint:(CGPoint)point scale:(CGFloat)scale {
    if (CGRectContainsPoint(self.bounds, point)) {
        if (!ImagePointIsTransparent((CGImageRef)self.contents, scale, point))
            return YES;
    }
    return NO;
}

@end

In short, we need to know about the scale. If we divide the image width and height by that scale, ta-dah, the hit test now works on Retina and Non-Retina devices!
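
For completeness, the call site only needs to pass the scale along. Here's a sketch against the earlier gesture handler (myLayers and tapPoint are as before; [UIScreen mainScreen].scale is 2.0 on Retina hardware and 1.0 otherwise):

// Hypothetical call site for the revised method: hand it the screen's scale
// factor so the same code works on Retina and Non-Retina devices.
CGFloat scale = [[UIScreen mainScreen] scale];
for (CALayer *layer in myLayers) {
    if ([layer containsNonTransparentPoint:tapPoint scale:scale]) {
        NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);
        break;  // First match wins, as before.
    }
}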

What I don't like about this is the mess I've had to make of that poor selector, now called containsNonTransparentPoint:scale:. As mentioned in the question, there is never any guarantee about what a layer's contents will contain. In my case I am taking care to only use this on layers with a CGImageRef in there, but this won't fly in a more general/reusable case.

All this makes me wonder if CALayer is not the best place for this particular extension after all, at least in this new incarnation. Perhaps CGImage, with some layer smarts thrown in, would be cleaner. Imagine doing a hit test on a CGImage but returning the name of the first layer that had non-transparent content at that point. There's still the problem of not knowing which layers have CGImageRefs in them, so some hinting might be required. (Left as an exercise for yours truly and the reader!)

UPDATE: After some discussion with a developer at Apple, messing with layers in this fashion is in fact ill-advised. Contrary to what I previously learned (incorrectly?), multiple UIImageViews encapsulated within a UIView are the way to go here. (I always remember learning that you want to keep your views to a minimum. Perhaps in this case it isn't as big a deal.) Nevertheless, I'll keep this answer here for now, but will not mark it as correct. Once I try out and verify the other technique, I will share that here!
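
I haven't tried it yet, but as a rough, untested sketch of that direction (SwatchImageView and its wiring are my own invention, and ImagePointIsTransparent is the helper from above), per-pixel hit testing on a UIImageView might look something like this:

// Untested sketch: one UIImageView subclass per color swath, all inside a
// plain container UIView. Each image view claims touches only over its opaque
// pixels by overriding pointInside:withEvent:, so UIKit's normal hit testing
// picks the right swath and no CALayer poking is needed.
@interface SwatchImageView : UIImageView
@end

@implementation SwatchImageView

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    if (![super pointInside:point withEvent:event])
        return NO;

    // Flip y so the origin is at the lower left, as the helper expects.
    CGPoint flipped = CGPointMake(point.x, self.bounds.size.height - point.y);
    return !ImagePointIsTransparent(self.image.CGImage, self.image.scale, flipped);
}

@end

Remember that UIImageView sets userInteractionEnabled to NO by default, so each swath view needs that turned back on before any touches (or hitTest:withEvent: calls) will reach it.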



Source: https://stackoverflow.com/questions/7506248/alpha-detection-in-layer-ok-on-simulator-not-iphone
