Question
I am working with iOS 4.2 to add zoom and pan to my application. I have implemented a UIPinchGestureRecognizer and a UIPanGestureRecognizer, but only one of them seems to recognize a gesture at a time. In particular, the pan recognizer reacts only while a single finger is down, and the pinch recognizer takes over once a second finger is present. That is acceptable, but it has side effects that I think degrade the user experience.
When you put two fingers down and then move one of them, the image expands (zooms in) as it should, but the pixels that were under your fingers no longer stay under them. The image scales around its own center rather than around the midpoint between the two fingers, and that midpoint is itself moving. I want the movement of that midpoint to drive the panning of the image as a whole.
Do nearly all iOS applications behave this way, zooming in or out around the center of the image rather than keeping the pixels under the fingers tracking the fingers?
It seems to me that creating a custom gesture recognizer is the right design approach to this problem, but it also seems likely that someone has already built such a recognizer and made it freely available for download and use. Is there such a UIGestureRecognizer?
Answer 1:
Sorry, I'm in a rush, but this is the code I used for one of my demo apps. It can pinch-zoom and pan at the same time without using a scroll view.
Don't forget to conform to the UIGestureRecognizerDelegate protocol.
If you're not able to get both pinch and pan at the same time, it's probably because you're missing this delegate method:
-(BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    return YES;
}
Here is the full source code:
#import "ViewController.h"
#import <QuartzCore/QuartzCore.h>
@interface ViewController ()
@end
@implementation ViewController
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    isEditing = false;

    photoView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)];
    [photoView setImage:[UIImage imageNamed:@"photo.png"]];
    photoView.hidden = YES;

    maskView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)];
    [maskView setImage:[UIImage imageNamed:@"maskguide.png"]];
    maskView.hidden = YES;

    displayImage = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)];

    // Attach both recognizers to the same view and make this controller their
    // delegate so they can recognize simultaneously.
    UIPanGestureRecognizer *panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
    UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
    [panGesture setDelegate:self];
    [pinchGesture setDelegate:self];
    [photoView addGestureRecognizer:panGesture];
    [photoView addGestureRecognizer:pinchGesture];
    [photoView setUserInteractionEnabled:YES];
    [panGesture release];
    [pinchGesture release];

    btnEdit = [[UIButton alloc] initWithFrame:CGRectMake(60, 400, 200, 50)];
    [btnEdit setBackgroundColor:[UIColor blackColor]];
    [btnEdit setTitle:@"Start Editing" forState:UIControlStateNormal];
    [btnEdit addTarget:self action:@selector(toggleEditing) forControlEvents:UIControlEventTouchUpInside];

    [[self view] addSubview:displayImage];
    [[self view] addSubview:photoView];
    [[self view] addSubview:maskView];
    [[self view] addSubview:btnEdit];

    [self updateMaskedImage];
}
- (void)viewDidUnload
{
    [super viewDidUnload];
    // Release any retained subviews of the main view.
}

- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
    return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
}
-(void)dealloc
{
    // Release the views created in viewDidLoad (manual reference counting).
    [photoView release];
    [maskView release];
    [displayImage release];
    [btnEdit release];
    [super dealloc];
}
#pragma mark -
#pragma mark Update Masked Image Method
#pragma mark -

-(void)updateMaskedImage
{
    maskView.hidden = YES;
    UIImage *finalImage = [self maskImage:[self captureView:self.view]
                                 withMask:[UIImage imageNamed:@"mask.png"]];
    maskView.hidden = NO;
    //UIImage *finalImage = [self maskImage:photoView.image withMask:[UIImage imageNamed:@"mask.png"]];
    [displayImage setImage:finalImage];
}
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    UIImage *result = [UIImage imageWithCGImage:masked];

    // Release the Core Graphics objects created above so they do not leak.
    CGImageRelease(mask);
    CGImageRelease(masked);

    return result;
}
#pragma mark -
#pragma mark Touches Began
#pragma mark -

// adjusts the editing flag to make drag and drop work
-(void)toggleEditing
{
    if (!isEditing)
    {
        isEditing = true;
        NSLog(@"editing...");
        [btnEdit setTitle:@"Stop Editing" forState:UIControlStateNormal];
        displayImage.hidden = YES;
        photoView.hidden = NO;
        maskView.hidden = NO;
    }
    else
    {
        isEditing = false;
        [self updateMaskedImage];
        NSLog(@"stopped editing");
        [btnEdit setTitle:@"Start Editing" forState:UIControlStateNormal];
        displayImage.hidden = NO;
        photoView.hidden = YES;
        maskView.hidden = YES;
    }
}
/*
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (isEditing)
    {
        UITouch *finger = [touches anyObject];
        CGPoint currentPosition = [finger locationInView:self.view];
        //[maskView setCenter:currentPosition];
        //[photoView setCenter:currentPosition];
        if ([touches count] == 1)
        {
            [photoView setCenter:currentPosition];
        }
        else if ([touches count] == 2)
        {
        }
    }
}
*/
-(void)handlePan:(UIPanGestureRecognizer *)recognizer
{
    // Move the view by the gesture's translation, then reset the translation
    // so the next callback reports only the incremental movement.
    CGPoint translation = [recognizer translationInView:self.view];
    recognizer.view.center = CGPointMake(recognizer.view.center.x + translation.x,
                                         recognizer.view.center.y + translation.y);
    [recognizer setTranslation:CGPointMake(0, 0) inView:self.view];
}

-(void)handlePinch:(UIPinchGestureRecognizer *)recognizer
{
    // Apply the incremental scale, then reset it so it is not compounded.
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1;
}
-(BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    return YES;
}
#pragma mark -
#pragma mark Capture Screen Function
#pragma mark -

- (UIImage *)captureView:(UIView *)yourView
{
    UIGraphicsBeginImageContextWithOptions(yourView.bounds.size, yourView.opaque, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [yourView.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
#pragma mark -
@end
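Note that the handlePinch: above scales recognizer.view about its own center, which is exactly the behavior the question complains about. As a rough sketch (not part of the original demo, and assuming the view's layer still has its default anchor point in the middle), the handler could instead scale about the pinch location by translating to that point, scaling, and translating back:

-(void)handlePinch:(UIPinchGestureRecognizer *)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateBegan ||
        recognizer.state == UIGestureRecognizerStateChanged)
    {
        UIView *view = recognizer.view;

        // Pinch location in the view's own coordinate system, measured as an
        // offset from the view's center (the default anchor point).
        CGPoint pinchPoint = [recognizer locationInView:view];
        CGFloat offsetX = pinchPoint.x - CGRectGetMidX(view.bounds);
        CGFloat offsetY = pinchPoint.y - CGRectGetMidY(view.bounds);

        // Translate to the pinch point, scale, and translate back, so the
        // pixels under the fingers stay (approximately) under the fingers.
        CGAffineTransform transform = view.transform;
        transform = CGAffineTransformTranslate(transform, offsetX, offsetY);
        transform = CGAffineTransformScale(transform, recognizer.scale, recognizer.scale);
        transform = CGAffineTransformTranslate(transform, -offsetX, -offsetY);
        view.transform = transform;

        recognizer.scale = 1;
    }
}

The simultaneous pan recognizer still handles the movement of the fingers' midpoint.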
Answer 2:
Since no one offered a better solution that achieved the desired result, I created the custom gesture recognizer myself. Below are the key code fragments that let the recognizer tell the view where to reposition itself and what its new scale should be, with the centroid of the two fingers as the center of the pan and zoom effects, so that the pixels under the fingers stay under the fingers at all times (unless the fingers appear to rotate, which is not supported, and there is nothing I can do to stop the user from attempting such a gesture). This recognizer pans and zooms simultaneously with two fingers. I still need to add support for one-finger panning, for example when one of the two fingers is lifted.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // We can only process if we have two fingers down...
    if (FirstFinger == nil || SecondFinger == nil)
        return;

    // We do not attempt to determine if the first finger, second finger, or
    // both fingers are the reason for this method call. For this reason, we
    // do not know if either is stale or updated, and thus we cannot rely
    // upon the UITouch's previousLocationInView method. Therefore, we need to
    // cache the latest UITouch's locationInView information each pass.

    // Break down the previous finger coordinates...
    float A0x = PreviousFirstFinger.x;
    float A0y = PreviousFirstFinger.y;
    float A1x = PreviousSecondFinger.x;
    float A1y = PreviousSecondFinger.y;

    // Update our cache with the current fingers for the next pass through here...
    PreviousFirstFinger = [FirstFinger locationInView:nil];
    PreviousSecondFinger = [SecondFinger locationInView:nil];

    // Break down the current finger coordinates...
    float B0x = PreviousFirstFinger.x;
    float B0y = PreviousFirstFinger.y;
    float B1x = PreviousSecondFinger.x;
    float B1y = PreviousSecondFinger.y;

    // Calculate the zoom resulting from the two fingers moving toward or away from each other...
    float OldScale = Scale;
    Scale *= sqrt((B0x-B1x)*(B0x-B1x) + (B0y-B1y)*(B0y-B1y)) /
             sqrt((A0x-A1x)*(A0x-A1x) + (A0y-A1y)*(A0y-A1y));

    // Calculate the old and new centroids so that we can compare the centroid's movement...
    CGPoint OldCentroid = { (A0x + A1x)/2, (A0y + A1y)/2 };
    CGPoint NewCentroid = { (B0x + B1x)/2, (B0y + B1y)/2 };

    // Calculate the pan values to apply to the view so that the combination of zoom and pan
    // appears to apply to the centroid rather than the center of the view...
    Center.x = NewCentroid.x + (Scale/OldScale)*(self.view.center.x - OldCentroid.x);
    Center.y = NewCentroid.y + (Scale/OldScale)*(self.view.center.y - OldCentroid.y);
}
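For context, the fragment above assumes a UIGestureRecognizer subclass with ivars and properties along the following lines. This is only a sketch of how they might be declared and fed, not the original class; the names FirstFinger, SecondFinger, PreviousFirstFinger, PreviousSecondFinger, Scale, and Center are taken from the fragment, everything else is assumed:

// Hypothetical declaration matching the ivars the fragment above relies on.
#import <UIKit/UIKit.h>
#import <UIKit/UIGestureRecognizerSubclass.h>  // needed to set self.state in a subclass

@interface PixelTrackGestureRecognizer : UIGestureRecognizer
{
    UITouch *FirstFinger;
    UITouch *SecondFinger;
    CGPoint PreviousFirstFinger;
    CGPoint PreviousSecondFinger;
}
@property (nonatomic, assign) float Scale;
@property (nonatomic, assign) CGPoint Center;
@end

@implementation PixelTrackGestureRecognizer

@synthesize Scale, Center;

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    for (UITouch *touch in touches)
    {
        if (FirstFinger == nil)
        {
            FirstFinger = touch;   // UITouch objects are never retained
            PreviousFirstFinger = [FirstFinger locationInView:nil];
        }
        else if (SecondFinger == nil)
        {
            SecondFinger = touch;
            PreviousSecondFinger = [SecondFinger locationInView:nil];

            // Both fingers are down: seed the outputs and start recognizing.
            Scale = 1.0;
            Center = self.view.center;
            self.state = UIGestureRecognizerStateBegan;
        }
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    if ([touches containsObject:FirstFinger] || [touches containsObject:SecondFinger])
    {
        FirstFinger = nil;
        SecondFinger = nil;
        self.state = UIGestureRecognizerStateEnded;
    }
}

@end

For the action method to fire continuously, the touchesMoved: above would also need to set self.state to UIGestureRecognizerStateChanged after updating Scale and Center, and touchesCancelled:withEvent: should reset the fingers the same way touchesEnded:withEvent: does.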
The view controller handles the recognizer's events by assigning the new scale and center to the view in question. I noticed that other gesture recognizers tend to let the controller do some of the math, but here I tried to do all of the math inside the recognizer.
-(void)handlePixelTrack:(PixelTrackGestureRecognizer *)sender
{
    sender.view.center = sender.Center;
    sender.view.transform = CGAffineTransformMakeScale(sender.Scale, sender.Scale);
}
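Wiring the recognizer up looks the same as for the built-in recognizers; a sketch, where photoView stands in for whatever view is being panned and zoomed:

PixelTrackGestureRecognizer *pixelTrack =
    [[PixelTrackGestureRecognizer alloc] initWithTarget:self
                                                 action:@selector(handlePixelTrack:)];
[photoView addGestureRecognizer:pixelTrack];
[photoView setUserInteractionEnabled:YES];
[pixelTrack release]; // assuming manual reference counting, as in the other answers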
Answer 3:
The easier solution is to put your view inside a UIScrollView; then you get pinch and pan for free. Otherwise, you can set both your pan and pinch gesture recognizers' delegates to self and return YES from shouldRecognizeSimultaneouslyWithGestureRecognizer:. As for zooming around the user's fingers, I never solved that one completely, but it involves manipulating the anchorPoint of your view's layer before changing its scale (I think).
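For reference, the anchorPoint trick usually looks something like the following sketch (a common pattern, not taken from the answers above; it requires QuartzCore for the layer's anchorPoint property): at the start of the gesture, move the layer's anchor point to the touch location and compensate the view's center so the view does not jump, and subsequent scaling then pivots around the fingers.

// Requires: #import <QuartzCore/QuartzCore.h>
// Call from the pinch (and pan) handlers when the gesture begins.
- (void)adjustAnchorPointForGestureRecognizer:(UIGestureRecognizer *)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateBegan)
    {
        UIView *view = recognizer.view;
        CGPoint locationInView = [recognizer locationInView:view];
        CGPoint locationInSuperview = [recognizer locationInView:view.superview];

        // anchorPoint is in unit coordinates (0..1) relative to the layer's bounds.
        view.layer.anchorPoint = CGPointMake(locationInView.x / view.bounds.size.width,
                                             locationInView.y / view.bounds.size.height);
        // Moving the anchor point shifts the layer; reset center so the view stays put.
        view.center = locationInSuperview;
    }
}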
Source: https://stackoverflow.com/questions/11855645/is-there-a-gesture-recognizer-that-handles-both-pinch-and-pan-together