I'm making a magnifier app: the user touches the screen and moves a finger, and a magnifier follows the finger's path. I implemented it by taking a screenshot …
No. In iOS 6, renderInContext: is the only way. It is slow, and it runs on the CPU.
[view.layer renderInContext:UIGraphicsGetCurrentContext()];

Use view.layer, which captures the final frame of any running animation, or view.layer.presentationLayer, which captures the current frame.

iOS 7 adds three snapshotting APIs:

UIView *snapshot = [view snapshotViewAfterScreenUpdates:YES];

The snapshot's contents are immutable, so this is not good if you want to apply an effect.

UIView *snapshot = [view resizableSnapshotViewFromRect:rect afterScreenUpdates:YES withCapInsets:edgeInsets];

Same as snapshotViewAfterScreenUpdates: but with resizable cap insets. The content is also immutable.

[view drawViewHierarchyInRect:rect afterScreenUpdates:YES];

Draws the visible view hierarchy into the current context. Faster than renderInContext:; see the sketch after this list.

See WWDC 2013 Session 226, "Implementing Engaging UI on iOS", about the new snapshotting APIs.
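For a magnifier you generally want a UIImage you can crop and scale, not a snapshot view. Here is a minimal sketch of that (imageOfView: is a name made up for this example, not a UIKit API); it prefers the faster drawViewHierarchyInRect: when available and falls back to renderInContext: on iOS 6:

#import <UIKit/UIKit.h>

// Minimal sketch: render `view` into a UIImage at screen scale.
- (UIImage *)imageOfView:(UIView *)view {
    // 0.0 scale = use the main screen's scale (retina-aware).
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    if ([view respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
        // iOS 7+: faster than renderInContext:.
        [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:NO];
    } else {
        // iOS 6 fallback: slow, CPU-bound.
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}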
If it is any help, here is some code that discards capture attempts while one is still running. It throttles block execution to one at a time and discards the others. From this SO answer.
// Create these once (e.g. in -init):
dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
dispatch_queue_t renderQueue = dispatch_queue_create("com.throttling.queue", NULL);

- (void)capture {
    // Try to take the semaphore without waiting; non-zero means a
    // capture is already in flight, so this attempt is discarded.
    if (dispatch_semaphore_wait(semaphore, DISPATCH_TIME_NOW) == 0) {
        dispatch_async(renderQueue, ^{
            // capture
            dispatch_semaphore_signal(semaphore); // allow the next attempt
        });
    }
}
What is this doing?
DISPATCH_TIME_NOW means a zero timeout, so dispatch_semaphore_wait returns non-zero immediately when the semaphore is already taken (red light), and the body of the if is skipped.
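As a usage sketch (not from the original answer), the magnifier's touch handling could simply call capture on every move and let the semaphore drop the excess; touchesMoved:withEvent: is standard UIResponder API, and capture is the method above:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // Touch events arrive faster than captures complete; extra calls
    // return immediately because the semaphore is already taken.
    [self capture];
}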