quartz-2d

iPhone: Blur UIImage

别等时光非礼了梦想, submitted on 2019-11-28 21:41:17
In my iPhone application I have a black-and-white UIImage. I need to blur that image (a Gaussian blur would do). The iPhone clearly knows how to blur images, since it does so when it draws shadows. However, I did not find anything related in the API. Do I have to do the blurring by hand, without hardware acceleration? dom: Try this (found here): @interface UIImage (ImageBlur) - (UIImage *)imageWithGaussianBlur; @end @implementation UIImage (ImageBlur) - (UIImage *)imageWithGaussianBlur { float weight[5] = {0.2270270270, 0.1945945946, 0.1216216216, 0.0540540541, 0.0162162162}; // Blur horizontally

Rotate CGImage taken from video frame

和自甴很熟, submitted on 2019-11-28 19:32:31
This is Apple's code (from Technical Q&A QA1702) for getting a UIImage from a video buffer. Unfortunately, the image returned is rotated 90 degrees. How do I edit this so that the image returned is correctly oriented? - (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer { CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); CVPixelBufferLockBaseAddress(imageBuffer, 0); void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer); size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); size_t width = CVPixelBufferGetWidth(imageBuffer); size_t height

How to set up a user Quartz2D coordinate system with scaling that avoids fuzzy drawing?

柔情痞子, submitted on 2019-11-28 18:03:44
This topic has been scratched once or twice, but I am still puzzled, and Google was not friendly either. Since Quartz allows for arbitrary coordinate systems using affine transforms, I want to be able to draw things such as floorplans using real-life coordinates, e.g. feet. So basically, for the sake of an example, I want to scale the view so that when I draw a 10x10 rectangle (think a 10-inch box, for example), I get a 60x60-pixel rectangle. It works, except the rectangle I get is quite fuzzy. Another question here got an answer that explains why. However, I'm not sure I understood that reason

How to implement an alpha gradient on an image?

喜夏-厌秋, submitted on 2019-11-28 17:58:40
I want to implement an alpha gradient on an image, from 0.5 alpha at the top of the image to 0.0 at the bottom. Any advice, tutorial, or link is welcome. You can use CGImageCreateWithMask to apply a masking image to it. You could generate an appropriate mask simply enough by drawing to a greyscale or alpha-only CGBitmapContext with CGContextDrawLinearGradient . If it's being displayed as the content of a CALayer, you could apply an appropriate masking layer to the parent layer's mask property. You could use a CAGradientLayer with appropriate colors to create this mask. You can draw the image to a

What's the difference between Quartz Core, Core Graphics and Quartz 2D?

浪子不回头ぞ, submitted on 2019-11-28 15:20:44
I wonder if someone can distinguish precisely between these? To my understanding, Core Graphics is just a "framework package" which contains Quartz Core and Quartz 2D. But I'm not sure whether Quartz 2D actually is Quartz Core. Maybe someone can draw some lines there? What makes up the differences between these? When looking at the documentation, I see Quartz Core lists only the Core Animation stuff. So Quartz Core == Core Animation? Brad Larson From the Quartz 2D Programming Guide : The Quartz 2D API is part of the Core Graphics framework, so you may see Quartz referred to as

Undo/redo issues with CGLayer

a 夏天, submitted on 2019-11-28 13:08:00
I am working on undo/redo operations with a CGLayer. I have tried some code but cannot get it working, and I don't know where I am going wrong. Below is the code I have written; this is my drawRect: method: - (void)drawRect:(CGRect)rect { m_backgroundImage = [UIImage imageNamed:@"bridge.jpg"]; CGPoint drawingTargetPoint = CGPointMake(0,0); [m_backgroundImage drawAtPoint:drawingTargetPoint]; switch(drawStep) { case DRAW: { CGContextRef context = UIGraphicsGetCurrentContext(); if(myLayerRef == nil) { myLayerRef = CGLayerCreateWithContext(context, self.bounds.size, NULL); }

On iOS, after we create a layer from context and get the layer's context, how do these contexts relate to each other?

两盒软妹~`, submitted on 2019-11-28 08:56:50
Question: We can create a layer from the current graphics context and then get the layer's context: CGContextRef context = UIGraphicsGetCurrentContext(); CGLayerRef layer = CGLayerCreateWithContext(context, CGSizeMake(self.frame.size.width, self.frame.size.height), NULL); CGContextRef contextOfLayer = CGLayerGetContext(layer); So we now have 2 contexts: context and contextOfLayer . How do these two contexts relate to each other? Is contextOfLayer actually part of context , and does context keep an array of

Draw glow around inside edge of multiple CGPaths

余生颓废, submitted on 2019-11-28 03:43:24
If I create a CGMutablePathRef by adding together two circular paths as shown by the left image, is it possible to obtain a final CGPathRef which represents only the outer border as shown by the right image? Thanks for any help! What you are asking for is the union of bezier paths. Apple doesn't ship any APIs for computing the union of paths. It is in fact a rather complicated algorithm. Here are a couple of links: http://www.cocoadev.com/index.pl?NSBezierPathcombinatorics http://losingfight.com/blog/2011/07/09/how-to-implement-boolean-operations-on-bezier-paths-part-3/ If you explain what you

Create PDF Annotations in iOS

筅森魡賤, submitted on 2019-11-28 03:36:36
I've been working on a PDF viewer with support for annotations, and I need to be able to save new annotations that the user has created. I've seen tons of examples of how to draw text/lines/images, but that's only flattened content; I need to create actual annotation objects. I've found no documentation or examples about it, so if anyone could point me in the right direction I would be extremely grateful. Cheers! Edit: After several months of work we could release the v1 of this. We ended up using an open-source C++ library, and went through a huge pain to make it compile for iOS. The one in

Efficient method to draw a line with millions of points

做~自己de王妃, submitted on 2019-11-28 03:36:25
I'm writing an audio waveform editor in Cocoa with a wide range of zoom options. At its widest, it shows a waveform for an entire song (~10 million samples in view). At its narrowest, it shows a pixel accurate representation of the sound wave (~1 thousand samples in a view). I want to be able to smoothly transition between these zoom levels. Some commercial editors like Ableton Live seem to do this in a very inexpensive fashion. My current implementation satisfies my desired zoom range, but is inefficient and choppy. The design is largely inspired by this excellent article on drawing waveforms