My code works fine on non-retina devices but produces blurry images on retina devices.
Does anyone know a solution to this issue?
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContext(view.bounds.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Here's a Swift 4 UIView extension based on the answer from @Dima.
extension UIView {
    func snapshotImage() -> UIImage? {
        // Passing 0 for scale makes the context use the screen's scale factor.
        UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 0)
        drawHierarchy(in: bounds, afterScreenUpdates: false)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
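A quick usage sketch, hedged (the view below is only a placeholder created for the example, and drawHierarchy generally needs the view to be on screen to render anything meaningful):

import UIKit

let someView = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
someView.backgroundColor = .red

if let snapshot = someView.snapshotImage() {
    print(snapshot.size, snapshot.scale) // size is in points; scale matches the screen
}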
Sometimes the drawRect method causes problems, so I found these answers more appropriate. You may also want to have a look at Capture UIImage of UIView stuck in DrawRect method.
extension UIView {
    func getSnapshotImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 0)
        drawHierarchy(in: bounds, afterScreenUpdates: false)
        let snapshotImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return snapshotImage
    }
}
Switch from use of UIGraphicsBeginImageContext to UIGraphicsBeginImageContextWithOptions (as documented on this page). Pass 0.0 for scale (the third argument) and you'll get a context with a scale factor equal to that of the screen.

UIGraphicsBeginImageContext uses a fixed scale factor of 1.0, so you're actually getting exactly the same image on an iPhone 4 as on the other iPhones. I'll bet either the iPhone 4 is applying a filter when you implicitly scale it up, or your brain is just picking up on it being less sharp than everything around it.
So, I guess:
#import <QuartzCore/QuartzCore.h>

+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
And in Swift 4:
func image(with view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    if let context = UIGraphicsGetCurrentContext() {
        view.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
public extension UIView {
    @available(iOS 10.0, *)
    public func renderToImage(afterScreenUpdates: Bool = false) -> UIImage {
        let rendererFormat = UIGraphicsImageRendererFormat.default()
        rendererFormat.opaque = isOpaque
        let renderer = UIGraphicsImageRenderer(size: bounds.size, format: rendererFormat)
        let snapshotImage = renderer.image { _ in
            drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
        }
        return snapshotImage
    }
}
UIGraphicsImageRenderer is a relatively new API, introduced in iOS 10. You construct a UIGraphicsImageRenderer by specifying a point size. The image method takes a closure argument and returns a bitmap that results from executing the passed closure. In this case, the result is the original image scaled down to draw within the specified bounds.
https://nshipster.com/image-resizing/
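For example, a quick usage sketch of the extension above (the label is only a stand-in view created for the illustration):

import UIKit

let label = UILabel(frame: CGRect(x: 0, y: 0, width: 200, height: 44))
label.text = "Hello"

// renderToImage(afterScreenUpdates:) is the extension defined above; the default
// renderer format picks up the screen's scale, so the snapshot stays sharp on retina.
let snapshot = label.renderToImage(afterScreenUpdates: true)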
So be sure the size you are passing into UIGraphicsImageRenderer is points, not pixels. If your images are larger than you are expecting, you need to divide your size by the scale factor.
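For instance, a hedged sketch of that division, assuming you want a bitmap of a specific pixel size (the 600x600 target is just an example value):

import UIKit

// UIGraphicsImageRenderer works in points, so divide the desired pixel size
// by the screen scale before constructing the renderer.
let targetPixels = CGSize(width: 600, height: 600)
let scale = UIScreen.main.scale   // 2.0 or 3.0 on retina devices
let pointSize = CGSize(width: targetPixels.width / scale,
                       height: targetPixels.height / scale)

let renderer = UIGraphicsImageRenderer(size: pointSize)
let image = renderer.image { context in
    UIColor.red.setFill()
    context.fill(CGRect(origin: .zero, size: pointSize))
}
// The resulting bitmap is pointSize * scale pixels, i.e. the targetPixels size.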