I am working on an iOS app that visualizes data as a line graph. The graph is drawn as a CGPath in a fullscreen custom UIView and contains at most 320 data points.
tl;dr: You can set the drawsAsynchronously property of the underlying CALayer, and your CoreGraphics calls will use the GPU for rendering.
There is a way to control the rendering policy in CoreGraphics. By default, all CG calls are executed via CPU rendering, which is fine for smaller operations but hugely inefficient for larger render jobs. In that case, simply setting the drawsAsynchronously property of the underlying CALayer switches the CoreGraphics rendering engine to a GPU-backed, Metal-based renderer and vastly improves performance. This is true on both macOS and iOS.
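As a minimal sketch of what that looks like in a custom view: the GraphView class and its dataPoints property below are made-up names for illustration, and only drawsAsynchronously is the API being discussed.

    import UIKit

    final class GraphView: UIView {

        /// Points to plot, in view coordinates (hypothetical data source).
        var dataPoints: [CGPoint] = [] {
            didSet { setNeedsDisplay() }
        }

        override init(frame: CGRect) {
            super.init(frame: frame)
            // Ask Core Animation to queue the CGContext commands issued in
            // draw(_:) and execute them asynchronously instead of running
            // them synchronously on the calling thread.
            layer.drawsAsynchronously = true
        }

        required init?(coder: NSCoder) {
            super.init(coder: coder)
            layer.drawsAsynchronously = true
        }

        override func draw(_ rect: CGRect) {
            guard let context = UIGraphicsGetCurrentContext(),
                  let first = dataPoints.first else { return }

            // Build the line graph as a single CGPath and stroke it.
            let path = CGMutablePath()
            path.move(to: first)
            for point in dataPoints.dropFirst() {
                path.addLine(to: point)
            }

            context.addPath(path)
            context.setStrokeColor(UIColor.systemBlue.cgColor)
            context.setLineWidth(2)
            context.strokePath()
        }
    }

Everything else about the drawing code stays the same; flipping that one layer property is the only change needed to opt into the asynchronous renderer.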
I ran a few performance comparisons (involving several different CG calls, including CGContextDrawRadialGradient, CGContextStrokePath, and CoreText rendering using CTFrameDraw), and for larger render targets there was a massive performance increase of over 10x.
As expected, the GPU advantage fades as the render target shrinks, until at some point (generally for render targets smaller than about 100x100 pixels) the CPU actually achieves a higher framerate than the GPU. YMMV, and of course this will depend on the CPU/GPU architectures involved.
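For reference, one simple way to run such a comparison is to measure the achieved framerate while the view redraws continuously, once with layer.drawsAsynchronously enabled and once without. The FrameRateMonitor helper below is a hypothetical sketch, not part of my actual benchmark code; it just averages CADisplayLink callbacks over a fixed window.

    import UIKit

    // Hypothetical helper: averages CADisplayLink callbacks over a fixed
    // window to estimate the framerate the view is actually achieving.
    final class FrameRateMonitor: NSObject {
        private var link: CADisplayLink?
        private var frameCount = 0
        private var startTime: CFTimeInterval = 0
        private let window: CFTimeInterval = 5   // seconds to sample

        func start() {
            frameCount = 0
            startTime = CACurrentMediaTime()
            link = CADisplayLink(target: self, selector: #selector(tick(_:)))
            link?.add(to: .main, forMode: .common)
        }

        @objc private func tick(_ sender: CADisplayLink) {
            frameCount += 1
            let elapsed = CACurrentMediaTime() - startTime
            if elapsed >= window {
                print(String(format: "Average FPS: %.1f", Double(frameCount) / elapsed))
                stop()
            }
        }

        func stop() {
            link?.invalidate()
            link = nil
        }
    }

Driving repeated setNeedsDisplay() calls on the view while this runs, first with drawsAsynchronously set to true and then to false, gives a rough CPU-vs-GPU comparison for a given view size.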