Does anyone know what the limitations are on image size with custom CIFilters? I've created a filter that performs as expected when the images are up to 2 megapixels, but then…
Working with Core Image, I discovered that it cuts big images into tiles. For example, in your case a 4k x 2k image can be split into four 2k x 1k tiles and rendered separately. Unfortunately, this optimization trick affects samplerCoord, so coordinate-dependent filters work incorrectly on big images.
My solution was to use destCoord instead of samplerCoord. Of course, you should keep in mind that an image can be rendered with a non-zero origin, so destCoord is not relative to (0, 0). I wrote my own filter, so I was able to pass the whole extent as a vec4 parameter.
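For example, the Objective-C side of passing the extent can look roughly like this sketch (myKernel and the inputImage property are illustrative names, not the code of my actual filter):

// Inside a custom CIFilter subclass. Assumes myKernel is a CIKernel whose source
// takes a sampler plus a vec4 holding (origin.x, origin.y, width, height).
- (CIImage *)outputImage
{
    CGRect extent = [self.inputImage extent];
    return [myKernel applyWithExtent:extent
                         roiCallback:^CGRect(int index, CGRect destRect) {
                             // 1:1 mapping between the output tile and the sampler ROI
                             return destRect;
                         }
                           arguments:@[self.inputImage,
                                       [CIVector vectorWithCGRect:extent]]];
}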
Example: try generating an image with a CIFilter kernel, something like this:
float gray = (samplerCoord(src).x / samplerSize(src).x) * (samplerCoord(src).y / samplerSize(src).y);
This output should give us black at (0, 0) and white at (1, 1), right? However, for big images you'll see several quads instead of a single gradient. This happens because of the optimized tiled rendering done by the Core Image engine. I haven't found a way to bypass it, but you can rewrite the kernel this way:
float gray = ((destCoord().x - rect.x) / rect.z) * ((destCoord().y - rect.y) / rect.w);
where rect is the real extent of the sampler, which you must pass in yourself as a vec4 of (origin.x, origin.y, width, height). I used [inputImage extent] for this purpose, but it depends on the filter and may be something else in your case.
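Putting it together, a minimal sketch of such a kernel could look like this (the name gradientKernel is illustrative, and rect is assumed to be passed as a vec4 of (origin.x, origin.y, width, height), e.g. built with [CIVector vectorWithCGRect:[inputImage extent]]):

kernel vec4 gradientKernel(vec4 rect)
{
    // destCoord() is in destination coordinates, so shift by the image origin
    // and divide by its size to get a 0..1 gradient over the whole extent.
    vec2 p = destCoord();
    float gray = ((p.x - rect.x) / rect.z) * ((p.y - rect.y) / rect.w);
    return vec4(gray, gray, gray, 1.0);
}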
Hope this explanation makes it clear. By the way, it looks like the built-in kernels work just fine even with big images, so you only need to worry about this trick in your custom kernels.