Question
I have searched all over trying to find a kernel that performs adaptive thresholding on iOS. Unfortunately I do not understand the kernel language or the logic behind it. Below is a thresholding routine I found (https://gist.github.com/xhruso00/a3f8a9c8ae7e33b8b23d):
// Fixed-threshold kernel: computes Rec. 601 luma for each pixel and
// maps it to pass (white) or fail (black) against a hard-coded threshold.
static NSString * const kKernelSource = @"kernel vec4 thresholdKernel(sampler image)\n"
    "{\n"
    "    float inputThreshold = 0.05;\n"
    "    float pass = 1.0;\n"
    "    float fail = 0.0;\n"
    "    const vec4 vec_Y = vec4( 0.299, 0.587, 0.114, 0.0 );\n"
    "    vec4 src = unpremultiply( sample(image, samplerCoord(image)) );\n"
    "    float Y = dot( src, vec_Y );\n"
    "    src.rgb = vec3( compare( Y - inputThreshold, fail, pass) );\n"
    "    return premultiply(src);\n"
    "}";
Is it possible to rewrite this into an adaptive thresholding kernel? The image I am supplying to it has been turned into B&W and has already been blurred. Are there any resources you could point me to? I would like to stick with CoreImage as my whole stack is built around it.
Edit: The best example / reference for what I am trying to achieve is GPUImage's GPUImageAdaptiveThresholdFilter - https://github.com/BradLarson/GPUImage/blob/c5f0914152419437869c35e29858773b1a06083c/framework/Source/GPUImageAdaptiveThresholdFilter.m
Answer 1:
Simon's filter (Answer 2 below) is the right approach to achieve the desired effect, but you have to modify a couple of things. First of all, switch the order of imageLuma and thresholdLuma, since we want black letters to remain black and not the other way around. Also, you should add a small constant (I chose 0.001) to remove noise.
// Drop-in replacements for the kernel and outputImage in Simon's
// AdaptiveThresholdFilter class (Answer 2 below):
var thresholdKernel = CIColorKernel(string:
    "kernel vec4 thresholdFilter(__sample image, __sample threshold)" +
    "{" +
    "    float imageLuma = dot(image.rgb, vec3(0.2126, 0.7152, 0.0722));" +
    "    float thresholdLuma = dot(threshold.rgb, vec3(0.2126, 0.7152, 0.0722));" +
    // Swapped step() order plus a small bias to suppress noise:
    "    return vec4(vec3(step(thresholdLuma, imageLuma + 0.001)), 1.0);" +
    "}"
)

override var outputImage: CIImage! {
    guard let inputImage = inputImage,
          let thresholdKernel = thresholdKernel else {
        return nil
    }
    // The blur radius acts as the adaptive "block size".
    let blurred = inputImage.applyingFilter("CIBoxBlur",
        withInputParameters: [kCIInputRadiusKey: 5])
    let extent = inputImage.extent
    let arguments = [inputImage, blurred]
    return thresholdKernel.apply(withExtent: extent, arguments: arguments)
}
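For reference, a usage sketch of my own (not from the answer): it assumes the two snippets above have been merged into Simon's AdaptiveThresholdFilter class from Answer 2, and sourceUIImage is a placeholder for whatever UIImage you are processing.

import UIKit

let filter = AdaptiveThresholdFilter()
filter.inputImage = CIImage(image: sourceUIImage)  // sourceUIImage: your input

if let output = filter.outputImage {
    // Core Image evaluates lazily; actual rendering happens here.
    let context = CIContext()
    if let cgImage = context.createCGImage(output, from: output.extent) {
        let thresholded = UIImage(cgImage: cgImage)
        // thresholded now holds the black-and-white result.
    }
}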
And this is what you get, using only Apple's Core Image and without having to install any external libraries :)
Of course, you can play around a little with the values of the constant and the block size; one way to expose both as filter properties is sketched below.
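A hedged sketch of what that could look like; the class name, property names, and defaults here are my own, not from the answer:

import CoreImage

class TunableAdaptiveThreshold: CIFilter {
    var inputImage: CIImage?
    var blockRadius: CGFloat = 5    // "block size": the CIBoxBlur radius
    var noiseBias: CGFloat = 0.001  // the constant added to imageLuma

    // Same kernel as above, but with the constant passed in as an argument.
    private let kernel = CIColorKernel(string:
        "kernel vec4 thresholdFilter(__sample image, __sample threshold, float bias)" +
        "{" +
        "    float imageLuma = dot(image.rgb, vec3(0.2126, 0.7152, 0.0722));" +
        "    float thresholdLuma = dot(threshold.rgb, vec3(0.2126, 0.7152, 0.0722));" +
        "    return vec4(vec3(step(thresholdLuma, imageLuma + bias)), 1.0);" +
        "}")

    override var outputImage: CIImage? {
        guard let inputImage = inputImage, let kernel = kernel else {
            return nil
        }
        let blurred = inputImage.applyingFilter("CIBoxBlur",
            withInputParameters: [kCIInputRadiusKey: blockRadius])
        // Scalar kernel arguments are passed as numbers after the samplers.
        return kernel.apply(withExtent: inputImage.extent,
                            arguments: [inputImage, blurred, noiseBias])
    }
}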
Answer 2:
How does this look? I've used Core Image's CIBoxBlur (although the dedicated convolution filters may be faster; see the sketch after the class below) and passed its output into my existing threshold filter.
import CoreImage

class AdaptiveThresholdFilter: CIFilter
{
    var inputImage: CIImage?

    var thresholdKernel = CIColorKernel(string:
        "kernel vec4 thresholdFilter(__sample image, __sample threshold)" +
        "{" +
        "    float imageLuma = dot(image.rgb, vec3(0.2126, 0.7152, 0.0722));" +
        "    float thresholdLuma = dot(threshold.rgb, vec3(0.2126, 0.7152, 0.0722));" +
        "    return vec4(vec3(step(imageLuma, thresholdLuma)), 1.0);" +
        "}"
    )

    override var outputImage: CIImage!
    {
        guard let inputImage = inputImage,
              let thresholdKernel = thresholdKernel else
        {
            return nil
        }
        // Blur the input to get each pixel's local neighbourhood mean,
        // then let the kernel compare every pixel against that mean.
        let blurred = inputImage.applyingFilter("CIBoxBlur",
            withInputParameters: [kCIInputRadiusKey: 9])
        let extent = inputImage.extent
        let arguments = [inputImage, blurred]
        return thresholdKernel.apply(withExtent: extent, arguments: arguments)
    }
}
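On the parenthetical above: a hedged sketch (my own assumption, not code from the answer) of how the dedicated separable convolution filters could stand in for CIBoxBlur. Two nine-tap passes with uniform 1/9 weights approximate a 9x9 box average.

import CoreImage

// Each of the nine taps contributes equally to the mean.
let boxWeights = CIVector(values: [CGFloat](repeating: 1.0 / 9.0, count: 9),
                          count: 9)

func boxBlurViaConvolution(_ input: CIImage) -> CIImage {
    // Horizontal pass, then vertical pass: together a 9x9 box blur.
    return input
        .applyingFilter("CIConvolution9Horizontal",
                        withInputParameters: ["inputWeights": boxWeights])
        .applyingFilter("CIConvolution9Vertical",
                        withInputParameters: ["inputWeights": boxWeights])
}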
I found this image of a shaded page and with this code:
let page = CIImage(image: UIImage(named: "son1.gif")!)
let filter = AdaptiveThresholdFilter()
filter.inputImage = page
let final = filter.outputImage
I got this result:
Cheers!
Simon
Source: https://stackoverflow.com/questions/36184255/adaptive-threshold-cikernel-cifilter-ios