I need to get a pure black and white UIImage from another UIImage (not grayscale). Can anyone help me?
Thanks for reading.
EDIT:
Here is some code that may help:
// Set each pixel's three color channels to the largest of its
// R, G and B values (a crude grayscale conversion by maximum).
for (int i = 0; i < image.size.width * image.size.height * 4; i += 4) {
    if (dataBitmap[i + 0] >= dataBitmap[i + 1] && dataBitmap[i + 0] >= dataBitmap[i + 2]) {
        dataBitmap[i + 1] = dataBitmap[i + 0];
        dataBitmap[i + 2] = dataBitmap[i + 0];
    } else if (dataBitmap[i + 1] >= dataBitmap[i + 0] && dataBitmap[i + 1] >= dataBitmap[i + 2]) {
        dataBitmap[i + 0] = dataBitmap[i + 1];
        dataBitmap[i + 2] = dataBitmap[i + 1];
    } else {
        dataBitmap[i + 0] = dataBitmap[i + 2];
        dataBitmap[i + 1] = dataBitmap[i + 2];
    }
}
The code above almost worked for me; it just needed a couple of tweaks. Here are the changes I made to get it working properly: a value is now assigned to the zeroth index of the dataBitmap[] array as well, and the threshold sums all four components instead of three.
for (int i = 0; i < image.size.width * image.size.height * 4; i += 4) {
    // Sum all four components (including the zeroth index) and
    // compare against the midpoint of four channels: 255 * 4 / 2.
    if ((dataBitmap[i + 0] + dataBitmap[i + 1] + dataBitmap[i + 2] + dataBitmap[i + 3]) < (255 * 4 / 2)) {
        dataBitmap[i + 0] = 0;
        dataBitmap[i + 1] = 0;
        dataBitmap[i + 2] = 0;
        dataBitmap[i + 3] = 0;
    } else {
        dataBitmap[i + 0] = 255;
        dataBitmap[i + 1] = 255;
        dataBitmap[i + 2] = 255;
        dataBitmap[i + 3] = 255;
    }
}
Hope this helps.
Here's a Swift 3 solution:
class func pureBlackAndWhiteImage(_ inputImage: UIImage) -> UIImage? {
    guard let inputCGImage = inputImage.cgImage,
          let context = getImageContext(for: inputCGImage),
          let data = context.data else { return nil }

    let white = RGBA32(red: 255, green: 255, blue: 255, alpha: 255)
    let black = RGBA32(red: 0, green: 0, blue: 0, alpha: 255)
    let width = inputCGImage.width
    let height = inputCGImage.height
    let pixelBuffer = data.bindMemory(to: RGBA32.self, capacity: width * height)

    for row in 0 ..< height {
        for column in 0 ..< width {
            let offset = row * width + column
            // Any pixel with a nonzero color channel becomes black;
            // pure black pixels become white.
            if pixelBuffer[offset].red > 0 || pixelBuffer[offset].green > 0 || pixelBuffer[offset].blue > 0 {
                pixelBuffer[offset] = black
            } else {
                pixelBuffer[offset] = white
            }
        }
    }

    guard let outputCGImage = context.makeImage() else { return nil }
    return UIImage(cgImage: outputCGImage, scale: inputImage.scale, orientation: inputImage.imageOrientation)
}
class func getImageContext(for inputCGImage: CGImage) -> CGContext? {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let width = inputCGImage.width
    let height = inputCGImage.height
    let bytesPerPixel = 4
    let bitsPerComponent = 8
    let bytesPerRow = bytesPerPixel * width
    let bitmapInfo = RGBA32.bitmapInfo

    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: bitsPerComponent,
                                  bytesPerRow: bytesPerRow,
                                  space: colorSpace,
                                  bitmapInfo: bitmapInfo) else {
        print("unable to create context")
        return nil
    }

    context.setBlendMode(.copy)
    context.draw(inputCGImage, in: CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height)))
    return context
}
struct RGBA32: Equatable {
    var color: UInt32

    var red: UInt8 {
        return UInt8((color >> 24) & 255)
    }

    var green: UInt8 {
        return UInt8((color >> 16) & 255)
    }

    var blue: UInt8 {
        return UInt8((color >> 8) & 255)
    }

    var alpha: UInt8 {
        return UInt8((color >> 0) & 255)
    }

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        color = (UInt32(red) << 24) | (UInt32(green) << 16) | (UInt32(blue) << 8) | (UInt32(alpha) << 0)
    }

    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
}

func ==(lhs: RGBA32, rhs: RGBA32) -> Bool {
    return lhs.color == rhs.color
}
If what you're looking for is to threshold the image -- everything brighter than a certain value turns white, everything darker turns black, and you pick the value -- then a library like GPUImage will work for you.
While it may be overkill for your purposes, I do just that for live video from the iPhone camera in my sample application here. That application takes a color and a sensitivity, and can turn all pixels white that are within that threshold and transparent if not. I use OpenGL ES 2.0 programmable shaders for this in order to get realtime responsiveness. The whole thing is described in this post here.
Again, this is probably overkill for what you want. In the case of a simple UIImage that you want to convert to black and white, you can probably read in the raw pixels, iterate through them, and apply the same sort of thresholding I did to output the final image. This won't be as fast as the shader approach, but it will be much simpler to code.
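To illustrate that simpler CPU-side approach, thresholding a raw RGBA pixel buffer might look like the sketch below. The buffer layout (8 bits per channel, RGBA order), the brightness estimate (a plain average of R, G and B), and the cutoff of 128 are all assumptions; adjust them for your actual bitmap format and the threshold you pick:

```swift
// Threshold an RGBA8888 pixel buffer in place: pixels whose average
// brightness falls below `cutoff` become black, the rest become white.
func threshold(pixels: inout [UInt8], cutoff: Int = 128) {
    for i in stride(from: 0, to: pixels.count, by: 4) {
        // Crude brightness estimate: average of the R, G and B channels.
        let brightness = (Int(pixels[i]) + Int(pixels[i + 1]) + Int(pixels[i + 2])) / 3
        let value: UInt8 = brightness < cutoff ? 0 : 255
        pixels[i] = value       // R
        pixels[i + 1] = value   // G
        pixels[i + 2] = value   // B
        // Alpha (pixels[i + 3]) is left untouched.
    }
}

// A two-pixel buffer: one dark pixel, one bright pixel.
var buffer: [UInt8] = [10, 20, 30, 255, 200, 210, 220, 255]
threshold(pixels: &buffer)
```

You would obtain such a buffer from a `CGContext` backed by your image's pixel data, then rebuild a `CGImage` from it, much as the Swift 3 answer above does.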
With Swift 3, I was able to accomplish this effect using CIFilters: first applying CIPhotoEffectNoir (to make the image grayscale), then applying the CIColorControls filter with the kCIInputContrastKey input parameter set to a high value (e.g. 50). Setting the kCIInputBrightnessKey parameter will also adjust how intense the black-and-white contrast appears: negative for a darker image, positive for a brighter one. For example:
extension UIImage {
    func toBlackAndWhite() -> UIImage? {
        guard let ciImage = CIImage(image: self) else {
            return nil
        }
        // First pass: convert to grayscale.
        guard let grayImage = CIFilter(name: "CIPhotoEffectNoir", withInputParameters: [kCIInputImageKey: ciImage])?.outputImage else {
            return nil
        }
        // Second pass: push the contrast high enough that the result is
        // effectively pure black and white.
        let bAndWParams: [String: Any] = [kCIInputImageKey: grayImage,
                                          kCIInputContrastKey: 50.0,
                                          kCIInputBrightnessKey: 10.0]
        guard let bAndWImage = CIFilter(name: "CIColorControls", withInputParameters: bAndWParams)?.outputImage else {
            return nil
        }
        guard let cgImage = CIContext(options: nil).createCGImage(bAndWImage, from: bAndWImage.extent) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }
}