I am trying to use PIL in Python to remove parts of images based on the pixels' RGB values. From the documentation it would seem that the point function could do what I'm looking for.
Something like the following would work:
from PIL import Image

source = im.split()  # im is an already-opened RGB image
mask = source[2].point(lambda i: i < 100 and 255)  # 255 where blue < 100
source[2].paste(255, None, mask)  # whiten the blue band under the mask
im = Image.merge(im.mode, source)
See the PIL Tutorial under the Point Operations heading for more information.
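If you want to blank out the matching pixels in all three channels rather than a single band, the same kind of mask can be used with Image.paste(). A minimal sketch (the file names here are hypothetical):

from PIL import Image

im = Image.open("my_image.png").convert("RGB")
# Mask is 255 where the blue band is below 100, 0 elsewhere.
mask = im.split()[2].point(lambda i: i < 100 and 255)
# Paste solid white through the mask to blank out those pixels.
im.paste((255, 255, 255), None, mask)
im.save("my_image_out.png")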
I think the easiest way to do this would be to use the Mahotas library, which lets you load images as NumPy ndarrays. Then you can just use logical indexing on your image.
import mahotas as mh
import numpy as np
fname = "/home/stuff/images/my_image.jpg"
image = mh.imread(fname)
# Make a copy so the original image array stays untouched.
img = np.copy(image)
# Replace pixels whose blue channel (3rd coordinate) is below 100 with
# white, i.e. the vector [255, 255, 255].
inds = img[:,:,2] < 100
img[inds] = [255,255,255]
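If you also want to write the result back to disk, mahotas provides imsave (assuming your install has image I/O support); the output path below is hypothetical:

# Save the modified copy next to the original.
mh.imsave("/home/stuff/images/my_image_out.jpg", img)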
The benefit is that Mahotas loads the image straight into a NumPy array, which lets you slice the different dimensions with consistent NumPy syntax. Alternatively, if you really want to do the image I/O with PIL, you can pass the opened image to numpy.array() to get an ndarray, and the indexing code above will still work.
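For example, a minimal sketch of that PIL + NumPy route (the output file name is hypothetical):

import numpy as np
from PIL import Image

fname = "/home/stuff/images/my_image.jpg"
img = np.array(Image.open(fname).convert("RGB"))
# Whiten every pixel whose blue value is below 100, as above.
img[img[:, :, 2] < 100] = [255, 255, 255]
Image.fromarray(img).save("/home/stuff/images/my_image_out.jpg")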
In general, though, I have consistently had trouble getting PIL to work reliably. There always seems to be some image format support issue, a missing decoder, or something else going wrong with PIL. It's very fussy. I tend to avoid Python-OpenCV for the same reason. I prefer a workflow with scikits.learn, scikits.image, Mahotas, and PyPNG.