We're trying to do the following in Mathematica - as in "RMagick remove white background from image and make it transparent" - remove the white background from an image and make it transparent.
But with actual photos it ends up looking lousy.
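For context, here is a minimal sketch (not from the question) of the naive hard-threshold approach that gives this kind of lousy result; the image URL is taken from the answers below, and the 0.97 cutoff is an arbitrary choice of mine:
img = Import@"http://i.stack.imgur.com/k7E1F.png";
(* every pixel brighter than the cutoff becomes fully transparent, the rest fully opaque *)
naiveMask = ColorNegate@Binarize[ColorConvert[img, "Grayscale"], 0.97];
Rasterize[SetAlphaChannel[img, naiveMask], Background -> Red]
The all-or-nothing alpha channel is what produces the jagged edges; the answers below try to soften exactly that.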
I am completely new to image processing, but here is what I get after some playing with the new morphological image-processing functions in version 8:
img = Import@"http://i.stack.imgur.com/k7E1F.png";
mask = DeleteSmallComponents[
   ColorNegate@
    Image[MorphologicalComponents[ColorNegate@img, .062,
      Method -> "Convex"], "Bit"], 10000];
Show[Graphics[Rectangle[], Background -> Red, PlotRangePadding -> None],
 SetAlphaChannel[img, ColorNegate@mask]]
Perhaps, depending on the edge quality you need:
img = Import@"http://i.stack.imgur.com/k7E1F.png";
mask = ChanVeseBinarize[img, TargetColor -> {1., 1., 1.}, "LengthPenalty" -> 10]
mask1 = Blur[Erosion[ColorNegate[mask], 2], 5]
Rasterize[SetAlphaChannel[img, mask1], Background -> None]
Edit
Stealing a bit from @Szabolcs
img2 = Import@"http://i.stack.imgur.com/k7E1F.png";
(* key point: scale up the image to smooth the edges *)
img = ImageResize[img2, 4 ImageDimensions[img2]];
mask = ChanVeseBinarize[img, TargetColor -> {1., 1., 1.}, "LengthPenalty" -> 10];
mask1 = Blur[Erosion[ColorNegate[mask], 8], 10];
f[col_] := Rasterize[SetAlphaChannel[img, mask1], Background -> col,
  ImageSize -> ImageDimensions@img2]
GraphicsGrid[{{f@Red, f@Blue, f@Green}}]
Edit 2
Just to get an idea of the extent of the halo and background imperfections in the image:
img = Import@"http://i.stack.imgur.com/k7E1F.png";
Join[{img}, MapThread[Binarize, {ColorSeparate[img, "HSB"], {.01, .01, .99}}]]
ColorNegate@ImageAdd[EntropyFilter[img, 1] // ImageAdjust, ColorNegate@img]
I recommend using Photoshop for this and saving as a PNG.
Here's a try at implementing Mark Ransom's approach, with some help from belisarius's mask generation:
Locate the boundary of the object:
img = Import@"http://i.stack.imgur.com/k7E1F.png";
img1 = SetAlphaChannel[img, 1];
erosionamount = 2;
mb = ColorNegate@ChanVeseBinarize[img, TargetColor -> {1., 1., 1.},
    "LengthPenalty" -> 10];
edge = ImageSubtract[Dilation[mb, 2], Erosion[mb, erosionamount]];
ImageApply[{1, 0, 0} &, img, Masking -> edge]
Set the alpha values:
edgealpha = ImageMultiply[ImageFilter[(1 - Mean[Flatten[#]]^5) &,
    ColorConvert[img, "Grayscale"], 2, Masking -> edge], edge];
imagealpha = ImageAdd[edgealpha, Erosion[mb, erosionamount]];
img2 = SetAlphaChannel[img, imagealpha];
Reverse color blend:
img3 = ImageApply[
   Module[{c, \[Alpha], bc, fc},
     bc = {1, 1, 1};
     c = {#[[1]], #[[2]], #[[3]]};
     \[Alpha] = #[[4]];
     If[\[Alpha] > 0,
      Flatten[{(c - bc (1 - \[Alpha]))/\[Alpha], \[Alpha]}],
      {0., 0., 0., 0}]] &, img2];
Show[img3, Background -> Pink]
Notice how some of the edges have white fuzz? Compare that with the red outline in the first image. We need a better edge detector. Increasing the erosion amount helps with the fuzz, but then other sides become too transparent, so there is a tradeoff on the width of the edge mask. It's pretty good, though, considering there is no blur operation, per se.
It would be instructive to run the algorithm on a variety of images to test its robustness, to see how automatic it is.
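As a quick way to visualize the erosion tradeoff mentioned above, one could redraw the red edge band for several erosion amounts. This is a sketch of mine, not part of the answer, and it assumes img and mb from the code above are still defined:
(* a wider erosion leaves a wider band to be covered by the soft alpha values *)
GraphicsGrid[{Table[
   ImageApply[{1, 0, 0} &, img,
    Masking -> ImageSubtract[Dilation[mb, 2], Erosion[mb, k]]],
   {k, {1, 2, 4, 6}}]}]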
Just replace any pixel that is "almost white" with a pixel of the same RGB color and a sigmoid gradient on the transparency channel. You can apply a linear transition from solid to transparent, but a sinusoid, sigmoid, or tanh curve looks more natural: depending on the edge sharpness you want, they move quickly away from the midpoint toward either solid or transparent, yet not in the stepwise/binary manner you have now.
Think of it this way:
Let's say R, G, and B each range from 0.0 to 1.0; then white can be represented as the single number R+G+B = 1.0*3 = 3.0.
Taking a little bit out of each channel makes the pixel a little "off-white", but taking a little out of all three moves it much further from white than taking the same amount out of just one. Say you allow a 10% reduction on any one channel: 1.0*.10 = .1. Now define the total loss as loss = 3.0 - (R+G+B) and map it, clamped between 0 and 1, onto the alpha channel, so that (loss = 0) => 0 (fully transparent) and (loss >= maxLoss) => 1 (fully opaque). For example, a pixel with RGB = {0.98, 0.97, 0.99} has loss = 0.06 and gets alpha = 0.6:
threshold=.10;
maxLoss=1.0*threshold;
loss=3.0-(R+G+B);
alpha=If[loss>maxLoss,1,loss/maxLoss]; (* cap at 1: anything less white than the threshold stays fully opaque *)
(* linear scaling is used above *)
(* or use 1/(1 + Exp[-10(loss - 0.5maxLoss)/maxLoss]) to set sigmoid alpha *)
(* or a log decay: Log[maxLoss]/Log[loss]
   (for loss and maxLoss < 1; when using RGB 0-255, divide by 255 to use this one) *)
setNewPixel[R,G,B,alpha];
For reference:
maxLoss = .1;
Plot[{1/(1 + Exp[-10 (loss - 0.5 maxLoss)/maxLoss]),
  Log[maxLoss]/Log[loss],
  loss/maxLoss},
 {loss, 0, maxLoss}]
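For completeness, here is one way to turn the pseudocode above into runnable Mathematica. The helper name whiteToAlpha and the use of RemoveAlphaChannel/ImageApply are my own choices, not part of the original recipe; it applies the linear loss -> alpha rule per pixel and attaches the result with SetAlphaChannel:
(* sketch only: loss = 3 - (R+G+B); alpha = loss/maxLoss, capped at 1 *)
whiteToAlpha[image_Image, maxLoss_: 0.1] :=
 With[{rgb = RemoveAlphaChannel[ColorConvert[image, "RGB"], White]},
  SetAlphaChannel[rgb,
   ImageApply[Min[1., (3.0 - Total[#])/maxLoss] &, rgb]]]

whiteToAlpha[Import@"http://i.stack.imgur.com/k7E1F.png"]
For the sigmoid or log-decay variants, swap the Min[...] expression for the corresponding formula from the comments above.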
The only danger (or benefit?) here is that this does not care about whites that actually ARE part of the photo; it removes all whites. So if you have a picture of a white car, it will end up with transparent patches in it. But from your example, that seems to be the desired effect.
Possible steps you could take: