I am working with text recognition on tires. In order to use OCR, I must first get a clear binary map. I have processed the images, and the text appears broken and incomplete.
I have played a bit with your input image.
Normalization of lighting + dynamic range normalization helps a bit to obtain much better results, but it is still far from what is needed. I would like to try sharpening of partial derivatives to boost the letters from the background, threshold out small bumps, and then integrate back and recolor to a mask image. When I have the time (not sure when, maybe tomorrow) I will edit this (and comment/notify you).
1. normalized lighting
compute the average corner intensities and bilinearly rescale the pixel intensities to match the average color
if you need something more sophisticated see:
2. edge detection
compute the partial derivatives of the intensity i by x and y and sum their absolute values:

i' = |di(x,y)/dx| + |di(x,y)/dy|

and then threshold the result with threshold = 13
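In case a concrete form helps, here is a minimal sketch of that derivative edge detection using simple forward differences on an 8-bit grayscale buffer; the buffer layout and the edge_detect name are my own assumptions, not code from my picture class.

//---------------------------------------------------------------------------
#include <cstdlib>
#include <vector>
std::vector<unsigned char> edge_detect(const unsigned char *img,int w,int h,int threshold=13)
{
std::vector<unsigned char> out(w*h,0);
for (int y=0;y<h-1;y++)
 for (int x=0;x<w-1;x++)
    {
    int dx=std::abs(int(img[(y*w)+x+1])-int(img[(y*w)+x]));   // |di/dx|
    int dy=std::abs(int(img[((y+1)*w)+x])-int(img[(y*w)+x])); // |di/dy|
    out[(y*w)+x]=(dx+dy>=threshold)?255:0;                    // threshold out small bumps
    }
return out;
}
//---------------------------------------------------------------------------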
[notes]
To eliminate most of the noise I applied smoothing before the edge detection.
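ui_smooth itself is not listed in this answer, so as a rough stand-in (an assumption, not the actual member function), a plain 3x3 box blur on an 8-bit grayscale buffer would do a similar job:

//---------------------------------------------------------------------------
#include <vector>
std::vector<unsigned char> box_blur3(const std::vector<unsigned char> &src,int w,int h)
{
std::vector<unsigned char> dst(src.size());
for (int y=0;y<h;y++)
 for (int x=0;x<w;x++)
    {
    int sum=0,n=0;
    for (int j=y-1;j<=y+1;j++)
     for (int i=x-1;i<=x+1;i++)
      if ((i>=0)&&(i<w)&&(j>=0)&&(j<h)) { sum+=src[(j*w)+i]; n++; }
    dst[(y*w)+x]=(unsigned char)(sum/n); // average of the valid 3x3 neighbors
    }
return dst;
}
//---------------------------------------------------------------------------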
[edit1] After some analysis I found that your image has edges too poor for the sharpening/integration approach.
Here is an example of the intensity graph after the first derivative by x, taken along the middle line of the image:
As you can see, the black areas are fine but the whitish ones are almost indistinguishable from the background noise. So your only hope is to use the min/max filtering as @Daniel's answer suggested, and give more weight to the black edge regions (the white ones are not reliable).
The min/max filter emphasizes the black (blue mask) and white (red mask) regions. If both areas were reliable then you would just fill the space between them, but that is not an option in your case. Instead I would enlarge the areas (weighted more toward the blue mask) and OCR the result with an OCR engine customized for such 3-color input.
You could also take 2 images with a different light position and a fixed camera and combine them to cover the recognizable black area from all sides.
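How to combine the two shots is open; one simple option (just a sketch, the per-pixel minimum is my assumption here) would be to keep the darker pixel from either normalized shot, so the reliable dark edges survive from both lighting directions:

//---------------------------------------------------------------------------
#include <algorithm>
#include <cstddef>
#include <vector>
std::vector<unsigned char> combine_min(const std::vector<unsigned char> &a,
                                       const std::vector<unsigned char> &b)
{
std::vector<unsigned char> out(a.size());
for (std::size_t i=0;i<a.size();i++)
 out[i]=std::min(a[i],b[i]); // keep the darker pixel from either lighting
return out;
}
//---------------------------------------------------------------------------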
[edit2] C++ source code for the last method
//---------------------------------------------------------------------------
typedef union { int dd; short int dw[2]; byte db[4]; } color; // 32 bit pixel; byte = unsigned char
picture pic0,pic1,pic2; // pic0 source image,pic1 normalized+min/max,pic2 enlarge filter
//---------------------------------------------------------------------------
void filter()
{
int sz=16; // [pixels] square size for corner avg color computation (c00..c11)
int fs0=5; // blue [pixels] font thickness
int fs1=2; // red [pixels] font thickness
int tr0=320; // blue min threshold
int tr1=125; // red max threshold
int x,y,c,cavg,cmin,cmax;
pic1=pic0; // copy source image
pic1.rgb2i(); // convert to grayscale intensity
for (x=0;x<5;x++) pic1.ui_smooth(); // 5 smoothing passes to suppress noise
cavg=pic1.ui_normalize(sz); // normalize lighting + dynamic range (corner square size sz)
// min max filter
cmin=pic1.p[0][0].dd; cmax=cmin;
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
{
c=pic1.p[y][x].dd;
if (cmin>c) cmin=c;
if (cmax<c) cmax=c;
}
// treshold min/max
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
{
c=pic1.p[y][x].dd;
if (cmax-c<tr1) c=0x00FF0000; // red
else if (c-cmin<tr0) c=0x000000FF; // blue
else c=0x00000000; // black
pic1.p[y][x].dd=c;
}
pic1.rgb_smooth(); // remove single dots
// recolor image
pic2=pic1; pic2.clear(0);
pic2.bmp->Canvas->Pen ->Color=clWhite;
pic2.bmp->Canvas->Brush->Color=clWhite;
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
{
c=pic1.p[y][x].dd;
if (c==0x00FF0000)
{
pic2.bmp->Canvas->Pen ->Color=clRed;
pic2.bmp->Canvas->Brush->Color=clRed;
pic2.bmp->Canvas->Ellipse(x-fs1,y-fs1,x+fs1,y+fs1); // red
}
if (c==0x000000FF)
{
pic2.bmp->Canvas->Pen ->Color=clBlue;
pic2.bmp->Canvas->Brush->Color=clBlue;
pic2.bmp->Canvas->Ellipse(x-fs0,y-fs0,x+fs0,y+fs0); // blue
}
}
}
//---------------------------------------------------------------------------
int picture::ui_normalize(int sz=32)
{
if (xs<sz) return 0;
if (ys<sz) return 0;
int x,y,c,c0,c1,c00,c01,c10,c11,cavg;
// compute average intensity in corners
for (c00=0,y= 0;y< sz;y++) for (x= 0;x< sz;x++) c00+=p[y][x].dd; c00/=sz*sz;
for (c01=0,y= 0;y< sz;y++) for (x=xs-sz;x<xs;x++) c01+=p[y][x].dd; c01/=sz*sz;
for (c10=0,y=ys-sz;y<ys;y++) for (x= 0;x< sz;x++) c10+=p[y][x].dd; c10/=sz*sz;
for (c11=0,y=ys-sz;y<ys;y++) for (x=xs-sz;x<xs;x++) c11+=p[y][x].dd; c11/=sz*sz;
cavg=(c00+c01+c10+c11)/4;
// normalize lighting conditions
for (y=0;y<ys;y++)
for (x=0;x<xs;x++)
{
// avg color = bilinear interpolation of corners colors
c0=c00+(((c01-c00)*x)/xs);
c1=c10+(((c11-c10)*x)/xs);
c =c0 +(((c1 -c0 )*y)/ys);
// scale to avg color
if (c) p[y][x].dd=(p[y][x].dd*cavg)/c;
}
// compute min max intensities
for (c0=p[0][0].dd,c1=c0,y=0;y<ys;y++) // init min/max from first pixel (c0=0 would break the min)
for (x=0;x<xs;x++)
{
c=p[y][x].dd;
if (c0>c) c0=c;
if (c1<c) c1=c;
}
// maximize dynamic range <0,765> (765 = 3*255, the rgb2i intensity range)
if (c1>c0)
for (y=0;y<ys;y++)
for (x=0;x<xs;x++)
p[y][x].dd=((p[y][x].dd-c0)*765)/(c1-c0); // store the rescaled intensity back
return cavg;
}
//---------------------------------------------------------------------------
void picture::rgb_smooth()
{
color *q0,*q1;
int x,y,i;
color c0,c1,c2;
if ((xs<2)||(ys<2)) return;
for (y=0;y<ys-1;y++)
{
q0=p[y ];
q1=p[y+1];
for (x=0;x<xs-1;x++)
{
c0=q0[x];
c1=q0[x+1];
c2=q1[x];
for (i=0;i<4;i++) q0[x].db[i]=WORD((WORD(c0.db[i])+WORD(c0.db[i])+WORD(c1.db[i])+WORD(c2.db[i]))>>2); // per channel: (2*current + right + bottom)/4
}
}
}
//---------------------------------------------------------------------------
I use my own picture class for images, so some of its members are:

xs,ys - size of image in pixels
p[y][x].dd - pixel at (x,y) position as 32 bit integer type
clear(color) - clears entire image
resize(xs,ys) - resizes image to new resolution
bmp - VCL encapsulated GDI Bitmap with Canvas access

I added the source just for the 2 relevant member functions (no need to copy the whole class here).
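If you want to compile the two member functions outside VCL, a minimal stand-in declaration like this would do; it is only a sketch of the members described above (it assumes the color union from [edit2]), not the real class, which wraps a VCL bitmap and has many more functions:

//---------------------------------------------------------------------------
typedef unsigned char byte;   // not standard C++; VCL/Windows normally supply it
typedef unsigned short WORD;  // normally from windows.h
class picture
{
public:
int xs,ys;                // size of image in pixels
color **p;                // p[y][x].dd = pixel at (x,y) as 32 bit integer
int ui_normalize(int sz); // default sz=32 is supplied at the definition above
void rgb_smooth();        // weighted smoothing (removes single dots)
// rgb2i(), ui_smooth(), clear(), resize() and the VCL bmp are omitted here
};
//---------------------------------------------------------------------------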
[edit3] LQ image
The best settings I found (the code is the same):
int sz=32; // [pixels] square size for corner avg color computation (c00..c11)
int fs0=2; // blue [pixels] font thickness
int fs1=2; // red [pixels] font thickness
int tr0=52; // blue min threshold
int tr1=0; // red max threshold
Due to the lighting conditions the red area is unusable (so it is turned off).
You could apply first a max-filter (assign to each pixel in a new image the maximum value from a neighborhood around the same pixel in the original image), then a min-filter (assign minimum from neighborhood in max-image). Especially if you shape the neighborhood a bit wider than it is high (say, 2 or 3 pixels to the right/left, 1 pixel top/bottom), you should be able to get some of your characters (your image appears to mainly show gaps in the horizontal direction).
Optimal neighborhood size and shape depend on your specific problem, so you'll have to experiment some. This operation may glue characters together - you'll possibly have to detect the blobs and split them if they're too wide compared to the other blobs.
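A rough sketch of that max-then-min filtering (a grayscale closing) on an 8-bit buffer; the window half-sizes rx,ry and the helper names are illustrative choices, not fixed values:

#include <algorithm>
#include <vector>

static std::vector<unsigned char> rank_filter(const std::vector<unsigned char> &src,
                                              int w, int h, int rx, int ry, bool take_max)
{
    std::vector<unsigned char> dst(src.size());
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            unsigned char v = src[(y * w) + x];
            for (int j = std::max(0, y - ry); j <= std::min(h - 1, y + ry); j++)
                for (int i = std::max(0, x - rx); i <= std::min(w - 1, x + rx); i++)
                    v = take_max ? std::max(v, src[(j * w) + i]) : std::min(v, src[(j * w) + i]);
            dst[(y * w) + x] = v; // neighborhood max (or min) of the source pixel
        }
    return dst;
}

// closing: max-filter first (bridges the horizontal gaps), then min-filter
std::vector<unsigned char> close_gaps(const std::vector<unsigned char> &img, int w, int h)
{
    return rank_filter(rank_filter(img, w, h, 3, 1, true), w, h, 3, 1, false);
}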
edit: Also, binarization settings are absolutely key. Try several different binarization algorithms (Otsu, Sauvola, ...) to see which one (and which parameters) works best for you.
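For reference, a quick sketch of Otsu's global threshold (Sauvola would instead derive a per-pixel threshold from a local mean and standard deviation, omitted here):

#include <vector>

int otsu_threshold(const std::vector<unsigned char> &img)
{
    long hist[256] = {0};
    for (unsigned char v : img) hist[v]++;               // intensity histogram
    long total = (long)img.size();
    double sum = 0;
    for (int i = 0; i < 256; i++) sum += (double)i * hist[i];
    double sumB = 0, maxVar = -1; long wB = 0; int best = 0;
    for (int t = 0; t < 256; t++)
    {
        wB += hist[t]; if (wB == 0) continue;            // background weight
        long wF = total - wB; if (wF == 0) break;        // foreground weight
        sumB += (double)t * hist[t];
        double mB = sumB / wB, mF = (sum - sumB) / wF;   // class means
        double var = (double)wB * wF * (mB - mF) * (mB - mF); // between-class variance
        if (var > maxVar) { maxVar = var; best = t; }    // pick t maximizing it
    }
    return best;
}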