I need help identifying the border and comparing the images with the original image. I need guidance on how I can achieve this through processing or MATLAB, or anything suitable for a beginner.
If you want to detect your object in a more complex environment (rotation, deformation, scaling, changes of perspective), you need a more robust detection method. I suggest you look at what is called a "cascade classifier for Haar features"; OpenCV offers many functions to implement this method quickly. See this useful page.
Or you can look at this MATLAB example.
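For instance, a minimal MATLAB sketch using the Computer Vision Toolbox's vision.CascadeObjectDetector, assuming you have already trained a cascade with trainCascadeObjectDetector (the XML and image file names below are placeholders):
detector = vision.CascadeObjectDetector('cocaColaCascade.xml');  % cascade trained beforehand (hypothetical file)
img = imread('scene.jpg');                                       % image containing the logos (hypothetical file)
bboxes = step(detector, img);                                    % one [x y w h] row per detection
imshow(insertShape(img, 'Rectangle', bboxes, 'LineWidth', 3))    % draw the detections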
The "multiple image" you showed is easy enough to handle using just simple image processing, no need for template matching :)
% read the second image
img2 = imread('http://i.stack.imgur.com/zyHuj.jpg');
img2 = im2double(rgb2gray(img2));

% detect coca-cola logos
bw = im2bw(img2, graythresh(img2));                     % Otsu's thresholding
bw = imfill(~bw, 'holes');                              % fill holes
stats = regionprops(bw, {'Centroid', 'BoundingBox'});   % connected components

% show centers and bounding boxes of each connected component
centers = vertcat(stats.Centroid);
imshow(img2), hold on
plot(centers(:,1), centers(:,2), 'LineStyle','none', ...
    'Marker','x', 'MarkerSize',20, 'Color','r', 'LineWidth',3)
for i=1:numel(stats)
    rectangle('Position',stats(i).BoundingBox, ...
        'EdgeColor','g', 'LineWidth',3)
end
hold off
You can simplify the process proposed by @lennon310 by using the normxcorr2 function:
file1='http://i.stack.imgur.com/1KyJA.jpg';
file2='http://i.stack.imgur.com/zyHuj.jpg';
It = imread(file1);
Ii = imread(file2);
It=rgb2gray(It);
Ii=rgb2gray(Ii);
It=double(It); % template
Ii=double(Ii); % image
c=normxcorr2(It, Ii);
imagesc(c);
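The peak of c marks the best match; a small follow-up sketch (standard normxcorr2 usage) recovers its position. For several logos you would look for several local maxima instead of just the global one:
[ypeak, xpeak] = find(c == max(c(:)));   % location of the correlation peak
yoffset = ypeak - size(It, 1);           % offset of the matched region within Ii
xoffset = xpeak - size(It, 2);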
The simple way (you don't need to write any code): use Adaptive Vision Studio:
In summary, you need to add two filters, loadImage and LocateMultipleObjects_EdgeBased, and select the model to find :) It's good for a beginner because you don't need to write any advanced program. You can also try to solve it by detecting circles, with TemplateMatching_NCC, etc.
Below is a solution implemented in Java, using the Marvin image processing framework.
Approach:
Comparison method (inside diff plug-in):
For each pixel in the two logos, compare each color component. If the difference in one color component is higher than a given threshold, consider that pixel different between the two logos. Count the total number of differing pixels. If two logos have a number of differing pixels higher than another threshold, consider them different. IMPORTANT: This approach is very sensitive to rotation and perspective variation.
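The comparison itself happens inside Marvin's differenceColor plug-in, which is not listed below; the idea can be sketched in a few lines of MATLAB (the values mirror the colorRange of 30 and the 60% cut-off used later in the Java code, and A and B are assumed to be two RGB logo crops of the same size):
colorThreshold = 30;                                             % max per-channel difference for a pixel to still count as equal
pixelThreshold = 0.6;                                            % fraction of differing pixels above which the logos are considered different
diffMask = any(abs(double(A) - double(B)) > colorThreshold, 3);  % per-pixel, per-channel comparison
logosDiffer = nnz(diffMask) > pixelThreshold * numel(diffMask);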
Since your sample ("multiple image") has only coca logos, I took the liberty of including another logo in order to test the algorithm.
The Multiple Image 2
Output
In another test, I included two other similar coca logos. By changing the threshold parameters you can specify whether you want the exact same logo or will accept its variations. In the result below, the parameters were set to accept logo variations.
The Multiple Image 3
Output
Source code
import java.awt.Color;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;

import marvin.image.MarvinImage;
import marvin.io.MarvinImageIO;
import marvin.plugin.MarvinImagePlugin;
import marvin.util.MarvinAttributes;
import marvin.util.MarvinPluginLoader;

public class Logos {

    private MarvinImagePlugin threshold = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.thresholding");
    private MarvinImagePlugin fill      = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.fill.boundaryFill");
    private MarvinImagePlugin scale     = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.transform.scale");
    private MarvinImagePlugin diff      = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.difference.differenceColor");

    public Logos(){
        // 1. Load, segment and scale the object to be found
        MarvinImage target = segmentTarget();

        // 2. Load the image with multiple objects
        MarvinImage original = MarvinImageIO.loadImage("./res/logos/logos.jpg");
        MarvinImage image = original.clone();

        // 3. Segment
        threshold.process(image, image);
        MarvinImage image2 = new MarvinImage(image.getWidth(), image.getHeight());
        fill(image, image2);
        MarvinImageIO.saveImage(image2, "./res/logos/logos_fill.jpg");

        // 4. Filter segments by their masses
        LinkedHashSet<Integer> objects = filterByMass(image2, 10000);
        int[][] rects = getRects(objects, image2, original);
        MarvinImage[] subimages = getSubimages(rects, original);

        // 5. Compare the target object with each object in the other image
        compare(target, subimages, original, rects);

        MarvinImageIO.saveImage(original, "./res/logos/logos_out.jpg");
    }

    private void compare(MarvinImage target, MarvinImage[] subimages, MarvinImage original, int[][] rects){
        MarvinAttributes attrOut = new MarvinAttributes();
        for(int i=0; i<subimages.length; i++){
            diff.setAttribute("comparisonImage", subimages[i]);
            diff.setAttribute("colorRange", 30);
            diff.process(target, null, attrOut);
            // Accept the segment as a match if fewer than 60% of the 50x50 pixels differ
            if((Integer)attrOut.get("total") < (50*50)*0.6){
                original.drawRect(rects[i][0], rects[i][1], rects[i][2], rects[i][3], 6, Color.green);
            }
        }
    }

    private MarvinImage segmentTarget(){
        MarvinImage original = MarvinImageIO.loadImage("./res/logos/target.jpg");
        MarvinImage target = original.clone();
        threshold.process(target, target);
        MarvinImage image2 = new MarvinImage(target.getWidth(), target.getHeight());
        fill(target, image2);
        LinkedHashSet<Integer> objects = filterByMass(image2, 10000);
        int[][] rects = getRects(objects, image2, target);
        MarvinImage[] subimages = getSubimages(rects, original);
        return subimages[0];
    }

    private int[][] getRects(LinkedHashSet<Integer> objects, MarvinImage mask, MarvinImage original){
        List<int[]> ret = new ArrayList<int[]>();
        for(Integer color:objects){
            ret.add(getObjectRect(mask, color));
        }
        return ret.toArray(new int[0][0]);
    }

    private MarvinImage[] getSubimages(int[][] rects, MarvinImage original){
        List<MarvinImage> ret = new ArrayList<MarvinImage>();
        for(int[] r:rects){
            ret.add(getSubimage(r, original));
        }
        return ret.toArray(new MarvinImage[0]);
    }

    private MarvinImage getSubimage(int rect[], MarvinImage original){
        MarvinImage img = original.subimage(rect[0], rect[1], rect[2], rect[3]);
        MarvinImage ret = new MarvinImage(50,50);
        scale.setAttribute("newWidth", 50);
        scale.setAttribute("newHeight", 50);
        scale.process(img, ret);
        return ret;
    }

    private void fill(MarvinImage imageIn, MarvinImage imageOut){
        boolean found;
        int color = 0xFFFF0000;
        while(true){
            found = false;
            Outerloop:
            for(int y=0; y<imageIn.getHeight(); y++){
                for(int x=0; x<imageIn.getWidth(); x++){
                    if(imageOut.getIntColor(x,y) == 0 && imageIn.getIntColor(x, y) != 0xFFFFFFFF){
                        fill.setAttribute("x", x);
                        fill.setAttribute("y", y);
                        fill.setAttribute("color", color);
                        fill.setAttribute("threshold", 120);
                        fill.process(imageIn, imageOut);
                        color = newColor(color);
                        found = true;
                        break Outerloop;
                    }
                }
            }
            if(!found){
                break;
            }
        }
    }

    private LinkedHashSet<Integer> filterByMass(MarvinImage image, int mass){
        boolean found;
        HashSet<Integer> analysed = new HashSet<Integer>();
        LinkedHashSet<Integer> ret = new LinkedHashSet<Integer>();
        while(true){
            found = false;
            outerLoop:
            for(int y=0; y<image.getHeight(); y++){
                for(int x=0; x<image.getWidth(); x++){
                    int color = image.getIntColor(x,y);
                    if(color != 0){
                        if(!analysed.contains(color)){
                            if(getMass(image, color) >= mass){
                                ret.add(color);
                            }
                            analysed.add(color);
                            found = true;
                            break outerLoop;
                        }
                    }
                }
            }
            if(!found){
                break;
            }
        }
        return ret;
    }

    private int getMass(MarvinImage image, int color){
        int total = 0;
        for(int y=0; y<image.getHeight(); y++){
            for(int x=0; x<image.getWidth(); x++){
                if(image.getIntColor(x, y) == color){
                    total++;
                }
            }
        }
        return total;
    }

    private int[] getObjectRect(MarvinImage mask, int color){
        int x1=-1;
        int x2=-1;
        int y1=-1;
        int y2=-1;
        for(int y=0; y<mask.getHeight(); y++){
            for(int x=0; x<mask.getWidth(); x++){
                if(mask.getIntColor(x, y) == color){
                    if(x1 == -1 || x < x1){
                        x1 = x;
                    }
                    if(x2 == -1 || x > x2){
                        x2 = x;
                    }
                    if(y1 == -1 || y < y1){
                        y1 = y;
                    }
                    if(y2 == -1 || y > y2){
                        y2 = y;
                    }
                }
            }
        }
        return new int[]{x1, y1, (x2-x1), (y2-y1)};
    }

    private int newColor(int color){
        int red   = (color & 0x00FF0000) >> 16;
        int green = (color & 0x0000FF00) >> 8;
        int blue  = (color & 0x000000FF);
        if(red <= green && red <= blue){
            red += 5;
        }
        else if(green <= red && green <= blue){
            green += 5;
        }
        else{
            blue += 5;
        }
        return 0xFF000000 + (red << 16) + (green << 8) + blue;
    }

    public static void main(String[] args) {
        new Logos();
    }
}
You can use a correlation method to find the positions of the multiple images:
file1='http://i.stack.imgur.com/1KyJA.jpg';
file2='http://i.stack.imgur.com/zyHuj.jpg';
It = imread(file1);
Ii = imread(file2);
It=rgb2gray(It);
Ii=rgb2gray(Ii);
It=double(It); % template
Ii=double(Ii); % image
% local mean of the image under the template window, and the template mean
Ii_mean = conv2(Ii,ones(size(It))./numel(It),'same');
It_mean = mean(It(:));
% cross-correlation terms (convolution with the flipped, zero-mean template)
corr_1 = conv2(Ii,rot90(It-It_mean,2),'same')./numel(It);
corr_2 = Ii_mean.*sum(It(:)-It_mean);
% local standard deviation of the image and standard deviation of the template
conv_std = sqrt(conv2(Ii.^2,ones(size(It))./numel(It),'same')-Ii_mean.^2);
It_std = std(It(:));
% normalized cross-correlation map
S = (corr_1-corr_2)./(conv_std.*It_std);
imagesc(abs(S))
The result will give you the positions of the maximum values:
Get the coordinates of the maxima, position your template centroid at each of those locations, and check the difference between your template and the matched region.
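For example, a small sketch of those two steps (the 0.5 threshold is illustrative, and boundary checks are omitted):
peaks = imregionalmax(abs(S)) & abs(S) > 0.5;   % candidate centroids: strong local maxima of the correlation map
[rows, cols] = find(peaks);
[h, w] = size(It);
for k = 1:numel(rows)
    patch = Ii(rows(k)-floor(h/2) : rows(k)+ceil(h/2)-1, ...
               cols(k)-floor(w/2) : cols(k)+ceil(w/2)-1);   % template-sized patch centred on the maximum
    d = mean(abs(patch(:) - It(:)));                        % small d means the patch matches the template
end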
I am not sure what you mean by "identify the border", but you can always extract the edges with a Canny detector:
bw = edge(It, 'canny');
bw = imfill(bw, 'holes');
figure, imshow(bw)