Project: Content-Based Image Retrieval, semi-supervised (manual tagging is done on the images during training)
Description
I have 1,000,000 images in the database. The training is manual (supervised): a title and tags are provided for each image. Example: coke.jpg, Title: Coke, Tags: Coke, Can
Using the images and tags, I have to train the system. After training, when I give it a new image (either already in the database or completely new), the system should output the possible tags the image may belong to and display a few images belonging to each tag. The system may also report that no match was found.
Questions:
1) What is meant by an image fingerprint? What fingerprint size should be expected? (Important because millions of images will be inserted into the database.)
2) What field format should the fingerprint have in the database? (Important because a fast search is needed … the script should search a 1M-image database in less than 1 second.)
3) What descriptors (algorithms) should we use to analyze the images?
Thanks in advance
Well, this topic is very broad, but here is a brief overview of a possible solution:
Image fingerprints are collections of SIFT descriptors. These are quantized both to reduce their size and to allow indexing.
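As a concrete illustration of that first step, here is a minimal sketch using OpenCV's SIFT implementation and scikit-learn's MiniBatchKMeans for the quantization. The vocabulary size of 10,000 visual words is an arbitrary assumption, not something prescribed above.

```python
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def extract_sift(image_path):
    """Detect keypoints and compute 128-D SIFT descriptors for one image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors        # descriptors: (num_keypoints, 128) float32

def train_vocabulary(all_descriptors, vocab_size=10_000):
    """Quantize descriptor space into `vocab_size` visual words (cluster centres)."""
    kmeans = MiniBatchKMeans(n_clusters=vocab_size, batch_size=10_000, random_state=0)
    kmeans.fit(np.vstack(all_descriptors))
    return kmeans

def fingerprint(descriptors, kmeans):
    """An image's fingerprint: the set of visual-word IDs its descriptors map to."""
    return set(kmeans.predict(descriptors))
```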
Build an inverted index of your database to allow looking up images by their quantized descriptors (you can use any full-text search engine / DB for this).
Given a query image, look up the images that share a large number of descriptors with it (see the sketch below).
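To make the inverted-index lookup concrete, here is a sketch using a plain Python dict; at the scale described in the question, a full-text engine or database would play this role instead. The `top_k` cutoff is an assumed parameter.

```python
from collections import defaultdict, Counter

# Inverted index: visual-word ID -> list of image IDs containing that word.
inverted_index = defaultdict(list)

def index_image(image_id, word_ids):
    """Add one image's fingerprint (visual-word IDs) to the index."""
    for w in set(word_ids):
        inverted_index[w].append(image_id)

def candidate_images(query_word_ids, top_k=50):
    """Rank database images by how many visual words they share with the query."""
    votes = Counter()
    for w in set(query_word_ids):
        for image_id in inverted_index.get(w, []):
            votes[image_id] += 1
    return votes.most_common(top_k)
```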
For those candidates, you should then verify that the spatial arrangement of the descriptors is similar enough.
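A common way to perform that spatial check is to fit a RANSAC homography over the matched keypoints and count inliers. The sketch below uses OpenCV's brute-force matcher and Lowe's ratio test; the 0.75 ratio and the threshold of 15 inliers are assumptions to tune, not fixed rules.

```python
import cv2
import numpy as np

def spatially_consistent(kp_query, desc_query, kp_cand, desc_cand, min_inliers=15):
    """Verify a candidate by fitting a RANSAC homography between matched keypoints."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc_query, desc_cand, k=2)
    # Lowe's ratio test to discard ambiguous matches.
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < 4:                    # findHomography needs at least 4 correspondences
        return False
    src = np.float32([kp_query[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_cand[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return mask is not None and int(mask.sum()) >= min_inliers
```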
Some articles to get you started:
Mikulík, Andrej, et al. "Learning a fine vocabulary." Computer Vision–ECCV 2010 (2010): 1-14.
I would suggest training an SVM model on a list of image features extracted from the training images.
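For instance, with scikit-learn the multi-tag setup can be handled with one linear SVM per tag (one-vs-rest). The sketch below uses random feature vectors and toy tag lists purely as stand-ins for real extracted features and the manually provided tags.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

# Toy stand-ins: 6 images with 512-D feature vectors (e.g. bag-of-visual-words
# histograms) and their manually provided tag lists.
rng = np.random.default_rng(0)
features = rng.random((6, 512))
tags = [["Coke", "Can"], ["Coke"], ["Pepsi", "Can"], ["Pepsi"], ["Can"], ["Coke", "Can"]]

binarizer = MultiLabelBinarizer()
Y = binarizer.fit_transform(tags)          # one binary column per tag

# One linear SVM per tag (one-vs-rest) matches the multi-label tagging setup.
model = OneVsRestClassifier(LinearSVC())
model.fit(features, Y)

def predict_tags(feature_vector):
    """Tags whose per-tag SVM fires for a new image; an empty list means no match."""
    pred = model.predict(feature_vector.reshape(1, -1))
    return list(binarizer.inverse_transform(pred)[0])

print(predict_tags(rng.random(512)))
```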
- Image fingerprint: a meaningful representation of the image. You can't use the raw pixels, of course. The most rational approach is to minimise the correlation between the basis components. In simple words, if you take a 64x64 image, the two pixels at the top-left corner will probably be the same or similar, so it's useless to feed in each of the 64^2 pixels individually; you need something better. Have a look at what Principal Component Analysis does (see the sketch after this list).
- It's entirely up to you. Taking it to the extreme, you could use a single bit that tells you whether the image is dark or not. Better, run PCA on the image and experiment with different numbers of features (it's not always the case that more features are better).
- Whatever you want; there are a lot of algorithms you can use. I'd recommend Support Vector Machines: easy to use and well supported. If you have a lot of different tags, you will probably have to train one SVM for each tag (the one-vs-rest sketch in the previous answer shows that setup). That may not be ideal, and you may want to try something else.
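For the PCA point in the first two bullets, here is a minimal sketch with scikit-learn; the 64x64 images are random stand-ins and the choice of 50 components is an arbitrary assumption to experiment with.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy stand-in: 100 greyscale images of size 64x64, flattened to 4096-D vectors.
rng = np.random.default_rng(0)
images = rng.random((100, 64 * 64))

# PCA finds decorrelated directions of maximum variance, so neighbouring pixels
# that carry nearly the same value (e.g. the top-left corner) are compressed away.
pca = PCA(n_components=50)          # 50 is an arbitrary choice; tune it
features = pca.fit_transform(images)

print(features.shape)                       # (100, 50) compact fingerprints
print(pca.explained_variance_ratio_.sum())  # variance retained by 50 components
```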
Source: https://stackoverflow.com/questions/13507556/can-anyone-suggest-good-algorithms-for-cbir