How do you Index Files for Fast Searches?

太阳男子 · 2021-01-31 12:51

Nowadays, Microsoft and Google index the files on your hard drive so that you can search their contents quickly.

What I want to know is: how do they do this? Can you explain?

3 Answers
  • 2021-01-31 13:10

    Here's a really basic description; for more details, you can read this textbook (free online): http://informationretrieval.org/¹

    1) For all files, create an index. The index consists of all unique words that occur in your dataset (called a "corpus"). A list of document ids is associated with each word; each document id refers to a document that contains the word.

    Variations: sometimes when you generate the index you want to ignore stop words ("a", "the", etc.). You have to be careful, though ("to be or not to be" is a real query composed entirely of stop words).

    Sometimes you also stem the words. This has more impact on search quality in non-English languages that use suffixes and prefixes to a greater extent.

    2) When a user enters a query, look up the corresponding lists and merge them. If it's a strict boolean query, the process is pretty straightforward -- for AND, a docid has to occur in all the word lists; for OR, in at least one word list; and so on.

    3) If you want to rank your results, there are a number of ways to do that, but the basic idea is to use the frequency with which a word occurs in a document, compared to the frequency you'd expect it to occur in any document in the corpus, as a signal that the document is more or less relevant. See the textbook for details.

    4) You can also store word positions to infer phrases, etc.
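
    As a concrete illustration, here is a minimal Java sketch of steps 1-3. The class name, the toy corpus, and the scoring details are my own, purely for illustration -- a real engine keeps its postings lists on disk in compressed form rather than in hash maps.

    ```java
    import java.util.*;

    // A toy in-memory inverted index covering steps 1-3 above.
    // Everything here (class name, corpus, scoring) is illustrative only.
    public class TinyIndex {
        // word -> ids of documents containing it (the postings list)
        private final Map<String, Set<Integer>> postings = new HashMap<>();
        // word -> (docid -> term frequency), kept for ranking
        private final Map<String, Map<Integer, Integer>> termFreq = new HashMap<>();
        private int docCount = 0;

        // Step 1: tokenize the document and record each word's postings.
        public void addDocument(int docId, String text) {
            docCount++;
            for (String word : text.toLowerCase().split("\\W+")) {
                if (word.isEmpty()) continue;
                postings.computeIfAbsent(word, w -> new TreeSet<>()).add(docId);
                termFreq.computeIfAbsent(word, w -> new HashMap<>())
                        .merge(docId, 1, Integer::sum);
            }
        }

        // Step 2: strict boolean AND -- intersect the postings lists.
        public Set<Integer> and(String... words) {
            Set<Integer> result = null;
            for (String w : words) {
                Set<Integer> docs = postings.getOrDefault(w.toLowerCase(), Set.of());
                if (result == null) result = new TreeSet<>(docs);
                else result.retainAll(docs);
            }
            return result == null ? Set.of() : result;
        }

        // Step 3: rank matches by a crude tf-idf: frequency in the document,
        // weighted by how rare the word is across the whole corpus.
        public List<Map.Entry<Integer, Double>> rankedAnd(String... words) {
            Map<Integer, Double> scores = new HashMap<>();
            for (int docId : and(words)) {
                double score = 0;
                for (String w : words) {
                    String word = w.toLowerCase();
                    int tf = termFreq.get(word).get(docId);
                    double idf = Math.log((double) docCount / postings.get(word).size());
                    score += tf * idf;
                }
                scores.put(docId, score);
            }
            List<Map.Entry<Integer, Double>> ranked = new ArrayList<>(scores.entrySet());
            ranked.sort(Map.Entry.<Integer, Double>comparingByValue().reversed());
            return ranked;
        }

        public static void main(String[] args) {
            TinyIndex idx = new TinyIndex();
            idx.addDocument(0, "to be or not to be");
            idx.addDocument(1, "the question of being");
            idx.addDocument(2, "not a question");
            System.out.println(idx.and("not", "question")); // [2]
            System.out.println(idx.rankedAnd("question"));  // docs 1 and 2, equal scores
        }
    }
    ```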

    Most of that is irrelevant for desktop search, as you are more interested in recall (all documents that include the term) than ranking.


    ¹ Previously at http://www-csli.stanford.edu/~hinrich/information-retrieval-book.html; accessible via the Wayback Machine.

  • 2021-01-31 13:13

    The simple case is an inverted index.

    The most basic algorithm is simply:

    • scan the file for words, creating a list of unique words
    • normalize and filter the words
    • place an entry tying that word to the file in your index

    The details are where things get tricky, but the fundamentals are the same.

    By "normalize and filter" the words, I mean things like converting everything to lowercase, removing common "stop words" (the, if, in, a etc.), possibly "stemming" (removing common suffixes for verbs and plurals and such).

    After that, you've got a unique list of words for the file, and you can build your index from that.
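
    Here is a minimal Java sketch of that normalize-and-filter step. The stop-word list and the crude suffix-stripping rules are illustrative assumptions; a real stemmer (Porter's algorithm, for example) is far more careful.

    ```java
    import java.util.*;
    import java.util.stream.*;

    // A sketch of "normalize and filter": lowercase, drop stop words, stem.
    public class Normalizer {
        private static final Set<String> STOP_WORDS =
                Set.of("the", "if", "in", "a", "an", "of", "to", "is");

        public static Set<String> uniqueWords(String text) {
            return Arrays.stream(text.toLowerCase().split("\\W+"))
                    .filter(w -> !w.isEmpty() && !STOP_WORDS.contains(w))
                    .map(Normalizer::stem)
                    .collect(Collectors.toCollection(TreeSet::new));
        }

        // Very naive stemming: strip a few common English suffixes.
        private static String stem(String word) {
            for (String suffix : new String[] {"ing", "es", "s", "ed"}) {
                if (word.length() > suffix.length() + 2 && word.endsWith(suffix)) {
                    return word.substring(0, word.length() - suffix.length());
                }
            }
            return word;
        }

        public static void main(String[] args) {
            // "dogs" and "dog" normalize to the same index entry.
            System.out.println(uniqueWords("The dogs chased a dog into the barn"));
            // -> [barn, chas, dog, into]
        }
    }
    ```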

    There are optimizations for reducing storage and techniques for checking the locality of words (is "this" near "that" in the document, for example).
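
    A positional index is one common way to support that locality check: for each word, store not just which documents contain it but the offsets where it occurs. A rough Java sketch (the names and the distance threshold are my own):

    ```java
    import java.util.*;

    // A positional index: word -> docId -> list of word offsets in that doc,
    // so "this NEAR that" queries can be answered.
    public class PositionalIndex {
        private final Map<String, Map<Integer, List<Integer>>> index = new HashMap<>();

        public void addDocument(int docId, String text) {
            String[] words = text.toLowerCase().split("\\W+");
            for (int pos = 0; pos < words.length; pos++) {
                index.computeIfAbsent(words[pos], w -> new HashMap<>())
                     .computeIfAbsent(docId, d -> new ArrayList<>()).add(pos);
            }
        }

        // True if both words occur in the document within maxDistance positions.
        public boolean near(int docId, String a, String b, int maxDistance) {
            for (int i : positions(a, docId))
                for (int j : positions(b, docId))
                    if (Math.abs(i - j) <= maxDistance) return true;
            return false;
        }

        private List<Integer> positions(String word, int docId) {
            return index.getOrDefault(word.toLowerCase(), Map.of())
                        .getOrDefault(docId, List.of());
        }

        public static void main(String[] args) {
            PositionalIndex idx = new PositionalIndex();
            idx.addDocument(0, "is this anywhere near that thing");
            System.out.println(idx.near(0, "this", "that", 3)); // true: 3 words apart
        }
    }
    ```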

    But that's the fundamental way it's done.

  • 2021-01-31 13:16

    You could always look into something like Apache Lucene.

    Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search, especially cross-platform.
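
    As a taste of the API, here is a minimal sketch that indexes one document and searches it back. It assumes a recent Lucene release (8.x/9.x, where the in-memory ByteBuffersDirectory exists); the field names and the file path are my choices for the example.

    ```java
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.*;
    import org.apache.lucene.index.*;
    import org.apache.lucene.queryparser.classic.QueryParser;
    import org.apache.lucene.search.*;
    import org.apache.lucene.store.ByteBuffersDirectory;
    import org.apache.lucene.store.Directory;

    public class LuceneDemo {
        public static void main(String[] args) throws Exception {
            Directory dir = new ByteBuffersDirectory();
            StandardAnalyzer analyzer = new StandardAnalyzer();

            // Index a document: Lucene tokenizes, lowercases, and filters for us.
            try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
                Document doc = new Document();
                doc.add(new StringField("path", "/home/user/notes.txt", Field.Store.YES));
                doc.add(new TextField("contents",
                        "how to index files for fast searches", Field.Store.NO));
                writer.addDocument(doc);
            }

            // Search it back, ranked by Lucene's built-in scoring.
            try (DirectoryReader reader = DirectoryReader.open(dir)) {
                IndexSearcher searcher = new IndexSearcher(reader);
                Query query = new QueryParser("contents", analyzer).parse("fast searches");
                for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
                    System.out.println(searcher.doc(hit.doc).get("path"));
                }
            }
        }
    }
    ```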
