How does Google use HTML tags to enhance its search engine?

夕颜 · 2021-02-20 09:47

I know that Google's search algorithm is largely based on PageRank. However, it also analyzes and uses the structure of the document (H1, H2, and other tags). How does Google use these HTML tags to improve its search results?

14 Answers
  • 刺人心 (OP) · 2021-02-20 10:31

    In short: very carefully. In long:

    Quote from The Anatomy of a Large-Scale Hypertextual Web Search Engine:

    [...] This gives us some limited phrase searching as long as there are not that many anchors for a particular word. We expect to update the way that anchor hits are stored to allow for greater resolution in the position and docIDhash fields. We use font size relative to the rest of the document because when searching, you do not want to rank otherwise identical documents differently just because one of the documents is in a larger font. [...]
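    The point of that passage is that hit weights are scaled by font size relative to the document's own baseline, not by absolute size. Here is a minimal Python sketch of that idea (my illustration, not Google's code; using the median size as the baseline is an assumption):

        # Weight each term occurrence by its font size relative to the
        # document's prevailing (median) font size, so two identical
        # documents rendered at different absolute sizes score the same.
        from collections import defaultdict
        from statistics import median

        def relative_font_weights(hits):
            """hits: list of (term, font_size_px) pairs for one document."""
            if not hits:
                return {}
            base = median(size for _, size in hits)   # document's "normal" size
            weights = defaultdict(float)
            for term, size in hits:
                weights[term] += size / base          # relative, not absolute
            return dict(weights)

        # Same structure, different absolute font sizes -> identical weights
        doc_a = [("search", 32), ("engine", 32), ("google", 16), ("uses", 16)]
        doc_b = [("search", 16), ("engine", 16), ("google", 8), ("uses", 8)]
        print(relative_font_weights(doc_a) == relative_font_weights(doc_b))  # True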

    It goes on:

    [...] Another big difference between the web and traditional well controlled collections is that there is virtually no control over what people can put on the web. Couple this flexibility to publish anything with the enormous influence of search engines to route traffic and companies which deliberately manipulating search engines for profit become a serious problem. This problem that has not been addressed in traditional closed information retrieval systems. Also, it is interesting to note that metadata efforts have largely failed with web search engines, because any text on the page which is not directly represented to the user is abused to manipulate search engines. [...]

    The paper Challenges in Web Search Engines addresses these issues in a more modern fashion:

    [...] Web pages in HTML fall into the middle of this continuum of structure in documents, being neither close to free text nor to well-structured data. Instead HTML markup provides limited structural information, typically used to control layout but providing clues about semantic information. Layout information in HTML may seem of limited utility, especially compared to information contained in languages like XML that can be used to tag content, but in fact it is a particularly valuable source of meta-data in unreliable corpora such as the web. The value in layout information stems from the fact that it is visible to the user [...]

    And adds:

    [...] HTML tags can be analyzed for what semantic information can be inferred. In addition to the header tags mentioned above, there are tags that control the font face (bold, italic), size, and color. These can be analyzed to determine which words in the document the author thinks are particularly important. One advantage of HTML, or any markup language that maps very closely to how the content is displayed, is that there is less opportunity for abuse: it is difficult to use HTML markup in a way that encourages search engines to think the marked text is important, while to users it appears unimportant. For instance, the fixed meaning of the <h1> tag means that any text in an h1 context will appear prominently on the rendered web page, so it is safe for search engines to weigh this text highly. However, the reliability of HTML markup is decreased by Cascading Style Sheets, which separate the names of tags from their representation. There has been research in extracting information from what structure HTML does possess. For instance, [Chakrabarti et al., 2001; Chakrabarti, 2001] created a DOM tree of an HTML page and used this information to increase the accuracy of topic distillation, a link-based analysis technique.
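    To make the tag-weighting idea concrete, here is a small Python sketch using only the standard library. The tag-to-weight table is my own assumption for illustration; the weights real engines use are not public.

        # Walk the markup with html.parser.HTMLParser and boost terms that
        # appear inside heading or emphasis tags, as the quoted passage describes.
        from collections import defaultdict
        from html.parser import HTMLParser

        TAG_WEIGHTS = {"h1": 4.0, "h2": 3.0, "h3": 2.0,
                       "b": 1.5, "strong": 1.5, "em": 1.3, "i": 1.3}  # assumed values

        class TermWeighter(HTMLParser):
            def __init__(self):
                super().__init__()
                self.stack = []                      # currently open tags
                self.weights = defaultdict(float)

            def handle_starttag(self, tag, attrs):
                self.stack.append(tag)

            def handle_endtag(self, tag):
                for i in range(len(self.stack) - 1, -1, -1):
                    if self.stack[i] == tag:         # close the most recent match
                        del self.stack[i]
                        break

            def handle_data(self, data):
                # a term inherits the strongest boost among its enclosing tags
                boost = max((TAG_WEIGHTS.get(t, 1.0) for t in self.stack), default=1.0)
                for term in data.lower().split():
                    self.weights[term] += boost

        parser = TermWeighter()
        parser.feed("<h1>Search engines</h1><p>Engines crawl and <b>index</b> pages.</p>")
        print(dict(parser.weights))
        # {'search': 4.0, 'engines': 5.0, 'crawl': 1.0, 'and': 1.0, 'index': 1.5, 'pages.': 1.0}

    Note that a <span> styled with CSS to look like a heading gets no boost here, which is exactly the reliability caveat the quote raises about stylesheets separating tag names from their presentation.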

    There are a number of issues a modern search engine needs to combat, for example web spam and black-hat SEO schemes; a toy sketch of the content-analysis approach appears after the list below:

    • Combating Web Spam with TrustRank
    • Web Spam Taxonomy
    • Detecting Spam Web Pages through Content Analysis
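    As mentioned above, here is a toy sketch of one content-analysis signal (my own illustration, not taken from the papers): a keyword-stuffed page tends to have a single term dominating the text. The 0.15 threshold is an arbitrary assumption for the example.

        from collections import Counter

        def looks_stuffed(text, threshold=0.15):
            """Flag a page whose single most frequent term dominates its text."""
            terms = text.lower().split()
            if len(terms) < 20:                      # too short to judge
                return False
            _, top_count = Counter(terms).most_common(1)[0]
            return top_count / len(terms) > threshold

        normal = "a short article about how search engines analyse html structure " * 3
        spammy = "cheap flights cheap hotels cheap flights cheap deals cheap flights " * 3
        print(looks_stuffed(normal), looks_stuffed(spammy))  # False True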

    But even in a perfect world, e.g. after eliminating the bad apples from the index, the web is still an utter mess because no two sites share an identical structure. There are maps, games, video, photos (Flickr) and lots and lots of user-generated content. In other words, the web is still very unpredictable.

    Resources

    • Hypertext and the web:
      • Extracting Knowledge from the World Wide Web
      • Rich Media and Web 2.0
      • Thresher: Automating the Unwrapping of Semantic Content from the World Wide Web
      • Information Retrieval
    • Web spam papers:
      • Combating Web Spam with TrustRank
      • Web Spam Taxonomy
      • Detecting Spam Web Pages through Content Analysis
