How does Google find relevant content when it's parsing the web?
Let's say, for instance, Google uses the PHP native DOM Library to parse content. What methods would t
Tricky, but I'll take a stab:
An image (if applicable)
A paragraph of fewer than 255 characters taken from the best slice of text
Keywords that would be used by our search engine (Stack Overflow style)
Metadata: keywords, description, all images, and a change log (for moderation and administration purposes)
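A minimal sketch of pulling those pieces out. The question mentions PHP's DOM library, but this sketch uses Python's stdlib `html.parser`; the sample HTML and the 255-character cut-off just follow the list above and are not anyone's real extraction pipeline.

```python
from html.parser import HTMLParser

class SnippetExtractor(HTMLParser):
    """Collects the pieces listed above: title, first image,
    meta description/keywords, and a short (<255 char) paragraph."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.first_image = None
        self.meta = {}
        self.snippet = ""
        self._tag = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        self._tag = tag
        if tag == "img" and self.first_image is None:
            self.first_image = attrs.get("src")
        elif tag == "meta" and "name" in attrs:
            self.meta[attrs["name"].lower()] = attrs.get("content", "")

    def handle_endtag(self, tag):
        self._tag = None

    def handle_data(self, data):
        if self._tag == "title":
            self.title += data
        elif self._tag == "p" and len(self.snippet) < 255:
            # keep the snippet under 255 characters, as suggested above
            self.snippet = (self.snippet + " " + data.strip()).strip()[:255]

html = """<html><head><title>Icebergs</title>
<meta name="description" content="All about icebergs">
</head><body><img src="berg.jpg"><p>Icebergs are large chunks of ice.</p></body></html>"""

p = SnippetExtractor()
p.feed(html)
```

After `feed()`, `p.title`, `p.first_image`, `p.meta` and `p.snippet` hold the four items from the list.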
This is a very general question but a very nice topic! Definitely upvoted :) However, I am not satisfied with the answers provided so far, so I decided to write a rather lengthy answer of my own.
The reason I am not satisfied is that the answers are basically all true (I especially like kovshenin's answer (+1), which is very graph-theory related), but they are all either too specific about certain factors or too general.
It's like asking how to bake a cake and getting back a list of ingredients: you won't be satisfied, because you want to know what makes a good cake. And of course there are a lot of recipes.
Of course Google is the most important player, but, depending on the use case, a search engine might include very different factors or weight them differently.
For example, a search engine for discovering new independent music artists may penalize artists' websites that have a lot of inbound external links.
A mainstream search engine will probably do the exact opposite to provide you with "relevant results".
There are (as already said) over 200 factors published by Google, so webmasters know how to optimize their websites. There are very likely many more that the public is not aware of (in Google's case).
But in the very broad and abstract field of SEO (search engine optimization), you can generally break the important factors into two groups:
How well does the answer match the question? Or: How well does the pages content match the search terms?
How popular/good is the answer? Or: What's the pagerank?
In both cases the important thing is that I am not talking about whole websites or domains; I am talking about single pages with a unique URL.
It's also important to note that PageRank doesn't represent all factors, only the ones that Google categorizes as popularity. And by "good" I mean other factors that just have nothing to do with popularity.
In case of Google the official statement is that they want to give relevant results to the user. Meaning that all algorithms will be optimized towards what the user wants.
So after this long introduction (glad you are still with me...) I will give you a list of factors that I consider to be very important (at the moment):
Category 1 (how well does the answer match the question?)
You will notice that a lot comes down to the structure of the document!
Meaning: the question words appear in the page's title text or in heading paragraphs. The same goes for the position of these keywords: the earlier in the page, the better. Repetition helps as well (as long as it's not too much, which goes under the name of keyword stuffing).
The whole website deals with the topic (keywords appear in the domain/subdomain)
The words are an important topic on this page (internal link anchor texts jump to positions of the keyword, or anchor texts / link texts contain the keyword).
The same goes if external links use the keyword in their link text to link to this page.
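As a toy illustration, the Category 1 signals above (title, headings, position, frequency) can be combined into a score like this. The weights are made up for the sketch and are certainly not Google's.

```python
import re

def match_score(query, title, headings, body):
    """Toy relevance score for Category 1: invented weights,
    just to show how the listed signals could combine."""
    words = query.lower().split()
    body_tokens = re.findall(r"\w+", body.lower())
    score = 0.0
    for w in words:
        if w in title.lower():
            score += 3.0                      # keyword in the page title
        if any(w in h.lower() for h in headings):
            score += 2.0                      # keyword in a heading
        count = body_tokens.count(w)
        # repetition helps, but with diminishing returns
        # (a crude guard against keyword stuffing)
        score += min(count, 5) * 0.5
        if w in body_tokens:
            # earlier occurrences are worth more
            pos = body_tokens.index(w)
            score += 1.0 / (1 + pos)
    return score

s = match_score("iceberg", "All about icebergs? The iceberg FAQ",
                ["What is an iceberg"], "An iceberg is ice. The iceberg floats.")
```

A real ranker would normalize by document length and use learned weights; this only shows the shape of the computation.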
Category 2 (how important/popular is the page?)
You will notice that not all factors point towards this exact goal. Some are included (especially by Google) just to give a boost to pages that... well... just deserved/earned it.
The existence of unique content that can't be found (or only rarely) elsewhere on the web gives a boost. This is mostly measured by unordered combinations of words that are generally used very little (important words). But there are much more sophisticated methods as well.
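The "unordered combinations of words" measure can be approximated with word shingles and Jaccard overlap, a standard near-duplicate detection technique (not necessarily what Google actually uses):

```python
def shingles(text, n=3):
    """Unordered n-word combinations, as described above."""
    words = text.lower().split()
    return {frozenset(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Overlap between two shingle sets: 1.0 = duplicate, 0.0 = unique."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

sim = jaccard("icebergs are large floating chunks of ice",
              "icebergs are large floating chunks of frozen water")
```

A low score against the rest of the corpus would indicate unique content; production systems use MinHash or SimHash to make this comparison scale.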
Recency - newer is better
Historical change (how often the page has been updated in the past; changing is good).
External link popularity (how many links in?)
If a page links to another page, the link is worth more if the linking page itself has a high PageRank.
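That "a link from a high-PageRank page is worth more" rule is exactly what the iterative PageRank computation expresses. A minimal version (0.85 is the commonly published damping factor; dangling pages are not handled in this sketch):

```python
def pagerank(links, damping=0.85, iters=50):
    """Iterative PageRank: each page splits its rank among its out-links,
    so a link from a high-rank page is worth more to its targets."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# A links to B and C; B and C both link back to A, so A ends up highest.
r = pagerank({"A": ["B", "C"], "B": ["A"], "C": ["A"]})
```

The ranks always sum to 1, and by symmetry B and C come out equal here; the book recommended further down covers why this iteration converges.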
Basically, links from many different root domains count for more, but other factors play a role too, even how geographically separated the web servers of the linking sites are (according to their IP addresses).
For example, if big, trusted, established sites with editorial content link to you, you earn trust. That's why a link from The New York Times is worth much more than one from some strange new website, even if that site's PageRank is higher!
Your whole website gives a boost to your content if your domain is trusted. Different factors count here: of course links from trusted sites to your domain, but it even helps to be in the same data center as important websites.
If websites that can be resolved to a topic link to you, and the query can be resolved to the same topic, that's good.
If you earn a lot of inbound links in a short period of time, this helps you at that time and in the near future, but not so much later on. If you earn links slowly and steadily, it helps content that is "timeless".
A link from a .gov domain is worth a lot.
What's the click-through rate of your search result?
Google Analytics tracking, etc. It's also tracked whether the user clicks back or clicks another result after opening yours.
Votes, rating, etc., references in Gmail, etc.
Now I will introduce a third category; one or two points from above would actually fit into it, but I hadn't thought of that earlier. The category is:
**How important/good is your website in general?**
All your pages will be ranked up a bit depending on the quality of your website.
Factors include:
Good site architecture (easy to navigate, well structured, sitemaps, etc.)
How established the domain is (long-existing domains are worth more).
Hoster information (what other websites are hosted near you?)
Search frequency of your exact name.
Last, but not least, I want to say that a lot of these factors can be enriched by semantic technology, and new ones can be introduced.
For example, someone may search for Titanic while you have a website about icebergs... that correlation can be detected and may be reflected in the results.
Newly introduced semantic identifiers, for example OWL tags, may have a huge impact in the future.
For example a blog about the movie Titanic could put a sign on this page that it's the same content as on the Wikipedia article about the same movie.
This kind of linking is currently under heavy development and establishment and nobody knows how it will be used.
Maybe duplicate content is filtered, and only the most important version of the same content is displayed? Or maybe the other way round: you get presented with a lot of pages that match your query, even if they don't contain your keywords?
Google even weights factors differently depending on the topic of your search query!
I'd just grab the first 'paragraph' of text. The way most people write stories/problems/whatever is that they first state the most important thing and then elaborate. If you look at almost any random text, you'll see this holds most of the time.
For example, you do it yourself in your original question. If you take the first three sentences of your original question, you have a pretty good summary of what you are trying to do.
And, I just did it myself too: the gist of my comment is summarized in the first paragraph. The rest is just examples and elaborations. If you're not convinced, take a look at a few recent articles I semi-randomly picked from Google News. Ok, that last one was not semi-random, I admit ;)
Anyway, I think this is a really simple approach that works most of the time. You can always look at meta descriptions, titles and keywords, but if they aren't there, this might be an option.
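As a sketch, the "first paragraph" heuristic is a few lines once you split sentences (the regex splitter here is naive and will trip over abbreviations, but it is fine for an illustration):

```python
import re

def summary(text, max_sentences=3):
    """Naive lead-based summary: the first few sentences usually
    state the most important thing, per the reasoning above."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[:max_sentences])

lead = summary("Google parses the web. How does it find relevant content? "
               "Say it uses a DOM parser. Then it walks the tree.")
```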
Hope this helps.
To answer one of your questions, I am reading the following book right now, and I recommend it: Google's PageRank and Beyond, by Amy Langville and Carl Meyer.
Mildly mathematical. Uses some linear algebra in a graph theoretic context, eigenanalysis, Markov models, etc. I enjoyed the parts that talk about iterative methods for solving linear equations. I had no idea Google employed these iterative methods.
Short book, just 200 pages. Contains "asides" that diverge from the main flow of the text, plus historical perspective. Also points to other recent ranking systems.
Actually answering your question (and not just generally about search engines):
I believe doing it a bit like Instapaper does would be the best option.
The logic behind Instapaper (I didn't create it, so I certainly don't know its inner workings, but it's pretty easy to guess how it works):
Find the biggest bunch of text in text-like elements. Relying on paragraph tags, while very elegant, won't work with those crappy sites that use divs instead of p's. Basically, you need to find a good balance between block elements (divs, ps, etc.) and the amount of text. Come up with some threshold: if X number of words stays undivided by markup, that text belongs to the main body text. Then expand to siblings, keeping a text/markup threshold of some sort.
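That threshold step can be sketched with the stdlib parser: accumulate the text inside each open block element and keep the densest one. A real readability-style extractor would also score siblings, link density, and punctuation, and would handle nesting more carefully (here an enclosing div would absorb its children's text).

```python
from html.parser import HTMLParser

class DensestBlock(HTMLParser):
    """Tracks text accumulated per open block element (div/p/article/section)
    and remembers which one gathered the most text."""
    BLOCKS = {"div", "p", "article", "section"}

    def __init__(self):
        super().__init__()
        self.stack = []          # [tag, text accumulated so far]
        self.best = ("", 0)      # (text, length) of densest block seen

    def handle_starttag(self, tag, attrs):
        if tag in self.BLOCKS:
            self.stack.append([tag, ""])

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1][0] == tag:
            _, text = self.stack.pop()
            if len(text) > self.best[1]:
                self.best = (text.strip(), len(text))

    def handle_data(self, data):
        # text counts towards every block element currently open
        for frame in self.stack:
            frame[1] += data

html = ('<div>menu home about</div>'
        '<div>This is the actual article body with much more text '
        'than the navigation around it.</div>')
p = DensestBlock()
p.feed(html)
```

After `feed()`, `p.best[0]` holds the article text rather than the navigation, because the nav block accumulated far fewer characters.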
Once you do the most difficult part (finding which text belongs to the actual article) it becomes pretty easy. You can find the first image around that text and use it as your thumbnail. This way you will avoid ads, because they will not be that close to the body text markup-wise.
Finally, coming up with keywords is the fun part. You can do tons of things: order words by frequency, remove noise ("and", "or" and so on), and you have something nice. Mix that with the prominent short text element above the detected body text area (i.e. your article's heading), the page title, and the meta tags, and you have something pretty tasty.
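A sketch of that keyword step: count word frequencies and drop the noise words. The stopword list here is a tiny illustrative sample; a real system would use a full list per language.

```python
import re
from collections import Counter

STOPWORDS = {"and", "or", "the", "a", "an", "of", "to", "is", "in", "it"}

def keywords(text, n=5):
    """Order words by frequency and remove the noise words,
    as described above."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(n)]

kw = keywords("The iceberg sank the ship. The ship hit the iceberg and the iceberg won.")
```

Weighting words that also appear in the heading or title higher, as suggested above, is a one-line extension.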
All these ideas, if implemented properly, are very bullet-proof, because they do not rely on semantic markup; by making your detection code more thorough, you ensure that even very sloppily coded websites are detected properly.
Of course, it comes with downside of poor performance, but I guess it shouldn't be that poor.
Tip: for large-scale websites that people link to very often, you can manually set the HTML element that contains the body text (the one I described in point #1). This will ensure correctness and speed things up.
Hope this helps a bit.
I would consider these when building the code.
Also, check whether you can find anything useful in the Google Search API: http://code.google.com/intl/tr/apis/ajaxsearch/