Question
Say I have a site at http://example.com. I would really like to allow bots to see the home page, but every other page needs to be blocked, since it is pointless to spider them. In other words,
http://example.com and http://example.com/ should be allowed, but http://example.com/anything and http://example.com/someendpoint.aspx should be blocked.
Further, it would be great if I could allow certain query strings to pass through to the home page: http://example.com?okparam=true
but not http://example.com?anythingbutokparam=true
Answer 1:
So after some research, here is what I found: a solution accepted by the major search providers Google, Yahoo, and MSN (I could only find a validator here):
User-Agent: *
Disallow: /*
Allow: /?okparam=
Allow: /$
The trick is to use $ to mark the end of the URL.
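Spelled out against the URLs from the question (note that the * and $ wildcards are extensions honored by these engines, not part of the original robots.txt standard):
Allow: /$           matches http://example.com/ (and http://example.com) only
Allow: /?okparam=   matches http://example.com/?okparam=true
Disallow: /*        blocks everything else, e.g. http://example.com/someendpoint.aspx
                    and http://example.com/?anythingbutokparam=true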
Answer 2:
Google's Webmaster Tools reports that Disallow always takes precedence over Allow, so there is no easy way of doing this in a robots.txt file.
You could accomplish it by putting a noindex,nofollow META tag in the HTML of every page but the home page.
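The tag goes in the <head> of every page except the home page:
<meta name="robots" content="noindex, nofollow">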
Answer 3:
Basic robots.txt:
Disallow: /subdir/
I don't think you can write an expression that says 'everything but the root'; you have to list every subdirectory explicitly.
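Using the paths from the question as stand-ins, that listing would look like:
User-agent: *
Disallow: /anything
Disallow: /someendpoint.aspx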
The query-string restriction is also not possible from robots.txt. You would have to handle it in the background code (the processing part), or perhaps with server rewrite rules.
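As a rough sketch of the rewrite-rule route, assuming an Apache server with mod_rewrite and mod_headers enabled, one could send crawlers an X-Robots-Tag header on home-page requests whose query string is anything other than the permitted okparam:
RewriteEngine On
# Home-page requests with a query string that does not start with okparam=
RewriteCond %{QUERY_STRING} !^$
RewriteCond %{QUERY_STRING} !^okparam=
RewriteRule ^/?$ - [E=NOBOTS:1]
# Ask crawlers not to index (or follow links from) those responses
Header set X-Robots-Tag "noindex, nofollow" env=NOBOTS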
Answer 4:
Disallow: /
Allow: /index.ext
If I remember correctly, the second clause should override the first.
Answer 5:
As far as I know, not all crawlers support the Allow directive. One possible solution might be to put everything except the home page into another folder and disallow that folder.
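A minimal sketch of that layout, assuming all non-home pages are moved under a hypothetical /pages/ folder:
User-agent: *
Disallow: /pages/
This relies only on Disallow, which every crawler understands, though it still cannot express the query-string rule from the question.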
Source: https://stackoverflow.com/questions/43427/how-to-set-up-a-robot-txt-which-only-allows-the-default-page-of-a-site