How to set up a robots.txt which only allows the default page of a site

Submitted by 家住魔仙堡 on 2019-12-03 08:35:24

Question


Say I have a site at http://example.com. I would really like to allow bots to see the home page, but any other page needs to be blocked, as it is pointless to spider them. In other words:

http://example.com and http://example.com/ should be allowed, but http://example.com/anything and http://example.com/someendpoint.aspx should be blocked.

Further, it would be great if certain query strings could pass through to the home page: http://example.com?okparam=true

but not http://example.com?anythingbutokparam=true


Answer 1:


So after some research, here is what I found: a solution accepted by the major search providers, Google, Yahoo, and MSN (I could only find a validator here):

User-Agent: *
Disallow: /*
Allow: /?okparam=
Allow: /$

The trick is using the $ to mark the end of the URL.
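As a rough illustration of how a Google-style matcher would evaluate these rules, here is a minimal Python sketch. It is not part of the original answer; the longest-pattern-wins precedence and the allow-wins-ties tie-break are assumptions based on Google's documented behavior, and the rule list simply transcribes the robots.txt above.

```python
import re

def rule_to_regex(pattern):
    # Translate a robots.txt path pattern into a regex:
    # '*' matches any character sequence, a trailing '$' anchors the end.
    anchored = pattern.endswith("$")
    body = pattern[:-1] if anchored else pattern
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    return re.compile("^" + regex + ("$" if anchored else ""))

# The rules from the answer above, in order of appearance.
RULES = [
    ("disallow", "/*"),
    ("allow", "/?okparam="),
    ("allow", "/$"),
]

def is_allowed(path):
    # Assumed Google-style evaluation: the most specific (longest)
    # matching pattern wins; Allow beats Disallow on equal length.
    best = None  # (pattern length, rule type)
    for rule_type, pattern in RULES:
        if rule_to_regex(pattern).match(path):
            length = len(pattern)
            if (best is None or length > best[0]
                    or (length == best[0] and rule_type == "allow")):
                best = (length, rule_type)
    return best is None or best[1] == "allow"

print(is_allowed("/"))               # home page
print(is_allowed("/anything"))       # any other path
print(is_allowed("/?okparam=true"))  # whitelisted query string
```

Note how the `Allow: /$` line ties with `Disallow: /*` on the bare `/` path, and the tie-break in favor of Allow is what lets the home page through.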




Answer 2:


Google's Webmaster Tools reports that Disallow always takes precedence over Allow, so there's no easy way of doing this in a robots.txt file.

You could accomplish this by putting a noindex,nofollow META tag (i.e. <meta name="robots" content="noindex, nofollow">) in the HTML of every page except the home page.




Answer 3:


Basic robots.txt:

Disallow: /subdir/

I don't think you can create an expression saying 'everything but the root'; you have to list all the subdirectories.

The query-string limitation is also not possible from robots.txt. You have to do it in the back-end code (the processing part), or perhaps with server rewrite rules.
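As a hypothetical sketch of the rewrite-rule approach (not from the original answer, and assuming Apache 2.4 with mod_headers enabled), you could mark every URL except the home page as non-indexable via a response header instead of robots.txt:

```apache
# Hypothetical Apache 2.4 fragment (requires mod_headers):
# tell crawlers not to index or follow anything but the home page.
<If "%{REQUEST_URI} != '/'">
    Header set X-Robots-Tag "noindex, nofollow"
</If>
```

Unlike robots.txt, an X-Robots-Tag header also applies to non-HTML resources, and it can key off the full request, including the query string.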




Answer 4:


Disallow: *
Allow: index.ext

If I remember correctly, the second clause should override the first.




Answer 5:


As far as I know, not all crawlers support the Allow directive. One possible solution might be to put everything except the home page into another folder and disallow that folder.



Source: https://stackoverflow.com/questions/43427/how-to-set-up-a-robot-txt-which-only-allows-the-default-page-of-a-site
