Is there a way to prevent Googlebot from indexing certain parts of a page?

Submitted by 霸气de小男生 on 2019-12-23 07:27:31

Question


Is it possible to fine-tune directives to Google to such an extent that it will ignore part of a page, yet still index the rest?

There are a couple of different issues we've come across which would be helped by this, such as:

  • RSS feed/news ticker-type text on a page displaying content from an external source
  • users entering contact phone etc. details who want them visible on the site but would rather they not be google-able

I'm aware that both of the above can be addressed via other techniques (such as writing the content with JavaScript), but am wondering if anyone knows if there's a cleaner option already available from Google?

I've been doing some digging on this and came across mentions of googleon and googleoff tags, but these seem to be exclusive to Google Search Appliances.

Does anyone know if there's a similar set of tags to which Googlebot will adhere?

Edit: Just to clarify, I don't want to go down the dangerous route of cloaking/serving up different content to Google, which is why I'm looking to see if there's a "legit" way of achieving what I'd like to do here.


Answer 1:


What you're asking for can't really be done: Google either indexes the entire page or none of it.

You could resort to a workaround, though: put the part of the page you don't want indexed in an iframe, and use robots.txt to ask Google not to crawl the iframe's source URL.
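As a minimal sketch of that workaround (the /ticker.html path is a hypothetical example, and note that robots.txt blocks crawling rather than indexing as such):

```html
<!-- In the page: load the feed/ticker content from a separate URL -->
<iframe src="/ticker.html" title="News ticker"></iframe>
```

```
# In robots.txt at the site root: ask crawlers not to fetch that URL
User-agent: *
Disallow: /ticker.html
```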




Answer 2:


In short, no - unless you use cloaking, which is discouraged by Google.




Answer 3:


Please check out the official documentation here:

http://code.google.com/apis/searchappliance/documentation/46/admin_crawl/Preparing.html

See the section "Excluding Unwanted Text from the Index":

<!--googleoff: index-->
content here will be skipped
<!--googleon: index-->



Answer 4:


I found a useful resource on keeping certain duplicate content out of the search engine index:

<p>This is normal (X)HTML content that will be indexed by Google.</p>

<!--googleoff: index-->

<p>This (X)HTML content will NOT be indexed by Google.</p>

<!--googleon: index-->



Answer 5:


On your server, detect the search bot by IP using PHP or ASP. Then serve the IP addresses on that list the version of the page you wish to have indexed. In that search-engine-friendly version of your page, use the canonical link tag to point the search engine at the page version that you do not want indexed.

This way, the page containing the content you don't want indexed is known to the search engine by its address only, while only the content you wish to be indexed actually gets indexed. This method will not get you blocked by the search engines.
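As a minimal sketch of the bot-detection step (in Python rather than the PHP/ASP mentioned above, and matching on the User-Agent header instead of IP, since crawler IP ranges change; note that User-Agent strings can be spoofed, so production code should verify crawlers more robustly, e.g. via reverse DNS):

```python
# Known crawler tokens -- an illustrative, non-exhaustive list.
KNOWN_BOT_TOKENS = ("Googlebot", "Bingbot", "DuckDuckBot")

def is_search_bot(user_agent: str) -> bool:
    """Return True if the User-Agent string matches a known crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in KNOWN_BOT_TOKENS)
```

A request handler could then branch on `is_search_bot(request_user_agent)` to choose which version of the page to serve.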




Answer 6:


Yes, you can definitely stop Google from indexing parts of your website by creating a custom robots.txt file listing the portions you don't want indexed, such as wp-admin or a particular post or page. Before creating one, check whether your site already has a robots.txt, for example at www.yoursite.com/robots.txt.
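For example, a minimal robots.txt along these lines (the paths shown are typical WordPress examples, not required names):

```
User-agent: *
Disallow: /wp-admin/
Disallow: /some-private-post/
```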




Answer 7:


All search engines either index or ignore the entire page. The only possible way to implement what you want is to:

(a) have two different versions of the same page

(b) detect the visitor's browser (user agent)

(c) if it's a search engine, serve the second version of your page.

This link might prove helpful.




Answer 8:


There are meta tags for bots, and there's also robots.txt, with which you can restrict access to certain directories.
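For example, a robots meta tag in the page head (page-level only, not per-section), and a robots.txt rule for a directory:

```html
<!-- Keeps the whole page out of the index and stops link-following -->
<meta name="robots" content="noindex, nofollow">
```

```
# robots.txt: ask crawlers to skip a directory
User-agent: *
Disallow: /private/
```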



Source: https://stackoverflow.com/questions/1497445/is-there-a-way-to-prevent-googlebot-from-indexing-certain-parts-of-a-page
