I use jQuery to retrieve content from the database via a JSON request. It then replaces a wildcard in the HTML (like %title%) with the actual content. This works great, and this way I can maintain my multi-language texts in a database, but Googlebot only sees the wildcards, not the actual content. I know Googlebot sees pages without JavaScript, but is there a way to deal with this? Thanks!
You should give this document from Google a thorough read.
It discusses how to enable Googlebot to index:

- pages where content changes depending on changing #hashfragment values in the URL;
- pages where content changes immediately upon load but lack any special #hashfragment per se.
In short, you're looking at adding the <meta name="fragment" content="!"> tag as discussed in "step 3", and responding to special requests on the server side by delivering, all at once, the content that your client code would otherwise have generated after page load. These special requests have ?_escaped_fragment_=... in the URL, indicating to the server that it should pre-bake (my words) the final presentation into a single response for Googlebot.
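To illustrate the URL rewriting involved (a sketch, not part of any library; the function name is mine), this is how a hash-bang URL maps to the _escaped_fragment_ form that Googlebot requests from your server:

```javascript
// Map a #! ("hash-bang") URL to the _escaped_fragment_ URL Googlebot fetches.
// Hypothetical helper for illustration only.
function toEscapedFragmentUrl(url) {
  const i = url.indexOf('#!');
  if (i === -1) return url; // no hash-bang fragment, nothing to rewrite
  const base = url.slice(0, i);
  const fragment = url.slice(i + 2);
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
}

console.log(toEscapedFragmentUrl('http://example.com/page#!key=value'));
// → http://example.com/page?_escaped_fragment_=key%3Dvalue
```

Your server then detects the _escaped_fragment_ parameter and returns the fully rendered HTML for that state.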
That said, since you'd be going to the effort of outputting filled-in content for this special case, you may be better off doing that in the general case (avoiding the need to deal with Google's _escaped_fragment_ requests), perhaps while keeping a way to swap out your markers after page load if necessary (e.g. through spans with a certain class or id for identifying them).
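A minimal sketch of that span-based swap. It's modeled on plain objects rather than real DOM nodes so it runs anywhere; with jQuery this would be something like $('span.i18n').each(...). The span structure and the texts map are assumptions:

```javascript
// Server ships final content inside identifiable spans; the client can still
// swap the text after load, e.g. when the user switches language.
function swapText(spans, texts) {
  spans.forEach(function (span) {
    if (span.id in texts) {
      span.text = texts[span.id]; // with jQuery: $(span).text(texts[span.id])
    }
  });
  return spans;
}

var spans = [
  { id: 'title', text: 'Welcome' },  // pre-filled by the server
  { id: 'footer', text: 'Bye' }
];
swapText(spans, { title: 'Welkom' });
// spans[0].text is now 'Welkom'; spans[1] is untouched
```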
Google appears to have a near-fully or fully functional JavaScript-crawling bot at the time of this answer:
In 2009 Google proposed a solution for making AJAX crawlable: https://webmasters.googleblog.com/2009/10/proposal-for-making-ajax-crawlable.html
In 2015 Google deprecated the above approach: https://webmasters.googleblog.com/2015/10/deprecating-our-ajax-crawling-scheme.html
I have successfully built multiple single page applications that are correctly rendered in Google's Webmaster tools.
There are lots of resources on the web if you want to dive deeper.
Googlebot obviously doesn't render the page that it downloads, and other search bots will likely behave the same way.
You need to use a server-side scripting or compilation solution (there are plenty to choose from, including PHP, ASP.NET, etc.). This way you still keep your dynamic and i18n features, and Googlebot sees your page the way you intended. Or at least do this for fundamental page attributes like the title, which you know Googlebot is evaluating, and keep the jQuery updating for the not-so-important parts of the page.
(To be honest though, using jQuery to replace tokens after the page has downloaded is probably not the most efficient way to do things, especially when server side scripting is so easy and free).
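For instance, a sketch in Node.js (one of many server-side options; the titles object is a hypothetical stand-in for the question's database of multi-language texts) that fills the %title% wildcard before the response is sent, so crawlers see the final text without running any JavaScript:

```javascript
// Hypothetical stand-in for the database of multi-language texts.
const titles = { en: 'Welcome', nl: 'Welkom' };

// Fill the %title% wildcard on the server, falling back to English.
function renderPage(template, lang) {
  return template.replace(/%title%/g, titles[lang] || titles.en);
}

const template = '<html><head><title>%title%</title></head>' +
  '<body><h1>%title%</h1></body></html>';
console.log(renderPage(template, 'nl'));
// → <html><head><title>Welkom</title></head><body><h1>Welkom</h1></body></html>
```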
Source: https://stackoverflow.com/questions/6248651/googlebot-doesnt-see-jquery-generated-content