How do search engines deal with AngularJS applications?

Asked by 予麋鹿 on 2020-11-22 06:35

I see two issues with AngularJS applications regarding search engines and SEO:

1) What happens with custom tags? Do search engines ignore the whole content within those tags?

15 answers
  • 2020-11-22 07:21

    This has drastically changed.

    http://searchengineland.com/bing-offers-recommendations-for-seo-friendly-ajax-suggests-html5-pushstate-152946

    If you use `$locationProvider.html5Mode(true);` you are set.

    No more pre-rendering pages.
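
    As a minimal sketch (the module name `myApp` is an assumption for illustration), enabling HTML5 mode in an AngularJS app looks like this:

    ```javascript
    // Sketch: enable HTML5 pushState URLs in an AngularJS app so the site
    // serves clean URLs like /products/42 instead of /#!/products/42.
    angular.module('myApp', [])
      .config(['$locationProvider', function ($locationProvider) {
        $locationProvider.html5Mode(true);
      }]);
    ```

    Note that HTML5 mode also requires a `<base href="/">` tag in your page and a server configured to rewrite deep links back to `index.html`, otherwise direct navigation to those URLs returns 404.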

  • 2020-11-22 07:22

    Crawlers do not need a rich, prettily styled GUI; they only want to see the content, so you do not need to give them a snapshot of a page that was built for humans.

    My solution: give the crawler what the crawler wants.

    Think about what the crawler wants, and serve it only that.

    TIP: don't mess with the back end. Just add a small server-side front view using the same API.
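
    A minimal sketch of the first step of this approach: detecting on the server whether a request comes from a crawler, so it can be answered with a plain server-rendered view built on the same API the Angular app uses. The bot pattern list here is an illustrative assumption, not an exhaustive one.

    ```javascript
    // Sketch: user-agent based crawler detection for routing bots to a
    // simple server-side front view (pattern list is a minimal assumption).
    const BOT_PATTERNS = [/googlebot/i, /bingbot/i, /yandex/i, /baiduspider/i];

    function isCrawler(userAgent) {
      // Treat a missing user agent as a regular browser.
      return BOT_PATTERNS.some((re) => re.test(userAgent || ''));
    }
    ```

    In an Express-style handler you would then branch on `isCrawler(req.headers['user-agent'])` and render either the static view or the Angular shell.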

  • 2020-11-22 07:27

    You should really check out the tutorial on building an SEO-friendly AngularJS site on the Year of Moo blog. It walks you through all the steps outlined in Angular's documentation. http://www.yearofmoo.com/2012/11/angularjs-and-seo.html

    Using this technique, the search engine sees the expanded HTML instead of the custom tags.

  • 2020-11-22 07:30

    Google has since changed its AJAX crawling proposal:

    "Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers."

    tl;dr: [Google] are no longer recommending the AJAX crawling proposal [Google] made back in 2009.

  • 2020-11-22 07:30

    Crawlers (or bots) are designed to crawl the HTML content of web pages, but because AJAX fetches data asynchronously, it takes some time to render a page and show its dynamic content. AngularJS uses the same asynchronous model, which creates a problem for Google's crawlers.

    Some developers create basic HTML pages with the real data and serve these pages from the server side at crawl time. We can render the same pages with PhantomJS on the server side for requests that carry `_escaped_fragment_` (Google looks for `#!` in your site URLs, takes everything after the `#!`, and adds it to the `_escaped_fragment_` query parameter). For more detail please read this blog.
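
    The URL rewriting described above can be sketched as a small helper (the function name `toEscapedFragment` is an assumption for illustration): this is the mapping the crawler applies before requesting the page, and the form your server matches to decide when to return a PhantomJS-rendered snapshot.

    ```javascript
    // Sketch of the old AJAX crawling scheme's URL mapping: the crawler
    // rewrites a hashbang URL into an _escaped_fragment_ URL, which the
    // server answers with a pre-rendered snapshot.
    function toEscapedFragment(url) {
      const i = url.indexOf('#!');
      if (i === -1) return url; // not a hashbang URL, nothing to rewrite
      const base = url.slice(0, i);
      const fragment = url.slice(i + 2);
      // Append with '&' if the base URL already has a query string.
      const sep = base.includes('?') ? '&' : '?';
      return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
    }
    ```

    For example, `http://example.com/#!/products/42` becomes `http://example.com/?_escaped_fragment_=%2Fproducts%2F42`.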

  • 2020-11-22 07:31

    Update May 2014

    Google's crawlers now execute JavaScript; you can use Google Webmaster Tools to better understand how your sites are rendered by Google.

    Original answer

    If you want to optimize your app for search engines, there is unfortunately no way around serving a pre-rendered version to the crawler. You can read more about Google's recommendations for AJAX- and JavaScript-heavy sites here.

    If this is an option I'd recommend reading this article about how to do SEO for Angular with server-side rendering.

    I’m not sure what the crawler does when it encounters custom tags.
