I'm building a single-page app using AngularJS. My question is: how do I make the application crawlable, given that routing is handled by ng-view on the client side and the server just returns a single static page?
Since October 2015 you don't need to do anything to make your application crawlable (I assume you're referring to Google crawling), because Googlebot now renders JavaScript itself.
Check this article:
https://webmasters.googleblog.com/2015/10/deprecating-our-ajax-crawling-scheme.html
The only working solution I know of is the one the core AngularJS team uses for its documentation website: serve pre-rendered HTML snapshots to crawlers that request a URL containing _escaped_fragment_ in the query string. This was mentioned by the core developers in the AngularJS Google group. [1] [2] [3]
Judging from the rest of the threads there, they appear to use PhantomJS and Node.js to generate those snapshot pages; a rough sketch of the server side is included after the links.
[1] https://groups.google.com/d/msg/angular/yClOeqR5DGc/4YXGx9z8EpAJ
[2] https://groups.google.com/d/msg/angular/EGwg49uAmMI/j-kj9nytT-IJ
[3] https://groups.google.com/d/msg/angular/EGwg49uAmMI/j-kj9nytT-IJ
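For illustration only, here is a minimal sketch of the server side of that scheme, assuming an Express app and snapshots that have already been pre-rendered (for example with PhantomJS) into a snapshots/ directory. The directory layout and the fragment-to-filename mapping are assumptions, not something described in the linked threads.

    // Sketch: serve a pre-rendered snapshot when Google's (now deprecated)
    // AJAX crawling scheme requests ?_escaped_fragment_=<path>.
    var express = require('express');
    var path = require('path');
    var app = express();

    app.use(function (req, res, next) {
        var fragment = req.query._escaped_fragment_;
        if (fragment === undefined) {
            return next(); // normal visitors get the regular AngularJS app
        }
        // Hypothetical mapping: '/about' -> snapshots/about.html, '' -> snapshots/index.html
        var name = fragment.replace(/^\//, '').replace(/\//g, '_') || 'index';
        res.sendFile(path.join(__dirname, 'snapshots', name + '.html'));
    });

    app.use(express.static(path.join(__dirname, 'public')));
    app.listen(3000);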
I came across this service that might be worth checking out. It runs a PhantomJS server and does all the legwork for you.
I implemented crawling on my site using all of the points above together with the links below:
https://developers.google.com/webmasters/ajax-crawling/
http://www.yearofmoo.com/2012/11/angularjs-and-seo.html
http://india-elections.in
I created the static HTML snapshot templates using PhantomJS.
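As a rough illustration of that snapshot step, here is a minimal PhantomJS script. It is not taken from the answer above: the file name snapshot.js, the command-line arguments, and the fixed 2-second render delay are all assumptions.

    // snapshot.js -- run as: phantomjs snapshot.js http://localhost:3000/#!/about snapshots/about.html
    var page = require('webpage').create();
    var system = require('system');
    var fs = require('fs');

    var url = system.args[1];
    var out = system.args[2];

    page.open(url, function (status) {
        if (status !== 'success') {
            console.log('Failed to load ' + url);
            phantom.exit(1);
        }
        // Give AngularJS some time to finish rendering before capturing the DOM.
        window.setTimeout(function () {
            fs.write(out, page.content, 'w');
            phantom.exit(0);
        }, 2000);
    });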
Making a single-page app crawlable yet interactive is not a straightforward task. You have to think about access points from a UX perspective that allow the back button and deep linking to work. When the back button is pressed, for instance, the object state it points at needs to be recreated on the server, without user interaction, producing the same markup the client would have produced when navigating to that access point. PhantomJS can be used for this task, or client/server-agnostic JavaScript can be run on both ends, or, as in the good old PHP days, the logic to replicate the state of each access point can be rewritten for the server. @Ajay Beniwal has listed some links above on how to create the HTML snapshots.
Assume you have a web server that can emit bootstrapping markup for a given object state. The state is supplied via a state identifier, and that identifier needs to be the URL to make your code crawlable. Libraries like AngularJS and Backbone.js supply mechanisms such as Backbone.Router, which use either link fragments or the HTML5 pushState() method to store the state identifier on the client. The beauty of HTML5 is that a refresh makes a direct request to the server for the right object state, without having to load an initial page that parses the supplied hash and redirects to the proper object-state URL. Although there is no other option for old browsers, architecting your application around the HTML5 paradigm will make it easy for crawlers, and most implementations of HTML5 pushState, such as Backbone.Router, degrade gracefully to hash-based state tracking so that older browsers still get a working back button.
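In AngularJS terms, this boils down to enabling HTML5 mode in $locationProvider so that every state has a real, crawlable URL the server can also answer directly. The module name, route, controller, and template below are hypothetical examples, not something from the answer above.

    // Sketch: real URLs via HTML5 pushState, falling back to #! URLs on old browsers.
    // Requires a <base href="/"> tag in index.html and a server that returns the
    // app shell (or a snapshot) for every application route.
    angular.module('myApp', ['ngRoute'])
        .config(['$locationProvider', '$routeProvider',
            function ($locationProvider, $routeProvider) {
                $locationProvider.html5Mode(true).hashPrefix('!');
                $routeProvider
                    .when('/items/:id', {
                        templateUrl: 'partials/item.html',
                        controller: 'ItemCtrl'
                    })
                    .otherwise({ redirectTo: '/items/1' });
            }]);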