Is it possible to write a web crawler in JavaScript?

深忆病人 2021-02-01 07:48

I want to crawl a page, check for the hyperlinks on that page, follow those hyperlinks, and capture data from the resulting pages.

11 Answers
  • 2021-02-01 08:35

    Google's Chrome team released Puppeteer in August 2017, a Node library which provides a high-level API for both headless and non-headless Chrome (headless Chrome has been available since version 59).

    It uses an embedded version of Chromium, so it is guaranteed to work out of the box. If you want to use a specific Chrome version, you can do so by launching Puppeteer with an executable path as a parameter, such as:

    const browser = await puppeteer.launch({executablePath: '/path/to/Chrome'});
    

    An example of navigating to a webpage and taking a screenshot of it shows how simple it is (taken from the GitHub page):

    const puppeteer = require('puppeteer');
    
    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto('https://example.com');
      await page.screenshot({path: 'example.png'});
    
      await browser.close();
    })();
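
    The same example can be extended to the original question. Here is a sketch of collecting every hyperlink on a page with Puppeteer; the link-extraction part is an addition, not from the GitHub page:

    const puppeteer = require('puppeteer');
    
    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto('https://example.com');
    
      // Run in the page's context and return the href of every anchor.
      const links = await page.$$eval('a[href]', anchors =>
        anchors.map(a => a.href)
      );
      console.log(links); // these URLs could then be queued and visited in turn
    
      await browser.close();
    })();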
    
  • 2021-02-01 08:38

    There is a client-side approach for this, using the Firefox Greasemonkey extension. With Greasemonkey you can create scripts that are executed each time you open specified URLs.

    Here is an example:

    if you have urls like these:

    http://www.example.com/products/pages/1

    http://www.example.com/products/pages/2

    Then you can use something like this to open all the pages containing the product list (execute it manually):

    var j = 0;
    for (var i = 1; i < 5; i++) {
      // Stagger the tabs 15 seconds apart so the pages load one at a time.
      setTimeout(function () {
        j = j + 1;
        window.open('http://www.example.com/products/pages/' + j, '_blank');
      }, 15000 * i);
    }

    Then you can create a script that opens all the products in a new window for each product-list page, and register this URL pattern with Greasemonkey for it:

    http://www.example.com/products/pages/*

    And then write a script for each product page to extract the data, call a web service with it, close the window, and so on.
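
    A rough sketch of such a product-page userscript (the '.product-name' selector and the web-service URL below are made-up placeholders, not part of any real API):

    // ==UserScript==
    // @name     Extract product data (sketch)
    // @include  http://www.example.com/products/*
    // @grant    none
    // ==/UserScript==
    
    // '.product-name' is a placeholder selector for the data you want.
    var product = document.querySelector('.product-name');
    if (product) {
      var xhr = new XMLHttpRequest();
      // POST the scraped data to a hypothetical collecting web service.
      xhr.open('POST', 'http://www.example.com/api/collect');
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.onload = function () {
        window.close(); // close the window once the data has been sent
      };
      xhr.send(JSON.stringify({ url: location.href, name: product.textContent.trim() }));
    }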

  • 2021-02-01 08:39

    Generally, browser JavaScript can only crawl within the domain of its origin, because fetching pages would be done via Ajax, which is restricted by the Same-Origin Policy.

    If the page running the crawler script is on www.example.com, then that script can crawl all the pages on www.example.com, but not the pages of any other origin (unless some edge case applies, e.g., the Access-Control-Allow-Origin header is set for pages on the other server).
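
    As an illustration of the same-origin case, here is a minimal crawler sketch that could run from the browser console: it fetches pages, parses them, and only follows links that share the page's origin (capped at a few pages so it terminates).

    // Minimal same-origin crawler sketch (breadth-first).
    async function crawl(startUrl, maxPages = 20) {
      const queue = [startUrl];
      const visited = new Set();
    
      while (queue.length > 0 && visited.size < maxPages) {
        const url = queue.shift();
        if (visited.has(url)) continue;
        visited.add(url);
    
        const response = await fetch(url); // cross-origin fetches are blocked by SOP
        const html = await response.text();
        const doc = new DOMParser().parseFromString(html, 'text/html');
    
        for (const a of doc.querySelectorAll('a[href]')) {
          const link = new URL(a.getAttribute('href'), url);
          link.hash = ''; // ignore in-page anchors
          if (link.origin === location.origin) {
            queue.push(link.href);
          }
        }
      }
      return [...visited];
    }
    
    crawl(location.href).then(urls => console.log(urls));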

    If you really want to write a fully-featured crawler in browser JS, you could write a browser extension: for example, Chrome extensions are packaged Web applications that run with special permissions, including cross-origin Ajax. The difficulty with this approach is that you'll have to write multiple versions of the crawler if you want to support multiple browsers. (If the crawler is just for personal use, that's probably not an issue.)

  • 2021-02-01 08:41

    We can crawl pages using server-side JavaScript with the help of a headless WebKit browser. For crawling, there are a few libraries such as PhantomJS and CasperJS; there is also a newer wrapper over PhantomJS called Nightmare JS which makes the work easier.
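
    For example, a small PhantomJS script (run with "phantomjs script.js") that loads a page and prints its hyperlinks could look roughly like this:

    // PhantomJS sketch: open a page and list every hyperlink on it.
    var page = require('webpage').create();
    
    page.open('https://example.com', function (status) {
      if (status !== 'success') {
        console.log('Failed to load the page');
        phantom.exit(1);
      }
      // page.evaluate runs inside the page, so only plain data can be returned.
      var links = page.evaluate(function () {
        return Array.prototype.map.call(
          document.querySelectorAll('a[href]'),
          function (a) { return a.href; }
        );
      });
      console.log(links.join('\n'));
      phantom.exit();
    });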

  • 2021-02-01 08:45

    My typical setup is to use a browser extension with cross-origin privileges granted, which injects both the crawler code and jQuery.
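
    For instance, once jQuery is injected, collecting the links on the current page is a one-liner (the injection itself is handled by the extension):

    // With jQuery available, gather the href of every link on the page.
    var links = $('a[href]').map(function () { return this.href; }).get();
    console.log(links);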

    Another take on JavaScript crawlers is to use a headless browser like PhantomJS or CasperJS (which boosts PhantomJS's powers).
