How to crawl with PHP Goutte and Guzzle if data is loaded by JavaScript?

Submitted by 南楼画角 on 2019-12-19 02:43:04

Question


Many times when crawling we run into problems where content rendered on the page is generated with JavaScript, so the scraper is unable to crawl it (e.g. AJAX requests, jQuery).


Answer 1:


You want to have a look at PhantomJS. There is this PHP implementation:

http://jonnnnyw.github.io/php-phantomjs/

if you need to have it working with PHP, of course.

You could read the page with PhantomJS and then feed the contents to the crawler, in order to use the nice functions it gives you (like searching for content, etc.). That would depend on your needs; maybe you can simply use the DOM, like this:

How to get element by class name?
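If you go the plain-DOM route, PHP's built-in DOM extension can already select elements by class name via XPath, with no extra packages. A minimal sketch (the HTML string and class names are made up for illustration):

```php
<?php
// Select elements by class name from an HTML string using PHP's
// built-in DOMDocument and DOMXPath (part of the standard DOM extension).
$html = '<div class="headline">Hello</div><div class="body">World</div>';

$doc = new DOMDocument();
// Suppress warnings caused by imperfect real-world markup.
@$doc->loadHTML($html);

$xpath = new DOMXPath($doc);
// XPath equivalent of "get element by class name": matches a whole
// space-separated class token, not just a substring.
$nodes = $xpath->query(
    '//*[contains(concat(" ", normalize-space(@class), " "), " headline ")]'
);

foreach ($nodes as $node) {
    echo $node->textContent, "\n"; // prints "Hello"
}
```

The `concat`/`normalize-space` trick is the usual way to emulate a CSS class selector in XPath 1.0, since a plain `contains(@class, "headline")` would also match classes like `headline-small`.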

Here is some working code.

  $content = $this->getHeadlessResponse($url);
  $this->crawler->addContent($content);

  /**
   * Get response using a headless browser (PhantomJS in this case).
   *
   * @param string $url
   *   URL to fetch headlessly.
   *
   * @return string
   *   Response body, or an empty string on failure.
   */
  public function getHeadlessResponse($url) {
      // Fetch with PhantomJS
      $phantomClient = PhantomClient::getInstance();
      // and feed into the crawler.
      $request = $phantomClient->getMessageFactory()->createRequest($url, 'GET');

      /**
       * @see JonnyW\PhantomJs\Http\Response
       **/
      $response = $phantomClient->getMessageFactory()->createResponse();

      // Send the request.
      $phantomClient->send($request, $response);

      if ($response->getStatus() === 200) {
          // Return the requested page content.
          return $response->getContent();
      }

      return '';
  }

The only disadvantage of using PhantomJS is that it will be slower than Guzzle, but of course you have to wait for all that pesky JS to be loaded.




Answer 2:


Guzzle (which Goutte uses internally) is an HTTP client. As a result, JavaScript content will not be parsed or executed, and JavaScript files referenced by the requested page will not be downloaded.

Depending upon your environment, I suppose it would be possible to utilize v8js (a PHP extension that embeds the Google V8 JavaScript engine) and a custom handler / middleware to perform what you want.

Then again, depending on your environment, it might be easier to simply perform the scraping with a JavaScript client.
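The v8js idea can be sketched as follows. This assumes the php-v8js extension is installed (it usually is not, by default), and it only demonstrates executing JavaScript from PHP and getting a value back; turning that into full page rendering would still require wiring up a DOM, which v8js does not provide:

```php
<?php
// Hedged sketch: run a snippet of javascript from PHP via the V8 engine,
// using the php-v8js extension (class V8Js). Guarded so the script still
// runs on systems without the extension.
if (!class_exists('V8Js')) {
    echo "php-v8js not installed\n";
} else {
    $v8 = new V8Js();
    // executeString() evaluates the script and returns the value of the
    // last expression back to PHP.
    $result = $v8->executeString('JSON.stringify({answer: 6 * 7});');
    echo $result, "\n";
}
```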




Answer 3:


I would recommend getting the response content, parsing it (if you have to) into new HTML, and using that as $html when initializing a new Crawler object. After that you can use all the data in the response like with any other Crawler object.

$crawler = $client->submit($form);
$html = $client->getResponse()->getContent();
$newCrawler = new Crawler($html);



Answer 4:


Since it is impossible to work with JavaScript here, I can suggest another solution:

Google Chrome > right-click > Inspect Element > right-click > Edit as HTML > copy > work with the copied HTML

        $html = $the_copied_html;
        $crawler = new Crawler($html);

        $data = $crawler->filter('.your-selector')->each(function (Crawler $node, $i) {
            return [
                'text' => $node->text()
            ];
        });

        // Do whatever you want with the $data.
        return $data; // type Array

This will only work for one-off jobs, not automated processes. In my case it will do.



Source: https://stackoverflow.com/questions/36673638/how-to-crawl-with-php-goutte-and-guzzle-if-data-is-loaded-by-javascript
