How can Perl's WWW::Mechanize expand HTML pages that add to themselves with JavaScript?

Asked 2020-12-06 08:24 by 无人共我 · 6 answers

As mentioned in a previous question, I'm coding a crawler for the QuakeLive website.
I've been using WWW::Mechanize to get the web content, and this worked fine for a…

6 answers
  • 2020-12-06 09:05

    You should be able to use WWW::HtmlUnit — it loads and executes JavaScript.
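
    A minimal sketch of that approach, based on the module's documented synopsis (the QuakeLive URL is just an example target; WWW::HtmlUnit needs a working Java/Inline::Java setup):

    ```perl
    use strict;
    use warnings;
    use WWW::HtmlUnit;   # wraps the Java HtmlUnit headless browser

    # Create a headless browser instance; it runs the page's JavaScript.
    my $webclient = WWW::HtmlUnit->new;
    my $page      = $webclient->getPage('http://www.quakelive.com/');

    # asXml returns the DOM *after* scripts have executed,
    # so JavaScript-added elements are present.
    print $page->asXml;
    ```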

  • 2020-12-06 09:21

    It looks like they are using AJAX; I can see where the requests are being sent using FireBug. You may need to either replicate those requests directly, or parse and execute the JavaScript that affects the DOM.
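
    Once FireBug's Net panel shows you the request, it is often simplest to call that AJAX endpoint directly and parse its response. A sketch, assuming a JSON endpoint — the path below is hypothetical, so substitute the URL you actually observe:

    ```perl
    use strict;
    use warnings;
    use WWW::Mechanize;
    use JSON qw(decode_json);

    my $mech = WWW::Mechanize->new;

    # Hypothetical endpoint -- replace with the request FireBug shows.
    $mech->get('http://www.quakelive.com/some/ajax/endpoint');

    # Decode the JSON payload into a Perl data structure.
    my $data = decode_json( $mech->content );
    ```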

  • 2020-12-06 09:22

    Read the FAQ: WWW::Mechanize doesn't do JavaScript. They're probably using JavaScript to change the page, so you'll need a different approach.

  • 2020-12-06 09:24

    To get at the DOM containing those IDs you'll probably have to execute the javascript code on that site. I'm not aware of any libraries that'd allow you to do that, and then introspect the resulting DOM within perl, so just controlling an actual browser and later asking it for the DOM, or only parts of it, seems like a good way to go about this.

    Various browsers provide ways to be controlled programmatically. With a Mozilla-based browser, such as Firefox, this could be as easy as loading mozrepl into the browser, opening a socket from Perl space, sending a few lines of JavaScript code over to actually load that page, and then some more JavaScript code to give you back the parts of the DOM you're interested in. The result of that you could then parse with one of the many JSON modules on CPAN.

    Alternatively, you could work through the javascript code executed on your page and figure out what it actually does, to then mimic that in your crawler.
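
    The mozrepl route described above is packaged up by WWW::Mechanize::Firefox on CPAN. A sketch, assuming Firefox is running with the mozrepl extension installed and listening:

    ```perl
    use strict;
    use warnings;
    use WWW::Mechanize::Firefox;   # drives a live Firefox via mozrepl

    my $mech = WWW::Mechanize::Firefox->new;

    # Firefox itself loads the page and runs its JavaScript.
    $mech->get('http://www.quakelive.com/');

    # content() returns the DOM as Firefox currently sees it,
    # including elements added by scripts.
    print $mech->content;
    ```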

  • 2020-12-06 09:26

    The problem is that Mechanize mimics the networking layer of the browser, but not the rendering or JavaScript execution layer.

    Many folks use the web browser control provided by Microsoft. This is a full instance of IE in a control that you can host in a WinForm, WPF or plain old Console app. It allows you to, among other things, load the web page and run javascript as well as send and receive javascript commands.

    Here's a reasonable intro into how to host a browser control: http://www.switchonthecode.com/tutorials/csharp-snippet-tutorial-the-web-browser-control

  • 2020-12-06 09:28

    A ton of data is sent over AJAX requests. You need to account for that in your crawler somehow.
