How to scrape websites when cURL and allow_url_fopen are disabled

Asked by 借酒劲吻你, 2020-12-06 22:21

I know the question about PHP web page scrapers has been asked time and time again, and from those threads I discovered SimpleHTMLDOM. It worked seamlessly on my local server, but it fails on my web host, where both cURL and allow_url_fopen are disabled. How can I still fetch and scrape remote pages?

4 Answers
  • 2020-12-06 22:54

    Here's a simple way to grab images when allow_url_fopen is disabled, without studying up on esoteric tools.

    Create a web page on your dev environment that loads all the images you're scraping. You can then use your browser to save them via File -> "Save Page As".

    This is handy if you need a one time solution for downloading a bunch of images from a remote server that has allow_url_fopen set to 0.

    This worked for me after file_get_contents and curl failed.

  • 2020-12-06 22:57

    If cURL and allow_url_fopen are not enabled, you can try to fetch the content via

    • fsockopen — Open Internet or Unix domain socket connection

    In other words, you have to make the HTTP request manually. See the example in the manual for how to do a GET request; a rough sketch follows below. The returned content can then be further processed. If sockets are enabled, you can also use any third-party library utilizing them, for instance Zend_Http_Client.
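    As a minimal sketch of such a manual GET request, loosely following the fsockopen() example in the PHP manual (the host, path, and timeout below are placeholders):

        <?php
        // Open a plain TCP connection to the web server on port 80.
        $host = 'www.example.com';
        $fp = fsockopen($host, 80, $errno, $errstr, 30);
        if (!$fp) {
            die("Connection failed: $errstr ($errno)");
        }

        // Write the raw HTTP request by hand. HTTP/1.0 keeps the sketch
        // simple because the server won't use chunked transfer encoding.
        $request  = "GET /some/page.html HTTP/1.0\r\n";
        $request .= "Host: $host\r\n";
        $request .= "Connection: Close\r\n\r\n";
        fwrite($fp, $request);

        // Read the full response (status line, headers and body).
        $response = '';
        while (!feof($fp)) {
            $response .= fgets($fp, 1024);
        }
        fclose($fp);

        // Split off the headers; $html is what you would hand to your parser.
        list($headers, $html) = explode("\r\n\r\n", $response, 2);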

    On a side note, check out Best Methods to Parse HTML for alternatives to SimpleHTMLDOM.

  • 2020-12-06 23:08

    file_get_contents() is the simplest way to grab a page without installing extra libraries, but note that it only works on remote URLs when allow_url_fopen is enabled.
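    For example (the URL is a placeholder):

        <?php
        // Requires allow_url_fopen = On, so it won't help on the host
        // described in the question; shown only for completeness.
        $html = file_get_contents('http://www.example.com/');
        if ($html === false) {
            die('Could not fetch the page');
        }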

  • 2020-12-06 23:10

    cURL is a specialty API. It's not the HTTP library it's often made out to be, but a generic data transfer library for FTP, SFTP, SCP, HTTP PUT, SMTP, TELNET, etc. If you only want HTTP, there is a corresponding PEAR library for that. Or check whether your PHP version has the official http extension enabled. For scraping, try phpQuery or QueryPath; both come with built-in HTTP support. A rough phpQuery sketch follows below.
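    This sketch assumes you have already fetched the markup some other way (for instance via the fsockopen approach above); the include path and the selector are placeholders, and the calls assume phpQuery's documented newDocumentHTML()/pq() helpers:

        <?php
        require_once 'phpQuery/phpQuery.php';

        // $response holds the raw HTML fetched earlier; phpQuery then
        // gives you jQuery-style CSS selectors over that document.
        phpQuery::newDocumentHTML($response);

        $links = array();
        foreach (pq('a') as $a) {            // each $a is a DOMElement
            $links[] = pq($a)->attr('href');
        }
        print_r($links);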
