I know the question regarding PHP web page scrapers has been asked time and time again, and using those answers, I discovered SimpleHTMLDOM. After working seamlessly on my local server, it stopped working on my remote server, where allow_url_fopen is disabled.
Here's a simple way to grab images when allow_url_fopen is set to false, without studying up on esoteric tools.

Create a web page in your dev environment that loads all the images you're scraping. You can then use your browser to save the images: File -> "Save Page As".

This is handy if you need a one-time solution for downloading a bunch of images from a remote server that has allow_url_fopen set to 0.

This worked for me after file_get_contents and curl failed.
If cURL and allow_url_fopen are not enabled, you can try to fetch the content via fsockopen(). In other words, you have to do the HTTP request manually. See the example in the PHP manual for how to do a GET request. The returned content can then be further processed. If sockets are enabled, you can also use any third-party library utilizing them, for instance Zend_Http_Client.
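Roughly, a manual GET over a socket looks like the sketch below; the host, path, and output filename are placeholder assumptions, and HTTP/1.0 is used to sidestep chunked responses:

```php
<?php
// Minimal sketch: fetch a single file by hand over a socket when
// neither cURL nor allow_url_fopen is available.
$host = 'www.example.com';    // assumed host
$path = '/images/photo.jpg';  // assumed path

$fp = fsockopen($host, 80, $errno, $errstr, 30);
if (!$fp) {
    die("Socket error $errno: $errstr");
}

// Plain HTTP/1.0 GET request, written by hand.
$request  = "GET $path HTTP/1.0\r\n";
$request .= "Host: $host\r\n";
$request .= "Connection: Close\r\n\r\n";
fwrite($fp, $request);

// Read the entire response (headers + body).
$response = '';
while (!feof($fp)) {
    $response .= fread($fp, 8192);
}
fclose($fp);

// Split off the headers and keep the body.
list($headers, $body) = explode("\r\n\r\n", $response, 2);
file_put_contents('photo.jpg', $body);
```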
On a side note, check out Best Methods to Parse HTML for alternatives to SimpleHTMLDOM.
file_get_contents() is the simplest method to grab a page without installing extra libraries.
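For comparison, when allow_url_fopen is enabled (not your case, but worth seeing), the fetch is essentially a one-liner; the URL below is a placeholder:

```php
<?php
// Simplest case: requires allow_url_fopen = On.
$html = file_get_contents('http://www.example.com/'); // placeholder URL
if ($html === false) {
    die('Request failed');
}
// $html now holds the page and can be handed to a parser.
```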
cURL is a specialty API. It's not the HTTP library it's often made out to be, but a generic data transfer library for FTP, SFTP, SCP, HTTP PUT, SMTP, TELNET, etc. If you want to use just HTTP, there is a corresponding PEAR library for that, or check whether your PHP version has the official http extension enabled. For scraping, try phpQuery or querypath. Both come with built-in HTTP support.
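If the cURL extension does turn out to be available on your host, a basic GET is short; this is a sketch with a placeholder URL, not tied to any particular parser:

```php
<?php
// Minimal cURL GET, assuming the extension is enabled.
$ch = curl_init('http://www.example.com/'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
$html = curl_exec($ch);
if ($html === false) {
    die('cURL error: ' . curl_error($ch));
}
curl_close($ch);
// $html can now be passed to phpQuery, querypath, or another parser.
```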