I have a PHP script that loads page content from another website using cURL and the simple_html_dom PHP library. This works great. If I echo out the HTML returned, I can see the div I am after, but it is empty, because its content is loaded by JavaScript/Ajax after the page loads. Is there any way to get that content?
Yes, it's a piece of cake if you are only interested in the particular HTML that is returned by the Ajax call.
Unfortunately, JavaScript runs client-side, in a browser, so unless the page is loaded in a web browser there is no simple way to do it.
The only way I can think of is to have a browser running in the background on the server, reloading the page and saving the generated HTML to a file that a PHP script can then fetch. That said, I don't know of anyone who has implemented such an idea.
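As a rough sketch of how that might look, assuming a headless Chromium binary is installed on the server (the URL and file name are placeholders; --dump-dom prints the DOM after JavaScript has run):

    <?php
    // Render the page in a background (headless) browser and save the result.
    // Assumes the `chromium` binary is on the server's PATH.
    $html = shell_exec('chromium --headless --dump-dom http://example.com/page.html');
    file_put_contents('rendered.html', $html);

    // A PHP script can then read the rendered markup from the file.
    echo file_get_contents('rendered.html');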
Better to try to find the URL that the div is being populated from. If the div contents are generated through Ajax, for example, you may be able to fetch that data-origin URL with cURL and get the data directly.
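A minimal sketch of that, assuming you have already found the data-origin URL (the endpoint below is a placeholder) by watching the network traffic in your browser's developer tools:

    <?php
    // Fetch the Ajax endpoint directly instead of the page that embeds it.
    $ch = curl_init('http://example.com/ajax/data.php');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the response instead of printing it
    $data = curl_exec($ch);
    curl_close($ch);

    echo $data; // the content the div would have been populated with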
For this kind of screen scraping you could try phpQuery or Snoopy.
phpQuery has a WebBrowser plugin and Snoopy claims to simulate one.
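A minimal Snoopy sketch (the include path and URL are placeholders; note that, like cURL, Snoopy does not execute JavaScript, so this mainly helps with server-rendered content):

    <?php
    include 'Snoopy.class.php';

    $snoopy = new Snoopy();
    if ($snoopy->fetch('http://example.com/page.html')) {
        echo $snoopy->results; // raw HTML of the fetched page
    }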
You can always bind to the event that fires when the XHR returns data to the browser and do your operations there:
    var xhReq = new XMLHttpRequest();
    xhReq.open("GET", "ur_php_url.php");
    xhReq.onreadystatechange = onResponse;
    xhReq.send(null);

    function onResponse()
    {
        // Only act once the request has completed successfully
        if (xhReq.readyState === 4 && xhReq.status === 200) {
            // do the necessary with xhReq.responseText
        }
    }
Yes, it is possible.
What you need to do is the following:
For example, say you want to get the content of http://www.domain.com/page.html, and page.html retrieves some other data via Ajax, say $("#div").load("http://www.domain.com/ajax/data.php?time=48484&c=487387").
First, make a cURL request to page.html and extract the full URL of the Ajax call, using PHP's preg_match() function or an equivalent in any other language. Then make another cURL request to that URL - http://www.domain.com/ajax/data.php?time=48484&c=487387 - and get its content.
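A minimal sketch of those two steps in PHP (the regular expression is an assumption that matches the jQuery load() call from the example; adjust it to the markup you actually receive):

    <?php
    // Step 1: fetch the page that embeds the Ajax call.
    $ch = curl_init('http://www.domain.com/page.html');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $page = curl_exec($ch);
    curl_close($ch);

    // Step 2: extract the Ajax URL from the page source, then fetch it.
    if (preg_match('~\.load\("(http://www\.domain\.com/ajax/data\.php\?[^"]+)"~', $page, $m)) {
        $ch = curl_init($m[1]);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $data = curl_exec($ch);
        curl_close($ch);

        echo $data; // the content of the Ajax response
    }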
You're all set!