PHP Get contents of webpage


Question


I am using the PHP Simple HTML DOM Parser to get the contents of a webpage. Even after confirming that what I was doing was correct, I still found that nothing was returned for one of the pages.

So here's what I am using to check whether anything is actually being fetched:

<?php
include_once('simple_html_dom.php');

error_reporting(E_ALL);
ini_set('display_errors', '1');

$first_url = "http://www.transfermarkt.co.uk/en/chinese-super-league/startseite/wettbewerb_CSL.html"; // works

$html = file_get_html($first_url);
echo "<textarea>Output\n===========\n $html</textarea><br /><br />";

$second_url = "http://www.transfermarkt.co.uk/en/chinese-super-league/torschuetzen/wettbewerb_CSL.html"; // does not work?

$html = file_get_html($second_url);
echo "<textarea>Output\n===========\n $html</textarea><br />";
?>

No errors. Nothing in the second textarea. The second URL does not seem to be getting scraped by the tool. Why?


Answer 1:


simple_html_dom.php contains:

define('MAX_FILE_SIZE', 600000);
...
// Inside file_get_html(): the fetched contents are silently discarded
// when they are empty or larger than MAX_FILE_SIZE.
if (empty($contents) || strlen($contents) > MAX_FILE_SIZE)
{
    return false;
}

The second page is over 672000 bytes, so this size check fails and file_get_html() returns false. Increase that constant and you should be OK.
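A minimal sketch of that fix. Some copies of simple_html_dom.php only define MAX_FILE_SIZE when it is not already set, so defining it before the include may be enough; if your copy defines it unconditionally, raise the value inside the library file instead. The 1200000 limit here is just an example value comfortably above the 672000-byte page:

<?php
// Raise the cap before including the library. If the library defines the
// constant unconditionally, change the define() inside simple_html_dom.php
// directly instead.
define('MAX_FILE_SIZE', 1200000);

include_once('simple_html_dom.php');

$second_url = "http://www.transfermarkt.co.uk/en/chinese-super-league/torschuetzen/wettbewerb_CSL.html";

$html = file_get_html($second_url);
if ($html === false) {
    // Still false: the download failed or the page is over the new cap.
    echo "Page could not be loaded.";
} else {
    echo "<textarea>Output\n===========\n $html</textarea><br />";
}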




Answer 2:


I tested your code and it works fine for me. Check your PHP memory limit; that may be the problem.

Increase your PHP memory limit and try again:

<?php

// Use this to increase the memory limit.
ini_set('memory_limit', '200M');

$second_url = "http://www.transfermarkt.co.uk/en/chinese-super-league/torschuetzen/wettbewerb_CSL.html"; // does not work?

$html = file_get_contents($second_url);
echo "<textarea>Output\n===========\n $html</textarea><br />";
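If memory really is the limiting factor, the same increase can be applied before calling the parser itself rather than plain file_get_contents(). A minimal sketch, assuming simple_html_dom.php is on the include path; note that the MAX_FILE_SIZE cap from Answer 1 still applies to file_get_html():

<?php
include_once('simple_html_dom.php');

// Raise the memory limit before parsing a large page.
ini_set('memory_limit', '200M');

$second_url = "http://www.transfermarkt.co.uk/en/chinese-super-league/torschuetzen/wettbewerb_CSL.html";

$html = file_get_html($second_url);
if ($html === false) {
    // file_get_html() returns false when the download fails or the page exceeds MAX_FILE_SIZE.
    echo "The parser returned nothing.";
} else {
    echo "<textarea>Output\n===========\n $html</textarea><br />";
}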


Source: https://stackoverflow.com/questions/16579258/php-get-contents-of-webpage
