I receive an HTML string using curl:
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$html_string = curl_exec($ch);
When I echo it, the full HTML is there, but str_get_html() fails to parse it and returns false.
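For reference, a minimal sketch of the full curl setup, assuming a POST request; the URL and POST fields are placeholders, not values from the original question:

```php
<?php
// Hypothetical URL and POST data -- stand-ins for the actual request.
$ch = curl_init('http://example.com/page');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return the body instead of printing it
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, ['key' => 'value']);

$html_string = curl_exec($ch);
if ($html_string === false) {
    die('curl error: ' . curl_error($ch));
}
curl_close($ch);

echo strlen($html_string);  // check how large the response actually is
```

Echoing strlen() first is a quick way to see whether the response is anywhere near the parser's size limit discussed below.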
The link you are curling seems to contain a lot of elements (it's a large file). I was parsing a string about as large as the one your link returns and ran into the same problem. After reading the source code I found the cause, and the fix works for me: simple_html_dom.php limits the size of the string it will read.
// get html dom from string
function str_get_html($str, $lowercase=true, $forceTagsClosed=true, $target_charset = DEFAULT_TARGET_CHARSET, $stripRN=true, $defaultBRText=DEFAULT_BR_TEXT, $defaultSpanText=DEFAULT_SPAN_TEXT)
{
    $dom = new simple_html_dom(null, $lowercase, $forceTagsClosed, $target_charset, $stripRN, $defaultBRText, $defaultSpanText);
    if (empty($str) || strlen($str) > MAX_FILE_SIZE)
    {
        $dom->clear();
        return false;
    }
    $dom->load($str, $lowercase, $stripRN);
    return $dom;
}
You have to change the default size limit below (it's defined near the top of simple_html_dom.php). Maybe change it to 100000000? It's up to you.
define('MAX_FILE_SIZE', 6000000);
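A sketch of how you would raise the limit without editing the library itself; this only works if your version of simple_html_dom.php wraps the define in a defined() guard, which is an assumption about your copy of the library:

```php
<?php
// Only effective if simple_html_dom.php checks defined('MAX_FILE_SIZE')
// before calling define(); otherwise edit the constant at the top of
// the library file directly, as described above.
define('MAX_FILE_SIZE', 100000000);
require_once 'simple_html_dom.php';

$dom = str_get_html($html_string);  // $html_string from the curl call
if ($dom === false) {
    // string is empty, or still larger than MAX_FILE_SIZE
}
```

Either way, check the return value of str_get_html(): false here is exactly the symptom the size limit produces.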
Did you check whether the HTML is somehow encoded in a way HTML DOM PARSER doesn't expect? E.g. with HTML entities like &lt;html&gt; instead of <html> – that would still be displayed as correct HTML in your browser, but it wouldn't parse.
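If entity encoding is indeed the issue, decoding the string before parsing is a quick check; html_entity_decode() is plain PHP, no library needed:

```php
<?php
// A fragment whose markup arrives as entities rather than real tags.
$encoded = '&lt;html&gt;&lt;body&gt;Hello&lt;/body&gt;&lt;/html&gt;';

// Decode the entities back into tags before handing it to str_get_html().
$decoded = html_entity_decode($encoded, ENT_QUOTES | ENT_HTML5, 'UTF-8');

echo $decoded;  // <html><body>Hello</body></html>
```

If the decoded string suddenly parses, the server is double-encoding its response and you should decode it right after curl_exec().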
I assume you are using curl + str_get_html instead of simply using file_get_html with the URL because of the POST parameters you need to send.
You can use the W3C validator (http://validator.w3.org/#validate_by_input+with_options) to validate the returned HTML. Then, once you are sure the result is 100% valid HTML, you can report a bug here: http://sourceforge.net/p/simplehtmldom/bugs/.