I have a file named file.txt which is updated by appending lines to it. I am reading it with this code:
$fp = fopen("file.txt", "r");
$data = fread($fp, filesize("file.txt"));
PHP's file() function reads the whole file into an array. This solution requires the least amount of typing:
$data = array_slice(file('file.txt'), -5);
foreach ($data as $line) {
echo $line;
}
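If the trailing newlines on each element get in the way, file() also accepts optional flags. A small self-contained sketch of the same one-liner (the file name demo.txt and the sample data are made up for illustration):

```php
<?php
// FILE_IGNORE_NEW_LINES strips the trailing "\n" from each array element.
// demo.txt is a stand-in for file.txt, filled with sample lines 1..10.
file_put_contents('demo.txt', implode("\n", range(1, 10)) . "\n");

$last5 = array_slice(file('demo.txt', FILE_IGNORE_NEW_LINES), -5);
print_r($last5); // the last five lines: 6, 7, 8, 9, 10

unlink('demo.txt');
```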
This function works even for really large files (up to 4 GB, where 32-bit file offsets run out). The speed comes from reading a big chunk of data at a time and counting the newlines in it, instead of reading one byte at a time.
// Seeks backwards $n lines from the current position in $fh.
// Returns the new position, or false if it could not move back.
function seekLineBackFast($fh, $n = 1)
{
    $pos = ftell($fh);
    if (!$pos) {
        fseek($fh, 0, SEEK_SET);
        return false;
    }
    $posAtStart = $pos;
    $readSize = 2048 * 2;
    // We want to seek to 1 line before the line we want,
    // so that we can start at the very beginning of that line.
    while ($n >= 0) {
        if ($pos == 0)
            break;
        $pos -= $readSize;
        if ($pos < 0) {
            $readSize += $pos; // shrink the read so chunks don't overlap
            $pos = 0;
        }
        // fseek returns 0 on success and -1 on error
        if (fseek($fh, $pos, SEEK_SET) == -1) {
            fseek($fh, 0, SEEK_SET);
            break;
        }
        $data = fread($fh, $readSize);
        $n -= substr_count($data, "\n");
        if ($n < 0)
            break;
    }
    fseek($fh, $pos, SEEK_SET);
    // We may have seeked too far back,
    // so read one line at a time forward.
    while ($n < 0) {
        fgets($fh);
        $n++;
    }
    // Check that we have indeed gone back.
    $pos = ftell($fh);
    if ($pos >= $posAtStart)
        return false;
    return $pos;
}
After running the above function, you can simply call fgets() in a loop to read one line at a time from $fh.
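That pattern can be sketched in a compact, self-contained form. The helper seekToLastLines below is a hypothetical single-chunk simplification (it assumes the last $n lines fit in one chunk; the chunk size and the sample data in file.txt are assumptions, not part of the original answer):

```php
<?php
// Position the handle at the start of the last $n lines, then read
// forward with fgets(). Single-chunk simplification: assumes the last
// $n lines fit inside one $chunkSize-byte read from the end.
function seekToLastLines($fh, $n, $chunkSize = 4096)
{
    fseek($fh, 0, SEEK_END);
    $size = ftell($fh);
    $chunk = min($size, $chunkSize);
    fseek($fh, -$chunk, SEEK_END);
    $data = rtrim(fread($fh, $chunk), "\n"); // ignore the trailing newline
    $offset = strlen($data);
    // Walk backwards over the last $n newlines.
    for ($i = 0; $i < $n; $i++) {
        $p = strrpos(substr($data, 0, $offset), "\n");
        if ($p === false) { $offset = -1; break; } // fewer than $n lines
        $offset = $p;
    }
    fseek($fh, $size - $chunk + $offset + 1, SEEK_SET);
}

file_put_contents('file.txt', implode("\n", range(1, 100)) . "\n"); // sample data

$fh = fopen('file.txt', 'r');
seekToLastLines($fh, 5);
while (($line = fgets($fh)) !== false) {
    echo $line; // prints lines 96..100
}
fclose($fh);
```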
Here is my solution:
/**
 * Reads N lines from a file.
 *
 * @param string $file     Path to the file
 * @param int    $maxLines Number of lines to read
 * @param bool   $reverse  Set to true if the result should be reversed
 * @return string
 */
public function readLinesFromFile($file, $maxLines, $reverse = false)
{
    $lines = file($file);
    if ($maxLines > count($lines)) {
        exit("\$maxLines is greater than the number of lines in the file.");
    }
    if ($reverse) {
        $lines = array_reverse($lines);
    }
    $tmpArr = array();
    for ($i = 0; $i < $maxLines; $i++) {
        array_push($tmpArr, $lines[$i]);
    }
    if ($reverse) {
        $tmpArr = array_reverse($tmpArr);
    }
    $out = "";
    for ($i = 0; $i < $maxLines; $i++) {
        $out .= $tmpArr[$i] . "<br/>";
    }
    return $out;
}
Most of the options here propose reading the whole file into memory and then working with the rows. That is not a good idea if the file is too large.
I think the best way is to use an OS utility, like tail on Unix:
exec('tail -3 /logs/reports/2017/02-15/173606-arachni-2415.log', $output);
echo implode("\n", $output);
// 2017-02-15 18:03:25 [*] Path Traversal: Analyzing response ...
// 2017-02-15 18:03:27 [*] Path Traversal: Analyzing response ...
// 2017-02-15 18:03:27 [*] Path Traversal: Analyzing response ...
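A slightly hardened variant of the same call (the file name demo.log and its contents are made up for illustration): tail -n 3 is the portable POSIX spelling, and escapeshellarg() protects paths containing spaces or shell metacharacters.

```php
<?php
// Sample data standing in for a real log file.
file_put_contents('demo.log', implode("\n", range(1, 10)) . "\n");

// exec() fills $output with one array element per line of tail's output.
exec('tail -n 3 ' . escapeshellarg('demo.log'), $output);
echo implode("\n", $output), "\n"; // 8, 9 and 10, one per line

unlink('demo.log');
```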
Opening large files with file() can generate a large array, reserving a considerable chunk of memory. You can reduce the memory cost with SplFileObject, since it iterates through the file one line at a time. Use the seek method (from SeekableIterator) to jump to the last line, then subtract 5 from the current key value. To reach the last line, seek to PHP_INT_MAX. (Yes, this is a workaround.)
$file = new SplFileObject('large_file.txt', 'r');
$file->seek(PHP_INT_MAX);
$last_line = $file->key();
$lines = new LimitIterator($file, $last_line - 5, -1); // third argument is a count; -1 = read to the end
print_r(iterator_to_array($lines));
function ReadFromEndByLine($filename, $lines)
{
    /* freely customisable number of bytes read per time */
    $bufferlength = 5000;
    $handle = @fopen($filename, "r");
    if (!$handle) {
        echo "Error: can't find or open $filename<br/>\n";
        return -1;
    }
    /* get the file size with a trick */
    fseek($handle, 0, SEEK_END);
    $filesize = ftell($handle);
    /* don't want to get past the start-of-file */
    $position = -min($bufferlength, $filesize);
    $aliq = "";
    $result = array();
    while ($lines > 0) {
        if (fseek($handle, $position, SEEK_END) === -1) { /* should not happen, but better to check */
            echo "Error: something went wrong<br/>\n";
            fclose($handle);
            return $result;
        }
        /* big read */
        $buffer = fread($handle, $bufferlength);
        /* small split */
        $tmp = explode("\n", $buffer);
        /* a previous read may have stored a partial line in $aliq */
        if ($aliq != "") {
            /* concatenate the current last line with the piece left over from the previous read */
            $tmp[count($tmp) - 1] .= $aliq;
        }
        /* drop the first line because it may not be complete */
        $aliq = array_shift($tmp);
        $read = count($tmp);
        if ($read >= $lines) { /* have read enough! */
            /* keep only the last $lines of them and merge into the result */
            $result = array_merge(array_slice($tmp, $read - $lines), $result);
            /* break the cycle */
            $lines = 0;
        } elseif (-$position >= $filesize) { /* haven't read enough, but arrived at the start of the file */
            /* $aliq contains the very first line of the file */
            $result = array_merge(array($aliq), $tmp, $result);
            /* force it to stop reading */
            $lines = 0;
        } else { /* continue reading... */
            /* stack the freshly grabbed lines on top of the others */
            $result = array_merge($tmp, $result);
            $lines -= $read;
            /* next time we want to read another block */
            $position -= $bufferlength;
            /* don't want to get past the start of the file */
            $position = max($position, -$filesize);
        }
    }
    fclose($handle);
    return $result;
}
This is fast for larger files, though it is a lot of code for such a simple task. If you are dealing with LARGE FILES, use this:
print_r(ReadFromEndByLine('myFile.txt', 6));