I commonly work with text files of ~20 GB in size, and I find myself counting the number of lines in a given file very often.
The way I do it now is just cat fn
I have a 645GB text file, and none of the earlier exact solutions (e.g. wc -l) returned an answer within 5 minutes.
Instead, here is a Python script that computes the approximate number of lines in a huge file. (My text file apparently has about 5.5 billion lines.) The Python script does the following:
A. Counts the number of bytes in the file.
B. Reads the first N lines of the file (as a sample) and computes the average line length.
C. Computes A/B as the approximate number of lines.
It follows along the lines of Nico's answer, but instead of taking the length of one line, it computes the average length of the first N lines.
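As a quick sanity check of step C with the numbers from the run shown below (the printed mean is rounded, so the quotient differs slightly from the script's exact output):

645620992933 bytes / 116.34 bytes per line ≈ 5549432637 lines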
Note: I'm assuming an ASCII text file, so I expect the Python len() function to return the number of characters, which for ASCII equals the number of bytes.
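If your file is not ASCII (e.g. UTF-8 with multi-byte characters), len() counts characters, not bytes, and the estimate will skew high. A minimal illustration of the difference (Python 3; not part of the script below):

line = 'héllo\n'                    # 'é' is two bytes in UTF-8
print(len(line))                    # 6 characters
print(len(line.encode('utf-8')))   # 7 bytes

In that case you could use len(line.encode('utf-8')) when accumulating the sample lengths.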
Put this code into a file line_length.py:
#!/usr/bin/env python
# Usage:
# python line_length.py <filename> <N>
import os
import sys

import numpy as np

if __name__ == '__main__':
    file_name = sys.argv[1]
    N = int(sys.argv[2])  # Number of first lines to use as sample.

    file_length_in_bytes = os.path.getsize(file_name)

    lengths = []  # Accumulate line lengths.
    num_lines = 0
    with open(file_name) as f:
        for line in f:
            num_lines += 1
            if num_lines > N:
                break
            lengths.append(len(line))  # Includes the trailing newline.

    arr = np.array(lengths)
    line_length_mean = np.mean(arr)
    line_length_std = np.std(arr)
    line_count_mean = file_length_in_bytes / line_length_mean  # A / B.

    print('File has %d bytes.' % (file_length_in_bytes))
    print('%.2f mean bytes per line (%.2f std)' % (line_length_mean, line_length_std))
    print('Approximately %d lines' % (line_count_mean))
Invoke it like this with N=5000:
% python line_length.py big_file.txt 5000
File has 645620992933 bytes.
116.34 mean bytes per line (42.11 std)
Approximately 5549547119 lines
So there are about 5.5 billion lines in the file.
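One caveat: sampling only the first N lines can bias the estimate if line lengths drift over the file (e.g. a long header section). Here is a sketch of a variant I have not benchmarked, which samples lines at random byte offsets instead; all names in it are my own, not part of the script above:

#!/usr/bin/env python
# Sketch: estimate line count by sampling at random byte offsets.
import os
import random
import sys

def estimate_line_count(file_name, samples=5000):
    size = os.path.getsize(file_name)
    lengths = []
    with open(file_name, 'rb') as f:
        while len(lengths) < samples:
            f.seek(random.randrange(size))
            f.readline()         # Skip the partial line we landed in.
            line = f.readline()  # Measure the next complete line, which
                                 # avoids length-biased sampling.
            if line:             # Empty at EOF; retry with a new offset.
                lengths.append(len(line))
    return int(size * samples / sum(lengths))

if __name__ == '__main__':
    print('Approximately %d lines' % estimate_line_count(sys.argv[1]))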