I have an ASCII table in a file from which I want to read a particular set of lines (e.g. lines 4003 to 4005). The issue is that this file could be very, very long (e.g. 100
The main problem here is that line breaks are no different from any other character, so the OS has no way of skipping straight to a given line.
That said, there are a few options, but for every one you have to make sacrifices in one way or another.
You already stated the first one: use a binary file. If you have a fixed line length, then you can seek ahead line * bytes_per_line bytes and jump directly to that line.
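As a rough sketch of that idea (the file name fixed.bin and the record length are placeholder assumptions, not anything from the question):

# Sketch: random access into a file whose lines all have the same length.
# 'fixed.bin' and BYTES_PER_LINE are hypothetical; adjust to your data.
BYTES_PER_LINE = 80   # fixed record length, newline included

def read_fixed_line(path, lineno):
    with open(path, 'rb') as f:
        f.seek(lineno * BYTES_PER_LINE)   # jump straight to the start of the line
        return f.read(BYTES_PER_LINE)

print(read_fixed_line('fixed.bin', 4003))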
The next option would be using an index: create a second file and in every line of this index file write the byte offset of the corresponding line in your data file. Accessing the data file now involves two seek operations (skip to the right entry of the index, then skip to that offset in the data file), but it will still be pretty fast. Plus: it saves disk space because the lines can have different lengths. Minus: you can't touch the data file with an editor.
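A minimal sketch of that index idea (the file names data.txt and data.idx are placeholders; the index stores each line's byte offset as a fixed-width 8-byte integer so the index itself can be seeked into):

import struct

# Build the index once: one 8-byte offset per line of the data file.
def build_index(datafile, indexfile):
    offset = 0
    with open(datafile, 'rb') as data, open(indexfile, 'wb') as idx:
        for line in data:
            idx.write(struct.pack('<Q', offset))
            offset += len(line)

# Two seeks per access: one into the index, one into the data file.
def read_indexed_line(datafile, indexfile, lineno):
    with open(indexfile, 'rb') as idx:
        idx.seek(lineno * 8)
        (offset,) = struct.unpack('<Q', idx.read(8))
    with open(datafile, 'rb') as data:
        data.seek(offset)
        return data.readline()

Here lineno is 0-based; building the index reads the whole file once, but every lookup afterwards costs only two seeks.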
One more option (I think I would go with this one): use only one file, but begin every line with the line number and some kind of separator (e.g. 4005: My data line). Now you can use a modified version of binary search (https://en.wikipedia.org/wiki/Binary_search_algorithm) to seek for your line. This will take around log(n) seek operations, with n being the total number of lines. Plus: you can edit the file, and it saves space compared to fixed-length lines. And it's still very fast: even for one million lines this is only about 20 seek operations, which happen in no time. Minus: the most complex of these possibilities. (But fun to do ;)
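One possible shape for that binary search, as a sketch only (assuming a hypothetical file numbered.txt whose lines are sorted by their leading number and each start with "number:" as in the example above):

# Sketch: binary search over a file whose lines look like b'4005: My data line'.
def _line_at(f, offset):
    """Return (start, line) for the first line starting at or after offset."""
    if offset == 0:
        f.seek(0)
    else:
        f.seek(offset - 1)
        if f.read(1) != b'\n':
            f.readline()            # landed mid-line: skip to the next line start
    return f.tell(), f.readline()

def find_numbered_line(path, target):
    with open(path, 'rb') as f:
        lo, hi = 0, f.seek(0, 2)    # hi = file size in bytes
        # Invariant: if the target line exists, it starts at an offset in [lo, hi).
        while lo < hi:
            mid = (lo + hi) // 2
            start, line = _line_at(f, mid)
            if not line:                     # nothing starts at or after mid
                hi = mid
                continue
            number = int(line.split(b':', 1)[0])
            if number == target:
                return line
            if number < target:
                lo = start + len(line)       # target must start after this line
            else:
                hi = mid                     # target must start before mid
    return None

Each probe seeks into the middle of the remaining range, realigns to the next line start, and halves the range, so a lookup costs the roughly log(n) seeks described above.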
EDIT: One more solution: split your file into many smaller ones. If you have very long 'lines' this could be as small as one line per file, and then I would group them in folders, e.g. 4/0/05. But even with shorter lines, divide your file into, say, roughly 1 MB chunks, name them 1000.txt, 2000.txt, and so on; reading the one (or two) chunks matching your line completely should be pretty fast and very easy to implement.
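A sketch of that chunked variant, assuming (hypothetically) that chunk 1000.txt holds lines 0-999, 2000.txt holds lines 1000-1999, and so on:

import os

LINES_PER_CHUNK = 1000   # assumed chunk size

def read_chunked_line(directory, lineno):
    # 1000.txt holds lines 0-999, 2000.txt holds lines 1000-1999, ...
    chunk = "%d.txt" % ((lineno // LINES_PER_CHUNK + 1) * LINES_PER_CHUNK)
    with open(os.path.join(directory, chunk)) as f:
        for i, line in enumerate(f):
            if i == lineno % LINES_PER_CHUNK:
                return line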
I ran into a similar problem to the one in the post above; however, the solutions posted above have problems in my particular scenario: the file was too big for linecache and islice was nowhere near fast enough. I would like to offer a third (or fourth) alternative solution.
My solution is based upon the fact that we can use mmap to access a particular point in the file. We need only know where in the file each line begins and ends; then mmap can give us those lines comparably as fast as linecache (to optimize this code, see the updates below).
The following is a simple wrapper for the process:
from collections import deque
import mmap

class fast_file():
    def __init__(self, file):
        self.file = file
        # Byte offsets at which lines start; a deque appends quickly.
        self.linepoints = deque()
        self.linepoints.append(0)
        pos = 0
        # Scan once in binary mode so the counted offsets match the mmap exactly.
        with open(file, 'rb') as fp:
            while True:
                c = fp.read(1)
                if not c:
                    break
                pos += 1
                if c == b'\n':
                    # The next line starts right after this newline.
                    self.linepoints.append(pos)
        self.fp = open(self.file, 'r+b')
        self.mm = mmap.mmap(self.fp.fileno(), 0)
        # Record the end of the file so the last line also has an end offset.
        self.linepoints.append(pos)
        # A deque appends quickly, but a list is faster for random access.
        self.linepoints = list(self.linepoints)

    def getline(self, i):
        # Slice the mmap between two recorded line offsets; returns bytes.
        return self.mm[self.linepoints[i]:self.linepoints[i + 1]]

    def close(self):
        self.fp.close()
        self.mm.close()
The caveat is that the file and the mmap need closing, and enumerating the line endpoints can take some time, but it is a one-off cost. The result is something that is fast both to instantiate and in random file access; however, the output is of type bytes.
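For example, usage looks roughly like this (getline here is 0-indexed over the recorded offsets and returns raw bytes, so decode if you need a str):

F = fast_file("sample.txt")   # one-off scan that records the line offsets
raw = F.getline(4003)         # bytes of line 4003 (0-indexed), trailing newline included
print(raw.decode())           # decode to str if needed
F.close()                     # release the mmap and the underlying file handle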
I tested the speed using a sample consisting of the first 1 million lines of my large file (48 million lines in total). I ran the following to get an idea of the time taken to do 10 million accesses:
import linecache
from time import sleep, time

linecache.getline("sample.txt", 0)   # prime linecache's cache
F = fast_file("sample.txt")

sleep(1)
start = time()
for i in range(10000000):
    linecache.getline("sample.txt", 1000)
print(time() - start)

>>> 6.914520740509033

sleep(1)
start = time()
for i in range(10000000):
    F.getline(1000)
print(time() - start)

>>> 4.488042593002319

sleep(1)
start = time()
for i in range(10000000):
    F.getline(1000).decode()
print(time() - start)

>>> 6.825756549835205
It's not that much faster, and it takes some time to initiate (longer, in fact); however, consider that my original file was too large for linecache. This simple wrapper allowed me to do random accesses to lines that linecache was unable to handle on my computer (32 GB of RAM).
I think this might now be a faster alternative to linecache (speeds may depend on I/O and RAM speeds), but if you have a way to improve it, please add a comment and I will update the solution accordingly.
Update
I recently replaced the list with a collections.deque, which is faster to append to.
Second update
collections.deque is faster for the append operation, but a list is faster for random access; hence, the conversion from a deque to a list here optimizes both random-access time and instantiation. I've added sleeps to this test, and the decode() call in the comparison, to make the comparison fair, since the mmap returns bytes.
I would probably just use itertools.islice. Using islice over an iterable like a file handle means the whole file is never read into memory, and the first 4002 lines are discarded as quickly as possible. You could even cast the two lines you need into a list pretty cheaply (assuming the lines themselves aren't very long). Then you can exit the with block, closing the file handle.
from itertools import islice

with open('afile') as f:
    lines = list(islice(f, 4003, 4005))

do_something_with(lines)
But holy cow is linecache faster for multiple accesses. I created a million-line file to compare islice and linecache and linecache blew it away.
>>> timeit("x=islice(open('afile'), 4003, 4005); print next(x) + next(x)", 'from itertools import islice', number=1)
4003
4004
0.00028586387634277344
>>> timeit("print getline('afile', 4003) + getline('afile', 4004)", 'from linecache import getline', number=1)
4002
4003
2.193450927734375e-05
>>> timeit("getline('afile', 4003) + getline('afile', 4004)", 'from linecache import getline', number=10**5)
0.14125394821166992
>>> timeit("''.join(islice(open('afile'), 4003, 4005))", 'from itertools import islice', number=10**5)
14.732316970825195
This is not a practical test, but even when re-importing linecache at each step it's only a second slower than islice.
>>> timeit("from linecache import getline; getline('afile', 4003) + getline('afile', 4004)", number=10**5)
15.613967180252075
Yes, linecache is faster than islice in every case except constantly re-creating the linecache, but who does that? For the likely scenarios (reading only a few lines once, or reading many lines once) linecache is faster and presents a terse syntax, but the islice syntax is quite clean and fast as well and never reads the whole file into memory. In a RAM-tight environment, the islice solution may be the right choice. For very high speed requirements, linecache may be the better choice. Practically, though, in most environments both times are small enough that it almost doesn't matter.