In Python 3.6, it takes longer to read a file if there are line breaks. If I have two files, one with line breaks and one without line breaks (but otherwise the same content), the one with line breaks takes longer to read.
On Windows, opening in text mode converts '\n' characters to '\r\n' when you write, and the reverse when you read.
So, I did some experimentation. I'm on macOS right now, so my "native" line ending is '\n'. I cooked up a test similar to yours, except using non-native, Windows line endings:
sizeMB = 128
sizeKB = 1024 * sizeMB

with open(r'bigfile_one_line.txt', 'w') as f:
    for i in range(sizeKB):
        f.write('Hello World!!\t'*73)  # There are roughly 73 phrases in one KB

with open(r'bigfile_newlines.txt', 'w') as f:
    for i in range(sizeKB):
        f.write('Hello World!\r\n'*73)
And the results:
In [4]: %%timeit
...: with open('bigfile_one_line.txt', 'r') as f:
...: text = f.read()
...:
1 loop, best of 3: 141 ms per loop
In [5]: %%timeit
...: with open('bigfile_newlines.txt', 'r') as f:
...: text = f.read()
...:
1 loop, best of 3: 543 ms per loop
In [6]: %%timeit
...: with open('bigfile_one_line.txt', 'rb') as f:
...: text = f.read()
...:
10 loops, best of 3: 76.1 ms per loop
In [7]: %%timeit
...: with open('bigfile_newlines.txt', 'rb') as f:
...: text = f.read()
...:
10 loops, best of 3: 77.4 ms per loop
Very similar to yours, and note that the performance difference disappears when I open the files in binary mode. OK, what if I use *nix line endings instead?
with open(r'bigfile_one_line_nix.txt', 'w') as f:
    for i in range(sizeKB):
        f.write('Hello World!\t'*73)  # There are roughly 73 phrases in one KB

with open(r'bigfile_newlines_nix.txt', 'w') as f:
    for i in range(sizeKB):
        f.write('Hello World!\n'*73)
And the results using these new files:
In [11]: %%timeit
...: with open('bigfile_one_line_nix.txt', 'r') as f:
...: text = f.read()
...:
10 loops, best of 3: 144 ms per loop
In [12]: %%timeit
...: with open('bigfile_newlines_nix.txt', 'r') as f:
...: text = f.read()
...:
10 loops, best of 3: 138 ms per loop
Aha! The performance difference disappears! So yes, I think using non-native line-endings impacts performance, which makes sense given the behavior of text-mode.
However, I would expect all characters to be treated the same.
Well, they're not. Line breaks are special.
Line breaks aren't always represented as \n. The reasons are a long story dating back to the early days of physical teleprinters, which I won't go into here, but where that story ended up is that Windows uses \r\n, Unix uses \n, and classic Mac OS used \r.
If you open a file in text mode, the line breaks used by the file will be translated to \n when you read, and \n will be translated to your OS's line break convention when you write. In most programming languages this is handled on the fly by OS-level code and is pretty cheap, but Python does things differently.
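The write-side half of that translation is easy to observe: write '\n' in text mode, then read the raw bytes back in binary mode. This is a minimal sketch; the file name demo_write.txt is just a throwaway for illustration.

```python
import os

# In text mode, each '\n' you write is translated to the platform's
# line ending (os.linesep) in the bytes that actually reach disk.
with open('demo_write.txt', 'w') as f:
    f.write('one\ntwo\n')

# Binary mode shows the raw bytes, with no translation on read.
with open('demo_write.txt', 'rb') as f:
    raw = f.read()

print(raw == b'one\ntwo\n'.replace(b'\n', os.linesep.encode()))  # True
```

On Unix-like systems os.linesep is '\n' so the bytes come back unchanged; on Windows you'd see b'one\r\ntwo\r\n'.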
Python has a feature called universal newlines, where it tries to handle all line break conventions, no matter what OS you're on. Even if a file contains a mix of \r, \n, and \r\n line breaks, Python will recognize all of them and translate them to \n. Universal newlines is on by default in Python 3 unless you configure a specific line ending convention with the newline argument to open.
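Both behaviors are easy to demonstrate with a file containing a deliberate mix of endings (demo_mixed.txt is a made-up name for this sketch):

```python
# Write raw bytes with a mix of Windows (\r\n), old-Mac (\r), and Unix (\n)
# line endings, bypassing any write-side translation via binary mode.
with open('demo_mixed.txt', 'wb') as f:
    f.write(b'a\r\nb\rc\nd')

# Default text mode: universal newlines translates every style to '\n'.
with open('demo_mixed.txt', 'r') as f:
    translated = f.read()

# newline='' disables translation; you see the file's endings as-is.
with open('demo_mixed.txt', 'r', newline='') as f:
    untranslated = f.read()

print(repr(translated))    # 'a\nb\nc\nd'
print(repr(untranslated))  # 'a\r\nb\rc\nd'
```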
In universal newlines mode, the file implementation has to read the file in binary mode, check the contents for \r and \r\n sequences, and construct a new string object with line endings translated if it finds \r or \r\n line endings. If it only finds \n endings, or if it finds no line endings at all, it doesn't need to perform the translation pass or construct a new string object.
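You can peek at what the decoder actually saw via the .newlines attribute of text-mode file objects, which records the line-ending styles encountered so far (demo_nl.txt is just an illustrative file name):

```python
# Create a file containing only Unix-style endings.
with open('demo_nl.txt', 'wb') as f:
    f.write(b'only\nunix\nendings\n')

# After reading in text mode, .newlines reports the styles that were seen.
with open('demo_nl.txt', 'r') as f:
    f.read()
    styles = f.newlines

print(styles)  # '\n' -- a single style was found, so no rewriting was needed
```

Had the file mixed conventions, .newlines would be a tuple of the styles found, the case where the translation pass has real work to do.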
Constructing a new string and translating line endings takes time. When reading the file with the tabs, Python doesn't have to perform that translation.
When you open a file in Python in text mode (the default), it uses what it calls "universal newlines" (introduced with PEP 278, but somewhat changed later with the release of Python 3). What universal newlines means is that regardless of what kind of newline characters are used in the file, you'll see only \n in Python. So a file containing foo\nbar would appear the same as a file containing foo\r\nbar or foo\rbar (since \n, \r\n and \r are all line ending conventions used on some operating systems at some time).
The logic that provides that support is probably what causes your performance differences. Even if the \n characters in the file are not being transformed, the code needs to examine them more carefully than it does non-newline characters.
I suspect the performance difference you see will disappear if you open your files in binary mode, where no such newline support is provided. You can also pass a newline parameter to open in Python 3, which can have various meanings depending on exactly what value you give. I have no idea what impact any specific value would have on performance, but it might be worth testing if the performance difference you're seeing actually matters to your program. I'd try passing newline="" and newline="\n" (or whatever your platform's conventional line ending is).
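The suggestions above can be tested together in one script. This is a sketch, not a rigorous benchmark: bench.txt is a made-up file name, and the timings will vary by machine, but the pattern should match the experiments earlier in the thread.

```python
import timeit

# Build a ~1 MB file of Windows-style lines; newline='' on write keeps
# the '\r\n' sequences from being translated again by text mode.
with open('bench.txt', 'w', newline='') as f:
    for _ in range(1024):
        f.write('Hello World!\r\n' * 73)   # roughly 1 KB per iteration

def read_once(mode, **kwargs):
    with open('bench.txt', mode, **kwargs) as f:
        return f.read()

# Compare default text mode (translation on), text mode with translation
# disabled, and binary mode.
for label, mode, kwargs in [('text, default', 'r', {}),
                            ("text, newline=''", 'r', {'newline': ''}),
                            ('binary', 'rb', {})]:
    t = timeit.timeit(lambda: read_once(mode, **kwargs), number=5)
    print(f'{label:18s} {t:.3f}s')
```

If translation is indeed the cost, the newline='' and binary reads should be close to each other, with the default text-mode read noticeably slower.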