Recently I started using Python 3, and its lack of xrange hurts.
A simple example, timing the same loop in both versions (a minimal sketch of what I mean):
1) Python 2:
from time import time as t
start = t()
total = [x for x in xrange(1000000) if x % 4]  # the loop used in the timings below
print(t() - start)
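2) Python 3 (the same loop, using range, since xrange is gone):
from time import time as t
start = t()
total = [x for x in range(1000000) if x % 4]
print(t() - start)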
xrange in Python 2 is lazy: it yields its values as you iterate, while range just builds the whole list up front. I don't understand why xrange was dropped in Python 3.
One way to fix up your Python 2 code is:
import sys

if sys.version_info >= (3, 0):
    def xrange(*args, **kwargs):
        return iter(range(*args, **kwargs))
Python 3's range is Python 2's xrange. There's no need to wrap an iter around it. To get an actual list in Python 3, you need to use list(range(...)).
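For example, in Python 3:
>>> range(5)
range(0, 5)
>>> list(range(5))
[0, 1, 2, 3, 4]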
If you want something that works with both Python 2 and Python 3, try this:
try:
    xrange
except NameError:
    xrange = range
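With that shim in place, the same loop runs unchanged under either interpreter, for example:
for i in xrange(3):
    print(i)  # the built-in xrange on Python 2, range on Python 3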
comp:~$ python
Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2
>>> import timeit
>>> timeit.timeit("[x for x in xrange(1000000) if x%4]",number=100)
5.656799077987671
>>> timeit.timeit("[x for x in xrange(1000000) if x%4]",number=100)
5.579368829727173
>>> timeit.timeit("[x for x in range(1000000) if x%4]",number=100)
21.54827117919922
>>> timeit.timeit("[x for x in range(1000000) if x%4]",number=100)
22.014557123184204
With timeit's number=1 parameter:
>>> timeit.timeit("[x for x in range(1000000) if x%4]",number=1)
0.2245171070098877
>>> timeit.timeit("[x for x in xrange(1000000) if x%4]",number=1)
0.10750913619995117
comp:~$ python3
Python 3.4.3 (default, Oct 14 2015, 20:28:29) [GCC 4.8.4] on linux
>>> timeit.timeit("[x for x in range(1000000) if x%4]",number=100)
9.113872020003328
>>> timeit.timeit("[x for x in range(1000000) if x%4]",number=100)
9.07014398300089
With number=1, 2, 3, and 4, it runs quickly and scales linearly:
>>> timeit.timeit("[x for x in range(1000000) if x%4]",number=1)
0.09329321900440846
>>> timeit.timeit("[x for x in range(1000000) if x%4]",number=2)
0.18501482300052885
>>> timeit.timeit("[x for x in range(1000000) if x%4]",number=3)
0.2703447980020428
>>> timeit.timeit("[x for x in range(1000000) if x%4]",number=4)
0.36209142999723554
So it seems that if we measure a single run of the loop, like timeit.timeit("[x for x in range(1000000) if x%4]", number=1) (which is closer to how the code is actually used), Python 3 is quick enough; but over repeated runs, Python 2's xrange() still beats Python 3's range() on speed.
Python 3's range type works just like Python 2's xrange. I'm not sure why you're seeing a slowdown, since the iterator returned by your xrange function is exactly what you'd get if you iterated over range directly.
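A quick way to see that the wrapper adds nothing (Python 3; the exact type name is a CPython detail):
>>> r = range(1000000)
>>> type(iter(r))
<class 'range_iterator'>
That is the same iterator a for loop would request from the range itself, so calling iter() by hand changes nothing.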
I'm not able to reproduce the slowdown on my system. Here's how I tested:
Python 2, with xrange:
Python 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> import timeit
>>> timeit.timeit("[x for x in xrange(1000000) if x%4]",number=100)
18.631936646865853
Python 3, with range, is a tiny bit faster:
Python 3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:57:17) [MSC v.1600 64 bit (AMD64)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> import timeit
>>> timeit.timeit("[x for x in range(1000000) if x%4]",number=100)
17.31399508687869
I recently learned that Python 3's range type has some other neat features, such as support for slicing: range(10,100,2)[5:25:5] is range(20, 60, 10)!
Some performance measurements, using timeit instead of trying to do it manually with time.
First, Apple 2.7.2 64-bit:
In [37]: %timeit collections.deque((x for x in xrange(10000000) if x%4 == 0), maxlen=0)
1 loops, best of 3: 1.05 s per loop
Now, python.org 3.3.0 64-bit:
In [83]: %timeit collections.deque((x for x in range(10000000) if x%4 == 0), maxlen=0)
1 loops, best of 3: 1.32 s per loop
In [84]: %timeit collections.deque((x for x in xrange(10000000) if x%4 == 0), maxlen=0)
1 loops, best of 3: 1.31 s per loop
In [85]: %timeit collections.deque((x for x in iter(range(10000000)) if x%4 == 0), maxlen=0)
1 loops, best of 3: 1.33 s per loop
Apparently, 3.x range really is a bit slower than 2.x xrange. And the OP's xrange function has nothing to do with it. (Not surprising, as a one-time call to the __iter__ slot isn't likely to be visible among 10000000 calls to whatever happens in the loop, but someone brought it up as a possibility.)
But it's only 30% slower. How did the OP get 2x as slow? Well, if I repeat the same tests with 32-bit Python, I get 1.58 vs. 3.12. So my guess is that this is yet another of those cases where 3.x has been optimized for 64-bit performance in ways that hurt 32-bit.
But does it really matter? Check this out, with 3.3.0 64-bit again:
In [86]: %timeit [x for x in range(10000000) if x%4 == 0]
1 loops, best of 3: 3.65 s per loop
So, building the list takes more than twice as long as the entire iteration.
And as for "consumes much more resources than Python 2.6+": from my tests, it looks like a 3.x range is exactly the same size as a 2.x xrange, and even if it were 10x as big, building the unnecessary list is still about 10000000x more of a problem than anything the range iteration could possibly do.
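You can check the size claim yourself with sys.getsizeof; the exact numbers depend on the build, but the shape of the result doesn't:
import sys
r = range(10000000)
print(sys.getsizeof(r))        # a few dozen bytes: the range stores only start/stop/step
print(sys.getsizeof(list(r)))  # tens of megabytes: ~8 bytes per element for the pointer array alone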
And what about an explicit for loop instead of the C loop inside deque?
In [87]: def consume(x):
   ....:     for i in x:
   ....:         pass
In [88]: %timeit consume(x for x in range(10000000) if x%4 == 0)
1 loops, best of 3: 1.85 s per loop
So, almost as much time wasted in the for statement as in the actual work of iterating the range.
If you're worried about optimizing the iteration of a range object, you're probably looking in the wrong place.
Meanwhile, you keep asking why xrange was removed, no matter how many times people tell you the same thing, but I'll repeat it again: it was not removed; it was renamed to range, and the 2.x range is what was removed.
Here's some proof that the 3.3 range object is a direct descendant of the 2.x xrange object (and not of the 2.x range function): the source to 3.3 range and 2.7 xrange. You can even see the change history (linked to, I believe, the change that replaced the last instance of the string "xrange" anywhere in the file).
So, why is it slower?
Well, for one, they've added a lot of new features. For another, they've made all kinds of changes all over the place (especially inside iteration) that have minor side effects. And there has been a lot of work to dramatically optimize various important cases, even if it sometimes slightly pessimizes less important cases. Add this all up, and I'm not surprised that iterating a range as fast as possible is now a bit slower. It's one of those less-important cases that nobody would ever care enough to focus on. No one is likely to ever have a real-life use case where this performance difference is the hotspot in their code.