built-in range or numpy.arange: which is more efficient?
Question:
When iterating over a large array with a range expression, should I use Python's built-in range function, or numpy's arange, to get the best performance?
My reasoning so far:
range probably resorts to a native implementation and might therefore be faster. On the other hand, arange returns a full array, which occupies memory, so there might be an overhead. Python 3's range is a lazy sequence that does not hold all the values in memory.
Answers:
First of all, as written by @bmu, you should use combinations of vectorized calculations, ufuncs and indexing. There are indeed some cases where explicit looping is required, but those are really rare.
If an explicit loop is needed, then with Python 2.6 and 2.7 you should use xrange (see below). In Python 3, range behaves like Python 2's xrange: it returns a lazy range object instead of a list, so plain range is fine there.
Now, you can try it yourself, using timeit (here via IPython's %timeit magic function):
%timeit for i in range(1000000): pass
[out] 10 loops, best of 3: 63.6 ms per loop
%timeit for i in np.arange(1000000): pass
[out] 10 loops, best of 3: 158 ms per loop
%timeit for i in xrange(1000000): pass
[out] 10 loops, best of 3: 23.4 ms per loop
Again, as mentioned above, most of the time it is possible to use a numpy vector/array formula (or a ufunc, etc.), which runs at C speed: much faster. This is what we could call "vector programming". It makes a program easier to implement (and more readable) than C, while ending up almost as fast.
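For instance, here is a minimal sketch (my own illustration, not from the benchmark above) of replacing an explicit loop with vectorized boolean indexing and a ufunc:
import numpy as np

a = np.array([-2.0, 1.5, 3.0, -0.5, 4.0])

# explicit loop: one Python-level iteration per element (slow)
for i in range(len(a)):
    if a[i] > 0:
        a[i] = a[i] ** 0.5

# vectorized equivalent: boolean indexing plus the np.sqrt ufunc,
# both running at C speed
b = np.array([-2.0, 1.5, 3.0, -0.5, 4.0])
b[b > 0] = np.sqrt(b[b > 0])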
For large arrays, a vectorised numpy operation is the fastest. If you must loop, prefer xrange/range and avoid np.arange.
In numpy you should use combinations of vectorized calculations, ufuncs and indexing to solve your problems, as these run at C speed. Looping over numpy arrays is inefficient by comparison. (About the worst thing you could do is iterate over the array with an index created with range or np.arange, as the first sentence of your question suggests, but I'm not sure if you really mean that.)
import numpy as np
import sys
sys.version
# out: '2.7.3rc2 (default, Mar 22 2012, 04:35:15) \n[GCC 4.6.3]'
np.version.version
# out: '1.6.2'
size = int(1E6)
%timeit for x in range(size): x ** 2
# out: 10 loops, best of 3: 136 ms per loop
%timeit for x in xrange(size): x ** 2
# out: 10 loops, best of 3: 88.9 ms per loop
# avoid this
%timeit for x in np.arange(size): x ** 2
# out: 1 loops, best of 3: 1.16 s per loop
# use this
%timeit np.arange(size) ** 2
# out: 100 loops, best of 3: 19.5 ms per loop
So in this case numpy is about 4 times faster than using xrange, if you do it right. Depending on your problem, numpy can give much more than a 4 or 5 times speedup.
The answers to this question explain some more advantages of using numpy arrays instead of python lists for large data sets.
First of all: range returns a lazy range object, while np.arange returns an np.array with fully allocated memory (as the OP already mentioned). To make both approaches comparable, we need to materialize the range as a list:
list(range(n))
When it comes to performance, it depends: for smaller (materialized) ranges, Python's range(...) is faster. However, numpy's np.arange(...) scales nicely and comes out ahead for larger ranges.
Please find the benchmark code here. (Run on a MacBook Pro M1 with Python 3.11 and numpy 1.23.5.)
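Since the linked benchmark code is not reproduced here, the following is a minimal sketch of such a comparison (my own, using timeit; exact numbers and the crossover point will vary by machine and versions):
import timeit
import numpy as np

# compare materializing a range as a list vs. allocating an array
for n in (10, 1_000, 1_000_000):
    t_range = timeit.timeit(lambda: list(range(n)), number=100)
    t_arange = timeit.timeit(lambda: np.arange(n), number=100)
    print(f"n={n}: list(range)={t_range:.4f}s  np.arange={t_arange:.4f}s")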
This question is driven very much by performance (which is completely valid). However, I find the usability/maintainability viewpoint equally important: importing numpy just to create a range would be terrible. And vice versa: heavily numpy-driven code that uses range feels wrong.
Also keep in mind that np.arange has better support for non-integer step sizes.
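For example (a quick illustration of the float-step point; np.linspace is the usual alternative when the step is fractional):
import numpy as np

np.arange(0.0, 1.0, 0.25)
# out: array([0.  , 0.25, 0.5 , 0.75])

# The built-in range only accepts integer steps:
# range(0, 1, 0.25) raises a TypeError

# For fractional steps, np.linspace avoids floating-point endpoint surprises:
np.linspace(0.0, 1.0, 5)
# out: array([0.  , 0.25, 0.5 , 0.75, 1.  ])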
To summarize: It depends. 🙂