Why Python is so slow for a simple for loop?

Question:

We are making some kNN and SVD implementations in Python. Others picked Java. Our execution times are very different. I used cProfile to see where I make mistakes but everything is quite fine actually. Yes, I use numpy also. But I would like to ask simple question.

total = 0.0
for i in range(9999): # xrange is slower according 
    for j in range(1, 9999):            #to my test but more memory-friendly.
        total += (i / j)
print total

This snippet takes 31.40s on my computer.

Java version of this code takes 1 second or less on the same computer. Type checking is a main problem for this code, I suppose. But I should make lots of operation like this for my project and I think 9999*9999 is not so big number.

I think I am making mistakes because I know Python is used by lots of scientific projects. But why is this code so slow and how can I handle problems bigger than this?

Should I use a JIT compiler such as Psyco?

EDIT

I also say that this loop problem is only an example. The code is not as simple as like this and It may be tough to put into practice your improvements/code samples.

Another question is that can I implement lots of data mining & machine learning algorithms with numpy and scipy if I use it correctly?

Asked By: Baskaya

||

Answers:

This is a known phenomenon — python code is dynamic and interpreted, java code is statically typed and compiled. No surprises there.

The reasons people give for preferring python are often:

  • smaller code base
  • less redundancy (more DRY)
  • cleaner code

However, if you use a library written in C (from python), the performance may be much better (compare: pickle to cpickle).

Answered By: Matt Fenwick

Because you mention scientific code, have a look at numpy. What you’re doing has probably already been done (or rather, it uses LAPACK for things like SVD). When you hear about python being used for scientific code, people probably aren’t referring to using it in the way you do in your example.

As a quick example:

(If you’re using python3, your example would use float division. My example assumes you’re using python2.x, and therefore integer division. If not, specify i = np.arange(9999, dtype=np.float), etc)

import numpy as np
i = np.arange(9999)
j = np.arange(1, 9999)
print np.divide.outer(i,j).sum()

To give some idea of timing… (I’ll use floating point division here, instead of integer division as in your example):

import numpy as np

def f1(num):
    total = 0.0
    for i in range(num): 
        for j in range(1, num):
            total += (float(i) / j)
    return total

def f2(num):
    i = np.arange(num, dtype=np.float)
    j = np.arange(1, num, dtype=np.float)
    return np.divide.outer(i, j).sum()

def f3(num):
    """Less memory-hungry (and faster) version of f2."""
    total = 0.0
    j = np.arange(1, num, dtype=np.float)
    for i in xrange(num):
        total += (i / j).sum()
    return total

If we compare timings:

In [30]: %timeit f1(9999)
1 loops, best of 3: 27.2 s per loop

In [31]: %timeit f2(9999)
1 loops, best of 3: 1.46 s per loop

In [32]: %timeit f3(9999)
1 loops, best of 3: 915 ms per loop
Answered By: Joe Kington

You will find that list comprehensions or generator expressions are significantly faster. For example:

total = sum(i / j for j in xrange(1, 9999) for i in xrange(9999))

This executes in ~11 seconds on my machine vs. ~26 for your original code. Still an order of magnitude slower than the Java, but that’s more in line with what you’d expect.

Your original code can, by the way, be sped up slightly by initializing total to 0 rather than 0.0 to use integer rather than floating-point addition. Your divisions all have integer results, so there is no point in summing the results to a float.

On my machine, Psyco actually slows down the generator expressions to about the same speed as your original loop (which it does not accelerate at all).

Answered By: kindall

Why is Java faster than Python on this example loops?

Novice Explanation: Think of a program like a freight train that lays its own train-track as it moves forward. Track must be laid before the train can move. The Java Freight train can send thousands of track-layers ahead of the train, all working in parallel laying track many miles in advance, wheras python can only send one laboror at a time, and can only lay track 10 feet in front of where the train is.

Java has strong types and that affords the compiler to use JIT features: (https://en.wikipedia.org/wiki/Just-in-time_compilation) which enable the CPU to fetch memory and execute instructions in the future in parallel, before the instruction is needed. Java can ‘sort of’ run the instructions in your for loop in parallel with itself. Python has no concrete types and so the nature of the work to be done has to be decided at every instruction. This causes your entire computer to stop and wait for all the memory in all of your variables to be re-scanned. Meaning loops in python are polynomial O(n^2) time, wheras Java loops can be, and often are linear time O(n), due to strong types.

I think I am making mistakes because I know Python is used by lots of scientific projects.

They’re heavily using SciPy (NumPy being the most prominent component, but I’ve heard the ecosystem that developed around NumPy’s API is even more important) which vastly speeds up all kinds operations these projects need. There’s what you are doing wrong: You aren’t writing your critical code in C. Python is great for developing in general, but well-placed extension modules are a vital optimization in its own right (at least when you’re crunching numbers). Python is a really crappy language to implement tight inner loops in.

The default (and for the time being most popular and widely-supported) implementation is a simple bytecode interpreter. Even the simplest operations, like an integer division, can take hundreds of CPU cycles, multiple memory accesses (type checks being a popular example), several C function calls, etc. instead of a few (or even single, in the case of integer division) instruction. Moreover, the language is designed with many abstractions which add overhead. Your loop allocates 9999 objects on the heap if you use xrange – far more if you use range (99999999 integer minus around 256256 for small integers which are cached). Also, the xrange version calls a method on each iteration to advance – the range version would too if iteration over sequences hadn’t been optimized specifically. It still takes a whole bytecode dispatch though, which is itself vastly complex (compared to an integer division, of course).

It would be interesting to see what a JIT (I’d recommend PyPy over Psyco, the latter isn’t actively developed anymore and very limited in scope anyway – it might work well for this simple example though). After a tiny fraction of iterations, it should produce a nigh-optimal machine code loop augmented with a few guards – simple integer comparisions, jumping if they fail – to maintain correctness in case you got a string in that list. Java can do the same thing, only sooner (it doesn’t have to trace first) and with fewer guards (at least if you use ints). That’s why it’s so much faster.

Answered By: user395760

Using kindall’s list comprehension

total = sum(i / j for j in xrange(1, 9999) for i in xrange(9999))

is 10.2 seconds and using pypy 1.7 it is 2.5 seconds. It is funny because pypy speeds up original version to 2.5 seconds also. So for pypy list comprehensions would be premature optimization ;). Good job pypy!

Answered By: Pawel

Python for loops are statically typed and interpreted. Not compiled. Java is faster because it has extra JIT acceleration features that Python does not have.

http://en.wikipedia.org/wiki/Just-in-time_compilation

To illustrate just how massive a difference Java JIT makes, look at this python program that takes about 5 minutes:

if __name__ =='__main__':
    total = 0.0
    i=1
    while i<=9999:
        j=1
        while j<=9999:
            total=1
            j+=1
        i+=1
    print total

While this fundamentally equivalent Java program takes about 23 milliseconds:

public class Main{
    public static void main(String args[]){
        float total = 0f; 

        long start_time = System.nanoTime();
        int i=1;

        while (i<=9999){
            int j=1;
            while(j<=9999){
                total+=1;
                j+=1;
            }
            i+=1;
        }
        long end_time = System.nanoTime();

        System.out.println("total: " + total);
        System.out.println("total milliseconds: " + 
           (end_time - start_time)/1000000);
    }
}

In terms of doing anything in a for loop, Java cleans python’s clock by being between 1 and 1000 orders of magnitude faster.

Moral of the story: basic python for loops should be avoided at all costs if speedy performance is required. This could be because Guido van Rossum wants to encourage people to use multi-processor friendly constructs like array splicing, which operate faster than Java.

Answered By: spiderman

Doing scientific calculations with python often means using some calculation software written in C/C++ in the most crucial parts, with python as internal script language, as e.x. Sage (which contains also a lot of python code, too).

I think that this may be useful:
http://blog.dhananjaynene.com/2008/07/performance-comparison-c-java-python-ruby-jython-jruby-groovy/

As you can see, psyco/PyPy can bring a certain improvement, but still, it probably would be much slower than C++ or Java.

Answered By: Krystian

The benefit of Python is that there is a lot more flexibility (e.g. classes are objects) compared to Java (where you only have this reflection mechanism)

What’s not mentioned here is Cython. It allows to introduce typed variables and trans-compile your example to C/C++. Then it’s much faster. I’ve also changed the bounds in the loop …

from __future__ import division

cdef double total = 0.00
cdef int i, j
for i in range(9999):
    for j in range(1, 10000+i):
        total += (i / j)

from time import time
t = time()
print("total = %d" % total)
print("time = %f[s]" % (time() - t))

Followed by

$ cython loops.pyx
$ gcc -I/usr/include/python2.7 -shared -pthread -fPIC -fwrapv -Wall -fno-strict-aliasing -O3 -o loops.so loops.c
$ python -c "import loops"

gives

total = 514219068
time = 0.000047[s]
Answered By: Harald Schilly

Not sure if the recomendation has been made, but I like replacing for loops with list comprehension. Its faster, cleaner, and more pythonic.

http://www.pythonforbeginners.com/basics/list-comprehensions-in-python

Answered By: Jesse

I think NumPy can be faster than CPython for loops (I didn’t test in PyPy).

I want to start from Joe Kington’s code because this answer used NumPy.

%timeit f3(9999)
704 ms ± 2.33 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

by myself:

def f4(num):
    x=np.ones(num-1)
    y=np.arange(1,num)
    return np.sum(np.true_divide(x,y))*np.sum(y)

155 µs ± 284 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In addition, High School Mathematics can simplify the problem to computer.

Problem= (1+2+...+(num-1)) * (1/1+1/2+...+1/(num-1))
1+2+...+(num-1)=np.sum(np.arange(1,num))=num*(num-1)/2
1/1+1/2+...+1/(num-1)=np.true_divide (1,y)=np.reciprocal(y.astype(np.float64))

Therefore,

def f5(num):
    return np.sum(np.reciprocal(np.arange(1, num).astype(np.float64))) * num*(num-1)/2
%timeit f5(9999)
106 µs ± 615 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In addition, University Mathematics can simplify the problem to computer more.

1/1+1/2+...+1/(num-1)=np.log(num-1)+1/(2*num-2)+np.euler_gamma
(n>2)

np.euler_gamma: Euler-Mascheroni constant (0.57721566…)

Because of inaccuracy of Euler-Mascheroni constant in NumPy, You lose accuracy like
489223499.9991845 -> 489223500.0408554.
If You can ignore 0.0000000085% inaccuracy, You can save more time.

def f6(num):
    return (np.log(num-1)+1/(2*num-2)+np.euler_gamma)* num*(num-1)/2
%timeit f6(9999)
4.82 µs ± 29.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

Benefit of NumPy becomes larger with larger input.

%timeit f3(99999)
56.7 s ± 590 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit f5(99999)
534 µs ± 86.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit f5(99999999)
1.42 s ± 15.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
9.498947911958**416**e+16
%timeit f6(99999999)
4.88 µs ± 26.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
9.498947911958**506**e+16
%timeit f6(9999999999999999999)
17.9 µs ± 921 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In special case, You can use numba (Unfortunately not always).

from numba import jit
@jit
def f7(num):
    return (np.log(num-1)+1/(2*num-2)+np.euler_gamma)* num*(num-1)/2
# same code with f6(num)

%timeit f6(999999999999999)
5.63 µs ± 29.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
f7(123) # compile f7(num)
%timeit f7(999999999999999)
331 ns ± 1.9 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit f7(9999)
286 ns ± 3.09 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

So, I recommend to use NumPy, mathematics and numba together.

Answered By: Mason Ji Ming

If you use While Loops instead of For Loops the execution will be much much faster (tested in Python 3). It will run as faster as a compiled C program that do the same thing.
Try the following examples (MIPS calculation is only indicative because does not consider the processor’s architecture etc. etc.):


Python 3 Program


import time


N=100
i=0
j=0

StartTime=time.time()
while j<N:
    j=j+1
    while i<1000000:
        a=float(i)/float(j)
        i=i+1
EndTime=time.time()

DeltaTime=(EndTime-StartTime) # time in seconds


MIPS=(1/DeltaTime)*N



print("This program estimates the MIPS that your computational unit can perform")
print("------------------------------------------")
print("Execution Time in Seconds=",DeltaTime)
print("MIPS=",MIPS) 
print("------------------------------------------")



C Program

#include <stdio.h>
#include <time.h>


int main(){

int i,j;
int N=100;
float a, DeltaTime, MIPS;
clock_t StartTime, EndTime;

StartTime=clock();

// This calculates n-time one million divisions

for (j=1;j<N; j++)
 {
    for(i=1;i<1000000;i=i+1)
     {
      a=(float)(i)/(float)(j);
     }
 }


EndTime=clock(); // measures time in microseconds

DeltaTime=(float)(EndTime - StartTime)/1000000;

MIPS=(1/DeltaTime)*N;

printf("------------------------------------------n");
printf("Execution Time in Seconds=%f n", DeltaTime);
printf("MIPS=%f n", MIPS);
printf("------------------------------------------n");

return 0;

}  
Categories: questions Tags: , ,
Answers are sorted by their score. The answer accepted by the question owner as the best is marked with
at the top-right corner.