MATLAB twice as fast as Numpy

Question:

I am an engineering grad student currently making the transition from MATLAB to Python for numerical simulation. I was under the impression that for basic array manipulation Numpy would be as fast as MATLAB, but for two different programs I have written, MATLAB comes out a little under twice as fast as Numpy. The test code I am using for Numpy (Python 3.3) is:

import numpy as np
import time

a = np.random.rand(5000,5000,3)

tic = time.time()
a[:,:,0] = a[:,:,1]
a[:,:,2] = a[:,:,0]
a[:,:,1] = a[:,:,2]
toc = time.time() - tic
print(toc)

Whereas for MATLAB 2012a I am using:

a = rand(5000,5000,3);

tic;
a(:,:,1) = a(:,:,2);
a(:,:,3) = a(:,:,1);
a(:,:,2) = a(:,:,3);
toc

The algorithm I am using is the one from a NASA website comparing Numpy and MATLAB, which shows Numpy surpassing MATLAB in speed for this algorithm. Yet my results show a 0.49 s simulation time for Numpy and a 0.29 s simulation time for MATLAB. I have also run a Gauss-Seidel solver in both Numpy and MATLAB and see a similar gap (16.5 s vs. 9.5 s).

I am brand new to Python and not particularly experienced with programming. I am using the WinPython 64-bit Python distribution but have also tried Pythonxy to no avail.

One thing I have read which should improve performance is building Numpy using MKL. Unfortunately I have no idea how to do this on Windows. Do I even need to do this?

Any suggestions?

Asked By: nicholls


Answers:

You are attempting to recreate the NASA experiment; however, you have changed many of the variables. For example:

  • Your hardware and operating system are different (www.nccs.nasa.gov/dali_front.html)
  • Your Python version is different (2.5.3 vs 3.3)
  • Your MATLAB version is different (2008 vs 2012)

Assuming the NASA results are correct, the difference in results is due to one or more of these changed variables. I recommend you:

  • Retest with the SciPy prebuilt binaries.
  • Research if any improvements were made to MATLAB relative to this type of calculation.

Also, you may find this link useful.

Answered By: R Dub

That comparison ends up being apples to oranges because of caching: it is more efficient to transfer, or do some work on, contiguous chunks of memory. This particular benchmark is memory bound, since no actual computation is done, so the cache-hit rate is the key to good performance.

Matlab lays out the data in column-major order (Fortran order), so a(:,:,k) is a contiguous chunk of memory, which is fast to copy.

Numpy defaults to row-major order (C order), so in a[:,:,k] there are big jumps between consecutive elements, which slows down the memory transfer. However, the data layout can be chosen: on my laptop, creating the array with a = np.asfortranarray(np.random.rand(5000,5000,3)) leads to a 5x speed-up (1 s vs. 0.19 s).
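
For reference, here is a minimal sketch of that comparison based on the code in the question (the three_copies helper is just a name introduced here for illustration, and the exact timings will of course depend on your machine):

import time
import numpy as np

def three_copies(a):
    # The three slice assignments from the benchmark in the question.
    a[:, :, 0] = a[:, :, 1]
    a[:, :, 2] = a[:, :, 0]
    a[:, :, 1] = a[:, :, 2]

# Default C (row-major) layout: the elements of a[:, :, k] are strided in memory.
a_c = np.random.rand(5000, 5000, 3)
print("C order, slice contiguous?", a_c[:, :, 0].flags['C_CONTIGUOUS'])

# Fortran (column-major) layout, like MATLAB: a[:, :, k] is one contiguous block.
a_f = np.asfortranarray(a_c)
print("F order, slice contiguous?", a_f[:, :, 0].flags['F_CONTIGUOUS'])

for label, a in [("C order", a_c), ("F order", a_f)]:
    tic = time.time()
    three_copies(a)
    print(label, time.time() - tic)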

This result should be very similar for both numpy-MKL and plain numpy, because MKL is a fast BLAS/LAPACK implementation and here you are not calling any function that uses it (MKL definitely helps when solving linear systems, computing dot products, and so on).
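
As a rough sketch of where it does and does not matter (sizes here are arbitrary), you can check which BLAS/LAPACK library your build links against and compare a BLAS-backed operation with a plain copy:

import time
import numpy as np

# Show which BLAS/LAPACK library (MKL, OpenBLAS, ...) this numpy build uses.
np.show_config()

n = 2000
x = np.random.rand(n, n)
y = np.random.rand(n, n)

# A matrix product is dispatched to BLAS, so MKL can speed it up.
tic = time.time()
z = x.dot(y)
print("dot: ", time.time() - tic)

# A plain slice copy never touches BLAS, so MKL cannot help it.
tic = time.time()
x[:, :] = y[:, :]
print("copy:", time.time() - tic)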

I don’t really know what is going on with the Gauss-Seidel solver, but some time ago I wrote an answer to a question titled Numpy running at half the speed of MATLAB that talks a bit about MKL, FFT and Matlab’s JIT.

Answered By: jorgeca