python multiprocessing vs threading for cpu bound work on windows and linux

Question:

So I knocked together some test code to see how the multiprocessing module scales on CPU-bound work compared to threading. On Linux I get the performance increase that I’d expect:

Linux (dual quad-core Xeon):
serialrun took 1192.319 ms
parallelrun took 346.727 ms
threadedrun took 2108.172 ms

My dual-core MacBook Pro shows the same behavior:

OS X (dual-core MacBook Pro):
serialrun took 2026.995 ms
parallelrun took 1288.723 ms
threadedrun took 5314.822 ms

I then went and tried it on a Windows machine and got some very different results.

Windows (i7 920):
serialrun took 1043.000 ms
parallelrun took 3237.000 ms
threadedrun took 2343.000 ms

Why, oh why, is the multiprocessing approach so much slower on Windows?

Here’s the test code:

#!/usr/bin/env python

import multiprocessing
import threading
import time

def print_timing(func):
    def wrapper(*arg):
        t1 = time.time()
        res = func(*arg)
        t2 = time.time()
        print '%s took %0.3f ms' % (func.func_name, (t2-t1)*1000.0)
        return res
    return wrapper


def counter():
    for i in xrange(1000000):
        pass

@print_timing
def serialrun(x):
    for i in xrange(x):
        counter()

@print_timing
def parallelrun(x):
    proclist = []
    for i in xrange(x):
        p = multiprocessing.Process(target=counter)
        proclist.append(p)
        p.start()

    for i in proclist:
        i.join()

@print_timing
def threadedrun(x):
    threadlist = []
    for i in xrange(x):
        t = threading.Thread(target=counter)
        threadlist.append(t)
        t.start()

    for i in threadlist:
        i.join()

def main():
    serialrun(50)
    parallelrun(50)
    threadedrun(50)

if __name__ == '__main__':
    main()
Asked By: manghole


Answers:

It’s been said that creating processes on Windows is more expensive than on Linux. If you search around the site you will find some information on this; here’s one question I found easily.

Answered By: Duck
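
To get a feel for that cost on a given machine, here is a minimal sketch (mine, not from the answer above; the function names are illustrative) that times nothing but starting and joining do-nothing processes, so process-creation overhead can be measured on its own:

import multiprocessing
import time

def noop():
    pass

def time_process_startup(n=20):
    # Start and join n processes that do no work, so the measurement
    # isolates process-creation overhead from any actual computation.
    t1 = time.time()
    procs = [multiprocessing.Process(target=noop) for _ in xrange(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    t2 = time.time()
    print 'started and joined %d processes in %0.3f ms' % (n, (t2 - t1) * 1000.0)

if __name__ == '__main__':
    time_process_startup()

Run on Linux and Windows, the difference in this number alone should account for much of the gap seen in parallelrun().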

Processes are much more lightweight under UNIX variants. Windows processes are heavyweight and take much longer to start up. Threads are the recommended way of parallelizing on Windows.

Answered By: Byron Whitlock

The Python documentation for multiprocessing blames the lack of os.fork() for the extra overhead on Windows. It may be applicable here.
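
As an aside (not part of the original answer): without os.fork(), each child process on Windows starts a fresh interpreter and re-imports the parent module before running its target, which is where much of the extra cost goes. A minimal sketch that makes this visible:

import multiprocessing
import os

# On Linux this prints once (children are forked copies of the parent);
# on Windows it also prints once per child, because each child
# re-imports this module in a brand-new interpreter.
print 'importing module in process %d' % os.getpid()

def work():
    pass

if __name__ == '__main__':
    procs = [multiprocessing.Process(target=work) for _ in xrange(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()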

See what happens when you import psyco. First, easy_install it:

C:\Users\hughdbrown>\Python26\scripts\easy_install.exe psyco
Searching for psyco
Best match: psyco 1.6
Adding psyco 1.6 to easy-install.pth file

Using c:\python26\lib\site-packages
Processing dependencies for psyco
Finished processing dependencies for psyco

Add this to the top of your Python script:

import psyco
psyco.full()

I get these results without:

serialrun took 1191.000 ms
parallelrun took 3738.000 ms
threadedrun took 2728.000 ms

I get these results with:

serialrun took 43.000 ms
parallelrun took 3650.000 ms
threadedrun took 265.000 ms

Parallel is still slow, but the others burn rubber.

Edit: Also, try it with a multiprocessing Pool. (This is my first time trying this, and it is so fast I figure I must be missing something.)

@print_timing
def parallelpoolrun(reps):
    pool = multiprocessing.Pool(processes=4)
    # Note: counter() takes no arguments, and apply_async() returns
    # immediately, so this timing never waits for the work to finish.
    result = pool.apply_async(counter)

Results:

C:\Users\hughdbrown\Documents\python\StackOverflow>python 1289813.py
serialrun took 57.000 ms
parallelrun took 3716.000 ms
parallelpoolrun took 128.000 ms
threadedrun took 58.000 ms
Answered By: hughdbrown
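
What the answer above suspected it was missing is a wait on the result: apply_async() returns an AsyncResult immediately, so the timing never includes the actual work. Here is a minimal sketch (mine, not from the answer; the wrapper name is illustrative) of a pool version that waits for all x counter() calls, making it comparable to parallelrun(x), assuming it is pasted into the test script from the question:

def counter_ignore_arg(_):
    # Module-level wrapper so it can be pickled on Windows; counter()
    # itself takes no arguments, but pool.map passes one to each task.
    counter()

@print_timing
def parallelpoolmaprun(x):
    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    pool.map(counter_ignore_arg, xrange(x))  # blocks until all x tasks finish
    pool.close()
    pool.join()

This pays the process-startup cost only once per pool rather than once per counter() call, which is why it should still come out far ahead of parallelrun() on Windows.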

Currently, your counter() function is not modifying much state. Try changing counter() so that it modifies many pages of memory and then runs a CPU-bound loop, and see whether there is still a large disparity between Linux and Windows.

I’m not running Python 2.6 right now, so I can’t try it myself.

Answered By: Karl Voigtland
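
A rough sketch (assumptions mine, not from the answer above) of the kind of memory-touching counter() being suggested, as a drop-in replacement in the test script:

def counter():
    # Touch many pages of memory by writing to every slot of a large list,
    # then spin a CPU-bound loop, to see whether the Linux/Windows gap
    # changes when the work is no longer a bare empty loop.
    data = [0] * (4 * 1024 * 1024)  # tens of MB of list storage
    for i in xrange(len(data)):
        data[i] = i
    for i in xrange(1000000):
        pass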

Just starting the pool takes a long time. I have found in ‘real world’ programs that if I can keep a pool open and reuse it for many different tasks, passing the reference down through method calls (usually using map_async), then on Linux I save a few percent, but on Windows I can often halve the time taken. Linux is always quicker for my particular problems, but even on Windows I get net benefits from multiprocessing.

Answered By: Paul Wells
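
A minimal sketch (names and structure mine, not from the answer above) of that pool-reuse pattern: create the pool once, pass the reference down, and call map_async() on it for each batch so the startup cost is paid only once:

import multiprocessing

def square(n):
    return n * n

def run_batch(pool, items):
    # Reuse the already-started pool; map_async() returns an AsyncResult,
    # and get() blocks until the whole batch has finished.
    return pool.map_async(square, items).get()

if __name__ == '__main__':
    pool = multiprocessing.Pool()
    for batch in (range(10), range(10, 20), range(20, 30)):
        print run_batch(pool, batch)
    pool.close()
    pool.join()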