Why does a python function work in parallel even if it should not?

Question:

I am running this code using the healpy package. I am not using multiprocessing, and I need it to run on a single core. It worked for some time, but now, when I run it, the function healpy.projector.GnomonicProj.projmap takes all the available cores.

This is the offending code block:

import sys
import numpy as np
import healpy as hp
import matplotlib.pyplot as plt
from healpy import pixelfunc

def Stacking():

    f = lambda x, y, z: pixelfunc.vec2pix(xsize, x, y, z, nest=False)
    map_array = pixelfunc.ma_to_array(data)
    im = np.zeros((xsize, xsize))
    plt.figure()

    for i in range(nvoids):
        # "\r" returns the cursor to the start of the line, so this acts as a progress counter
        sys.stdout.write("\r" + str(i+1) + "/" + str(nvoids))
        sys.stdout.flush()
        proj = hp.projector.GnomonicProj(rot=[rav[i], decv[i]], xsize=xsize,
                                         reso=2*nRad*rad_deg[i]*60/xsize)
        im += proj.projmap(map_array, f)

    im /= nvoids
    plt.imshow(im)
    plt.colorbar()
    plt.title(title + " (Map)")
    plt.savefig("../Plots/stackedMap_" + name + ".png")

    return im

Does anyone know why this function is running in parallel? And more importantly, does anyone know a way to run it on a single core?

Thank you!

Asked By: Simone Sartori


Answers:

In this thread they recommend setting the environment variable OMP_NUM_THREADS accordingly:

Worked with:

import os
os.environ['OMP_NUM_THREADS'] = '1'
import healpy as hp
import numpy as np

Setting os.environ['OMP_NUM_THREADS'] = '1' has to be done before importing the numpy and healpy libraries.
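If numpy's linear algebra backend also spawns threads, it may help to limit it in the same way. A minimal sketch; the OPENBLAS and MKL variables are assumptions that only apply if numpy is linked against those libraries:

```python
import os

# Thread-count variables are read when the compiled libraries are loaded,
# so they must be set before the first import of numpy or healpy.
os.environ["OMP_NUM_THREADS"] = "1"       # OpenMP (healpy's compiled routines)
os.environ["OPENBLAS_NUM_THREADS"] = "1"  # OpenBLAS, if numpy is linked against it
os.environ["MKL_NUM_THREADS"] = "1"       # Intel MKL, if present

# ...now import numpy / healpy and call Stacking() as usual.
print(os.environ["OMP_NUM_THREADS"])
```

Putting these lines at the very top of the script (before any other import) is the safest way to guarantee the ordering.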

As to the why: they probably use some parallelization technique wrapped inside their implementation of the functions you call. Judging by the name of the variable, I would guess it is OpenMP.

Answered By: moooeeeep