Speeding up build process with distutils

Question:

I am programming a C++ extension for Python and I am using distutils to compile the project. As the project grows, rebuilding it takes longer and longer. Is there a way to speed up the build process?

I read that parallel builds (as with make -j) are not possible with distutils. Are there any good alternatives to distutils which might be faster?

I also noticed that it’s recompiling all object files every time I call python setup.py build, even when I only changed one source file. Should this be the case or might I be doing something wrong here?

In case it helps, here are some of the files which I try to compile: https://gist.github.com/2923577

Thanks!

Asked By: Lucas

Answers:

In the limited examples you provided in the link, it seems fairly obvious that you have some misunderstanding of what certain features of the language do. For example, gsminterface.h has a whole lot of namespace-level statics, which is probably unintended. Every translation unit that includes that header will compile its own version of every one of the symbols declared there. The side effects of this are not just longer compile times, but also code bloat (larger binaries) and longer link times, as the linker needs to process all of those symbols.

There are still many unanswered questions that affect the build process, for example whether you clean every time before you recompile. If you are doing that, then you might want to consider ccache, a tool that caches the results of the build process, so that if you run make clean; make target, only the preprocessor will be run for any translation unit that has not changed. Note that as long as you keep most of the code in headers, this will not offer much of an advantage, as a change in a header invalidates all translation units that include it. (I don't know your build system, so I cannot tell you whether python setup.py build cleans or not.)

The project does not seem large otherwise, so I would be surprised if it took more than a few seconds to compile.

  1. Try building with the environment variable CC="ccache gcc"; that will speed up the build significantly when the source has not changed. (Strangely, distutils uses CC for C++ source files as well.) Install the ccache package, of course. See the sketch after this item for setting the variable from setup.py itself.
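
A minimal sketch of doing the same from within setup.py, assuming gcc and ccache are installed (this snippet is an illustration, not part of the original answer):

    # Set CC before calling setup(); distutils reads it from the
    # environment when it configures the compiler.
    import os
    os.environ.setdefault('CC', 'ccache gcc')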

  2. Since you have a single extension which is assembled from multiple compiled object files, you can monkey-patch distutils to compile those in parallel (they are independent) – put this into your setup.py (adjust the N=2 as you wish):

    # monkey-patch for parallel compilation
    import distutils.ccompiler
    import multiprocessing.pool

    def parallelCCompile(self, sources, output_dir=None, macros=None,
                         include_dirs=None, debug=0, extra_preargs=None,
                         extra_postargs=None, depends=None):
        # these lines are copied from distutils.ccompiler.CCompiler directly
        macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
            output_dir, macros, include_dirs, sources, depends, extra_postargs)
        cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)

        # parallel code
        N = 2  # number of parallel compilations

        def _single_compile(obj):
            # look up the source file for this object; skip if already built
            try:
                src, ext = build[obj]
            except KeyError:
                return
            self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)

        # convert to list; imap is evaluated on demand
        list(multiprocessing.pool.ThreadPool(N).imap(_single_compile, objects))
        return objects

    distutils.ccompiler.CCompiler.compile = parallelCCompile
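
N is hard-coded to 2 above; a common tweak (an assumption on my part, not something the original answer does) is to size the pool to the machine:

    import multiprocessing
    N = multiprocessing.cpu_count()  # one compile job per core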
    
  3. For the sake of completeness, if you have multiple extensions, you can use the following solution:

    import os
    import multiprocessing
    try:
        from concurrent.futures import ThreadPoolExecutor as Pool
    except ImportError:
        from multiprocessing.pool import ThreadPool as LegacyPool
    
        # To ensure the with statement works. Required for some older 2.7.x releases
        class Pool(LegacyPool):
            def __enter__(self):
                return self
    
            def __exit__(self, *args):
                self.close()
                self.join()
    
    def build_extensions(self):
        """Function to monkey-patch
        distutils.command.build_ext.build_ext.build_extensions
    
        """
        self.check_extensions_list(self.extensions)
    
        try:
            num_jobs = os.cpu_count()
        except AttributeError:
            num_jobs = multiprocessing.cpu_count()
    
        with Pool(num_jobs) as pool:
            pool.map(self.build_extension, self.extensions)
    
    def compile(
        self, sources, output_dir=None, macros=None, include_dirs=None,
        debug=0, extra_preargs=None, extra_postargs=None, depends=None,
    ):
        """Function to monkey-patch distutils.ccompiler.CCompiler"""
        macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
            output_dir, macros, include_dirs, sources, depends, extra_postargs
        )
        cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)
    
        for obj in objects:
            try:
                src, ext = build[obj]
            except KeyError:
                continue
            self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
    
        # Return *all* object filenames, not just the ones we just built.
        return objects
    
    
    from distutils.ccompiler import CCompiler
    from distutils.command.build_ext import build_ext
    build_ext.build_extensions = build_extensions
    CCompiler.compile = compile
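
As a usage sketch, this pays off when setup() builds several independent extensions; the module names and source files here are hypothetical:

    from distutils.core import setup, Extension

    setup(
        name='example',  # hypothetical project name
        ext_modules=[
            Extension('fast_a', sources=['fast_a.cpp']),
            Extension('fast_b', sources=['fast_b.cpp']),
        ],
    )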
    
Answered By: eudoxos

I’ve got this working on Windows with clcache, derived from eudoxos’s answer:

# Python modules
import datetime
import distutils
import distutils.ccompiler
import distutils.sysconfig
import multiprocessing
import multiprocessing.pool
import os
import sys

from distutils.core import setup
from distutils.core import Extension
from distutils.errors import CompileError
from distutils.errors import DistutilsExecError

now = datetime.datetime.now

ON_LINUX = "linux" in sys.platform

N_JOBS = 4

#------------------------------------------------------------------------------
# Enable ccache to speed up builds

if ON_LINUX:
    os.environ['CC'] = 'ccache gcc'

# Windows
else:

    # Using clcache.exe, see: https://github.com/frerich/clcache

    # Insert path to clcache.exe into the path.

    prefix = os.path.dirname(os.path.abspath(__file__))
    path = os.path.join(prefix, "bin")

    print "Adding %s to the system path." % path
    os.environ['PATH'] = '%s;%s' % (path, os.environ['PATH'])

    clcache_exe = os.path.join(path, "clcache.exe")

#------------------------------------------------------------------------------
# Parallel Compile
#
# Reference:
#
# http://stackoverflow.com/questions/11013851/speeding-up-build-process-with-distutils
#

def linux_parallel_cpp_compile(
        self,
        sources,
        output_dir=None,
        macros=None,
        include_dirs=None,
        debug=0,
        extra_preargs=None,
        extra_postargs=None,
        depends=None):

    # Copied from distutils.ccompiler.CCompiler

    macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
        output_dir, macros, include_dirs, sources, depends, extra_postargs)

    cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)

    def _single_compile(obj):

        try:
            src, ext = build[obj]
        except KeyError:
            return

        self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)

    # convert to list, imap is evaluated on-demand

    list(multiprocessing.pool.ThreadPool(N_JOBS).imap(
        _single_compile, objects))

    return objects


def windows_parallel_cpp_compile(
        self,
        sources,
        output_dir=None,
        macros=None,
        include_dirs=None,
        debug=0,
        extra_preargs=None,
        extra_postargs=None,
        depends=None):

    # Copied from distutils.msvc9compiler.MSVCCompiler

    if not self.initialized:
        self.initialize()

    macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
        output_dir, macros, include_dirs, sources, depends, extra_postargs)

    compile_opts = extra_preargs or []
    compile_opts.append('/c')

    if debug:
        compile_opts.extend(self.compile_options_debug)
    else:
        compile_opts.extend(self.compile_options)

    def _single_compile(obj):

        try:
            src, ext = build[obj]
        except KeyError:
            return

        input_opt = "/Tp" + src
        output_opt = "/Fo" + obj
        try:
            self.spawn(
                [clcache_exe]
                + compile_opts
                + pp_opts
                + [input_opt, output_opt]
                + extra_postargs)

        except DistutilsExecError as msg:
            raise CompileError(msg)

    # convert to list, imap is evaluated on-demand

    list(multiprocessing.pool.ThreadPool(N_JOBS).imap(
        _single_compile, objects))

    return objects

#------------------------------------------------------------------------------
# Only enable parallel compile on Python 2.7

# Compare major and minor version; checking the minor version alone
# would also match Python 3.7.
if sys.version_info[:2] == (2, 7):

    if ON_LINUX:
        distutils.ccompiler.CCompiler.compile = linux_parallel_cpp_compile

    else:
        import distutils.msvccompiler
        import distutils.msvc9compiler

        distutils.msvccompiler.MSVCCompiler.compile = windows_parallel_cpp_compile
        distutils.msvc9compiler.MSVCCompiler.compile = windows_parallel_cpp_compile

# ... call setup() as usual
Answered By: Nick

You can do this easily if you have NumPy 1.10 or newer available. Just add:

    try:
        from numpy.distutils.ccompiler import CCompiler_compile
        import distutils.ccompiler
        distutils.ccompiler.CCompiler.compile = CCompiler_compile
    except ImportError:
        print("Numpy not found, parallel compile not available")

Then pass -j N to the build command, or set the NPY_NUM_BUILD_JOBS environment variable.
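
A minimal self-contained setup.py sketch using this; the extension name and source file are made up for illustration:

    import distutils.ccompiler
    from distutils.core import setup, Extension

    try:
        # NumPy >= 1.10 ships a parallel CCompiler.compile implementation
        from numpy.distutils.ccompiler import CCompiler_compile
        distutils.ccompiler.CCompiler.compile = CCompiler_compile
    except ImportError:
        print("Numpy not found, parallel compile not available")

    setup(
        name='example',  # hypothetical
        ext_modules=[Extension('example', sources=['example.cpp'])],
    )
    # e.g. run: NPY_NUM_BUILD_JOBS=4 python setup.py build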

Answered By: Henry Schreiner