Killing child process/task without killing main in Python using Pool Executor

Question:

I am trying to implement a way to force-stop child processes/tasks that have been started with ThreadPoolExecutor / ProcessPoolExecutor. I would like a cross-platform implementation (Windows and Linux).

When the signal is triggered from main, the main process exits as well, and I do NOT want that; only the child should exit.

What is the correct way to force the child to quit? I do NOT want to use an Event, because as the following example shows, an inner loop can block and never reach the event.is_set() check again,
e.g.:

while not event.is_set():
    # Do stuff
    while waiting_for_something:
        # Blocked here; the outer event.is_set() check is never reached

Here is the code I am using, but I am missing something and I don’t know what:

import os
import signal
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import time


def handler(signum, frame):
    print(signum, os.getpid())
    os.kill(os.getpid(), signal.SIGINT)


class asd:
    def __init__(self):
        pass

    def run(self):
        signal.signal(signal.SIGBREAK, handler)  # note: SIGBREAK exists only on Windows
        while True:
            print('running thread', os.getpid())
            time.sleep(1)
            while True:
                print('running 2 ', os.getpid())
                time.sleep(1)
            print("after while")


if __name__ == "__main__":
    t1 = asd()
    pool = ProcessPoolExecutor(max_workers=4)
    # pool = ThreadPoolExecutor(max_workers=4)
    pool.submit(t1.run)

    print('running main', os.getpid())

    time.sleep(3)

    signal.raise_signal(signal.SIGBREAK)  # this raises the signal in the MAIN process, not in the workers

    while True:
        print("after killing process")
        time.sleep(1)

Thank you!

Asked By: Geani Orlando


Answers:

You are sending the signal to your main Python process, not to the children.

In order to send signals to your children you need their PIDs, which are not exposed by the concurrent.futures module. Instead you should use multiprocessing.Pool; then you can get the PIDs of the children and send the signal to them using os.kill.
Just remember to eventually call pool.terminate() to guarantee resource cleanup.

import os
import signal
import time
import multiprocessing

def handler(signum, frame):
    print(signum, os.getpid())
    os.kill(os.getpid(), signal.SIGINT)


class asd:
    def __init__(self):
        pass

    def run(self):
        signal.signal(signal.SIGBREAK, handler)
        while True:
            print('running thread', os.getpid())
            time.sleep(1)
            while True:
                print('running 2 ', os.getpid())
                time.sleep(1)
            print("after while")

if __name__ == "__main__":
    t1 = asd()
    pool = multiprocessing.Pool(4)

    # Pool spawns its workers eagerly, so they are already listed as active children
    children = multiprocessing.active_children()
    res = pool.apply_async(t1.run)
    print('running main', os.getpid())

    time.sleep(3)

    for child in children:
        os.kill(child.pid,signal.SIGBREAK)

    while True:
        print("after killing process")
        time.sleep(1)

with the result:

running main 16860
running thread 14212
running 2  14212
running 2  14212
after killing process
after killing process
after killing process
Answered By: Ahmed AEK

You can take a look at pebble, which was designed to solve this problem transparently for the user.

It provides concurrent.futures-compatible APIs and allows you to end a processing job either by cancelling the returned Future object or by setting a computation timeout.
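For contrast, the standard library's ProcessPoolExecutor cannot end a task once it has started: Future.cancel() only succeeds while the task is still queued. A minimal sketch illustrating the limitation:

```python
import time
from concurrent.futures import ProcessPoolExecutor


def busy():
    time.sleep(5)


# on Windows/macOS (spawn start method) wrap the code below in
# an `if __name__ == "__main__":` guard
with ProcessPoolExecutor(max_workers=1) as pool:
    future = pool.submit(busy)
    time.sleep(1)  # give the task time to start running
    could_cancel = future.cancel()  # False once the task is running
```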

import time
from pebble import ProcessPool
from concurrent.futures import TimeoutError

TIMEOUT = 5

def function(sleep):
    while True:
        time.sleep(sleep)

with ProcessPool() as pool:
    future = pool.submit(function, TIMEOUT, 1)  # second argument is the timeout; the rest are passed to the function

assert isinstance(future.exception(), TimeoutError)

Note that you cannot stop running threads in Python, so only process pools can support such functionality.

Answered By: noxdafox