How can I "fire and forget" a task without blocking main thread?

Question:

What I have in mind is a very generic BackgroundTask class that can be used within webservers or standalone scripts, to schedule away tasks that don’t need to be blocking.

I don’t want to use any task queues (celery, rabbitmq, etc.) here because the tasks I’m thinking of are too small and fast to run. Just want to get them done as out of the way as possible. Would that be an async approach? Throwing them onto another process?

First solution I came up with that works:

import asyncio
import threading
import time
import typing
from typing import ParamSpec

# ThreadSafeSingleton (a thread-safe singleton metaclass) is assumed to be defined elsewhere.

# Need ParamSpec to get correct type hints in BackgroundTask init
P = ParamSpec("P")


class BackgroundTask(metaclass=ThreadSafeSingleton):
    """Easy way to create a background task that is not dependent on any webserver internals.

    Usage:
        async def sleep(t):
            time.sleep(t)

        BackgroundTask(sleep, 10) <- Creates async task and executes it separately (nonblocking, works with coroutines)
        BackgroundTask(time.sleep, 9) <- Creates async task and executes it separately (nonblocking, works with normal functions)
    """

    background_tasks = set()
    lock = threading.Lock()

    def __init__(self, func: typing.Callable[P, typing.Any], *args: P.args, **kwargs: P.kwargs) -> None:
        """Uses singleton instance of BackgroundTask to add a task to the async execution queue.

        Args:
            func (typing.Callable[P, typing.Any]): The function or coroutine to run in the background.
        """
        self.func = func
        self.args = args
        self.kwargs = kwargs
        self.is_async = asyncio.iscoroutinefunction(func)

    async def __call__(self) -> None:
        if self.is_async:
            with self.lock:
                task = asyncio.create_task(self.func(*self.args, **self.kwargs))
                self.background_tasks.add(task)
                print(len(self.background_tasks))
                task.add_done_callback(self.background_tasks.discard)

        # TODO: Create sync task (this will follow a similar pattern)


async def create_background_task(func: typing.Callable[P, typing.Any], *args: P.args, **kwargs: P.kwargs) -> None:
    b = BackgroundTask(func, *args, **kwargs)
    await b()


# Usage:
async def sleep(t):
    time.sleep(t)

await create_background_task(sleep, 5)

I think I missed the point by doing this though. If I ran this code along with some other async code, then yes, I would get a performance benefit since blocking operations aren’t blocking the main thread anymore.

I’m thinking I maybe need something more like a separate process to handle such background tasks without blocking the main thread at all (the above async code will still be run on the main thread).

Does it make sense to have a separate thread that handles background jobs? Like a simple job queue but very lightweight and does not require additional infrastructure?

Or does it make sense to create a solution like the one above?

I’ve seen that Starlette does something like this (https://github.com/encode/starlette/blob/decc5279335f105837987505e3e477463a996f3e/starlette/background.py#L15) but they await the background tasks AFTER a response is returned.

This makes their solution dependent on a web server design (i.e. doing things after response is sent is OK). I’m wondering if we can build something more generic where you can run background tasks in scripts or webservers alike, without sacrificing performance.

Not that familiar with async/concurrency features, so don’t really know how to compare these solutions. Seems like an interesting problem!

Here is what I came up with trying to perform the tasks on another process:


import asyncio
import concurrent.futures
import functools
import threading
import typing

# P (the ParamSpec) and ThreadSafeSingleton are the same as in the snippet above.


class BackgroundTask(metaclass=ThreadSafeSingleton):
    """Easy way to create a background task that is not dependent on any webserver internals.

    Usage:
        async def sleep(t):
            time.sleep(t)

        BackgroundTask(sleep, 10) <- Creates async task and executes it separately (nonblocking, works with coroutines)
        BackgroundTask(time.sleep, 9) <- Creates async task and executes it separately (nonblocking, works with normal functions)
        BackgroundTask(es.transport.close) <- Probably most common use in our codebase
    """

    background_tasks = set()
    executor = concurrent.futures.ProcessPoolExecutor(max_workers=2)
    lock = threading.Lock()

    def __init__(self, func: typing.Callable[P, typing.Any], *args: P.args, **kwargs: P.kwargs) -> None:
        """Uses singleton instance of BackgroundTask to add a task to the async execution queue.

        Args:
            func (typing.Callable[P, typing.Any]): The function or coroutine to run in the background.
        """
        self.func = func
        self.args = args
        self.kwargs = kwargs
        self.is_async = asyncio.iscoroutinefunction(func)

    async def __call__(self) -> None:
        if self.is_async:
            with self.lock:
                loop = asyncio.get_running_loop()
                with self.executor as pool:
                    result = await loop.run_in_executor(
                        pool, functools.partial(self.func, *self.args, **self.kwargs))

Asked By: Rami Awar


Answers:

You could try something like this:

import multiprocessing


class MPPool:
    def __init__(self, num=multiprocessing.cpu_count() - 1):
        self.pool = multiprocessing.Pool(num)

    def __call__(self, f, *args, **kwargs):
        self.pool.apply_async(f, args=args, kwds=kwargs)


def run_and_forget(f, *args, **kwargs):
    if "pool" not in run_and_forget.__dict__:
        run_and_forget.pool = MPPool()

    run_and_forget.pool(f, *args, **kwargs)


import time


# Defined at module level so the function can be pickled and sent to the
# worker processes (needed when the "spawn" start method is used).
def test(n):
    time.sleep(n)
    print(f"done {n}")


if __name__ == '__main__':
    for i in range(20):
        run_and_forget(test, i)
        print(f"passed {i}")

    time.sleep(50)
    print("end")

The function run_and_forget can be used anywhere (within a single process): the pool attribute behaves like a static member and is created on the first call.

This wasn’t fully tested, but I’ve included some quick test code to show how it works. The first thing that comes to mind is that it would be smart to clean up the multiprocessing pool before exiting.
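
One way to do that cleanup is to register an atexit hook that drains the pool before the interpreter exits. A minimal sketch, assuming the run_and_forget above (the _cleanup name is just for illustration):

import atexit


def _cleanup():
    # Only clean up if the lazily-created pool actually exists
    pool = getattr(run_and_forget, "pool", None)
    if pool is not None:
        pool.pool.close()  # stop accepting new work
        pool.pool.join()   # wait for already-submitted tasks to finish


atexit.register(_cleanup)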

Answered By: bobo

I’ll answer "what you’ve asked", but I’ll preface that you may be asking the wrong question due to a lack of understanding.

In Python stdlib, subprocess can spin up separate independent processes that behave like "fire and forget". Here’s a couple:

import os, subprocess
subprocess.Popen(['mkdir', 'foo'])
os.popen('touch answer_is_$((1 + 2))')
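
If the work itself is Python rather than a shell command, the same fire-and-forget idea can be sketched by launching a separate interpreter (the inline snippet is just a stand-in for real work):

import subprocess
import sys

# Launch a separate Python process and do not wait for it;
# the parent continues immediately.
subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(5); print('done')"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)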

It’d be much better to provide concrete examples of these "small and fast non-blocking tasks" you’d like to have, complete with the environment you’ll want them to be running in. You’re missing some understanding, which is evident because some of your statements conflict with each other. For example, asyncio and threading don’t operate like "fire and forget" at all.

Also, there’s not going to be a good way to "background within any context", because the differences between contexts matter, and "what’s best" depends on many factors.

Answered By: Kache

Your questions are quite abstract, so I’ll try to give general answers to all of them.

How can I "fire and forget" a task without blocking main thread?

It depends on what you mean by saying forget.

  • If you are not planning to access that task after running, you can run it in a parallel process.
  • If the main application should be able to access a background task, then you should have an event-driven architecture. In that case, the things previously called tasks will be services or microservices.

I don’t want to use any task queues (celery, rabbitmq, etc.) here because the tasks I’m thinking of are too small and fast to run. Just want to get them done as out of the way as possible. Would that be an async approach? Throwing them onto another process?

If the task contains loops or other CPU-bound operations, then it is right to use a subprocess. If the task makes a request, reads files, logs to stdout, or performs other I/O-bound operations, then it is right to use coroutines or threads.

Does it make sense to have a separate thread that handles background jobs? Like a simple job queue but very lightweight and does not require additional infrastructure?

We can’t simply use a thread, as it can be starved by another task performing CPU-bound operations (because of the GIL). Instead, we can run a background process and use pipes, queues, and events to communicate between processes. Unfortunately, we cannot share complex objects between processes, but we can pass basic data structures to track status changes of the tasks running in the background.
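
As a rough illustration of that idea (not a drop-in solution), a very lightweight job queue can be a single background process consuming (function, args) pairs from a multiprocessing.Queue; the worker and log_event names are hypothetical, and submitted functions must be picklable, i.e. defined at module level:

import multiprocessing


def worker(queue):
    # Background process: pull (func, args) jobs off the queue until told to stop
    while True:
        job = queue.get()
        if job is None:  # sentinel value: shut down
            break
        func, args = job
        func(*args)


def log_event(message):
    print(f"background: {message}")


if __name__ == "__main__":
    queue = multiprocessing.Queue()
    process = multiprocessing.Process(target=worker, args=(queue,), daemon=True)
    process.start()

    # "Fire and forget" from the main process's point of view
    queue.put((log_event, ("task 1",)))
    queue.put((log_event, ("task 2",)))

    queue.put(None)  # ask the worker to stop
    process.join()   # optional: wait for it before exiting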

Regarding the Starlette and the BackgroundTask

Starlette is a lightweight ASGI framework/toolkit, which is ideal for building async web services in Python. (README description)

It is based on concurrency. So even this is not a generic solution for all kinds of tasks.
NOTE: Concurrency differs from parallelism.

I’m wondering if we can build something more generic where you can run background tasks in scripts or webservers alike, without sacrificing performance.

The above-mentioned solution suggests using a background process. Still, it will depend on the application design, as you must do the things (emit an event, put an indicator on a queue, etc.) needed for communication and synchronization between the running processes (tasks). There is no generic tool for that, but there are situation-dependent solutions.

Situation 1 – The tasks are asynchronous functions

Suppose we have a request function that should call an API without blocking the work of other tasks. Also, we have a sleep function that should not block anything.

import asyncio
import aiohttp


async def request(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            try:
                return await response.json()
            except aiohttp.ContentTypeError:
                return await response.read()


async def sleep(t):
    await asyncio.sleep(t)


async def main():
    background_task_1 = asyncio.create_task(request("https://google.com/"))
    background_task_2 = asyncio.create_task(sleep(5))

    ...  # here we can do other work; the background tasks progress whenever we await

    result1 = await background_task_1

    ...  # use the 'result1', etc.

    await background_task_2


if __name__ == "__main__":
    asyncio.run(main())

In this situation, we use asyncio.create_task to run a coroutine concurrently (like in the background). Sure we could run it in a subprocess, but there is no reason for that as it would use more resources without improving the performance.

Situation 2 – The tasks are synchronous functions (I/O bound)

Unlike the first situation, where the functions were already asynchronous, in this situation they are synchronous but not CPU-bound (they are I/O-bound). This gives us the ability to run them in threads, or to make them asynchronous (using asyncio.to_thread) and run them concurrently.

import time
import asyncio
import requests


def asynchronous(func):
    """
    This decorator converts a synchronous function to an asynchronous one.

    Usage:
        @asynchronous
        def sleep(t):
            time.sleep(t)

        async def main():
            await sleep(5)
    """

    async def wrapper(*args, **kwargs):
        # Run the synchronous function in a worker thread and pass its result through
        return await asyncio.to_thread(func, *args, **kwargs)

    return wrapper


@asynchronous
def request(url):
    with requests.Session() as session:
        response = session.get(url)
        try:
            return response.json()
        except requests.JSONDecodeError:
            return response.text


@asynchronous
def sleep(t):
    time.sleep(t)

    
async def main():
    background_task_1 = asyncio.create_task(request("https://google.com/"))
    background_task_2 = asyncio.create_task(sleep(5))
    ...

Here we used a decorator to convert a synchronous (I/O bound) function to an asynchronous one and use them like in the first situation.

Situation 3 – The tasks are synchronous functions (CPU-bound)

To run CPU-bound tasks in parallel in the background, we have to use multiprocessing. And to ensure the task is done, we use the join method.

import time
import multiprocessing


def task():
    for i in range(10):
        time.sleep(0.3)


def main():
    background_task = multiprocessing.Process(target=task)
    background_task.start()

    ...  # do the rest of the stuff that does not depend on the background task

    background_task.join()  # wait until the background task is done

    ...  # do stuff that depends on the background task


if __name__ == "__main__":
    main()

Suppose the main application depends on intermediate parts of the background task. In this case, we need an event-driven design, as join only waits for the whole process and cannot signal intermediate progress.

import multiprocessing


def task(first_part_done, second_part_done):
    ...  # synchronous operations

    first_part_done.set()  # notify the main function that the first part of the task is done

    ...  # synchronous operations

    second_part_done.set()  # notify the main function that the second part of the task is also done

    ...  # synchronous operations


def main():
    # The events are passed to the process explicitly (rather than shared as globals)
    # so this also works with the "spawn" start method, and a separate event per part
    # avoids waiting on an event that has already been set.
    first_part_done = multiprocessing.Event()
    second_part_done = multiprocessing.Event()
    background_task = multiprocessing.Process(
        target=task, args=(first_part_done, second_part_done)
    )
    background_task.start()

    ...  # do the rest of the stuff that does not depend on the background task

    first_part_done.wait()  # wait until the first part of the background task is done

    ...  # do stuff that depends on the first part of the background task

    second_part_done.wait()  # wait until the second part of the background task is done

    ...  # do stuff that depends on the second part of the background task

    background_task.join()  # wait until the background task is finally done

    ...  # do stuff that depends on the whole background task


if __name__ == "__main__":
    main()

As you may have noticed, events can only carry binary information, and they are not effective when more than two processes are involved (it becomes impossible to know which process emitted the event). So we use pipes, queues, and managers to pass non-binary information between processes.
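
A minimal sketch of that idea using a multiprocessing.Pipe (the status strings are just for illustration):

import multiprocessing


def task(conn):
    ...  # first part of the work
    conn.send("first part done")  # non-binary status information

    ...  # second part of the work
    conn.send("second part done")
    conn.close()


def main():
    parent_conn, child_conn = multiprocessing.Pipe()
    background_task = multiprocessing.Process(target=task, args=(child_conn,))
    background_task.start()

    print(parent_conn.recv())  # blocks until "first part done" arrives
    ...                        # do stuff that depends on the first part
    print(parent_conn.recv())  # blocks until "second part done" arrives

    background_task.join()


if __name__ == "__main__":
    main()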

Answered By: Artyom Vancyan

To be able to "forget", you need to tie the task to the existence of an instance, so that it is created and destroyed along with something you are already using. There is nothing wrong with the example you added, but because it is so short it fits that exact need rather than a general use case. You can extend that example to do things async can’t do as easily. In my example I went for a "mutable" variable: self does not change during the instance’s lifetime (but its contents can change without changing the address of self), so you can both "forget" the task for as long as the instance exists and feed it different values, simply because it takes self as an argument.

This implementation is a very simplistic one: it consists of a list to which the task you need is added, and it returns a "handle" you can use to fetch the return value or terminate the task.
Each time a worker thread pops a call, the task is removed or re-added; this keeps the list FIFO, so already-executed looping calls are re-added as new entries at the back rather than kept at the front.

import threading as th
import time

class CTask(object):

    once  = 1
    loop  = 8

    def __init__(self, foo, args, kwargs, type):
        self.__kwargs = kwargs
        self.__args = args
        self.__foo = foo
        self.__ret = None

        self.task_type = type
        self.terminated = False

    def pool_call(self):
        self.__ret = self.__foo(
            *self.__args,
            **self.__kwargs
        )
        if self.task_type == self.once:
            self.terminated = True

    def __call__(self):
        return self.__ret

    def terminate(self):
        self.terminated = True

class Worker_Pool:

    def push_task(self, type=CTask.once):
        def func_wrapper(func):
            def varg_wrapper(*a, **b):
                ntask = CTask(func, a, b, type)
                self.__lock.acquire(True, -1)
                self.__ref.append(ntask)
                self.__lock.release()
                return ntask
            return varg_wrapper
        return func_wrapper

    def __init__(self, n_work):
        self.__lock = th.Lock()
        self.__pool = []
        self.__ref = []
        self.alive = True

        for _ in range(n_work):
            self.__pool.append(
                th.Thread(target=self.__run)
            )
        for w in self.__pool:
            w.start()

    def __run(self):
        while self.alive:
            wtask = None
            if self.__lock.acquire(True, 0.1):
                if len(self.__ref) > 0:
                    wtask = self.__ref.pop(0)
                self.__lock.release()

            if wtask is None or wtask.terminated:
                continue

            wtask.pool_call()

            if wtask.task_type == CTask.loop:
                self.__lock.acquire(True, -1)
                self.__ref.append(wtask)
                self.__lock.release()

    def terminate(self):
        self.alive = False
        for w in self.__pool:
            w.join()

bg = Worker_Pool(3)

class Some_App:

    @bg.push_task(CTask.loop)
    def do_task(self):
        print(f'[{self.id}] -> {self.num}')
        time.sleep(0.5)

    @bg.push_task()
    def do_mult(self):
        self.num *= self.mul

    def __init__(self, id, imul):
        self.mul = imul
        self.num = 1
        self.id = id

        self.bg_task = self.do_task()

    def __del__(self):
        # You should make sure that the
        # function is no longer executing
        # before continuing this function.
        self.bg_task.terminate()

if __name__ == '__main__':
    inst_a = Some_App(1, 2)
    task_a = inst_a.do_task()
    inst_b = Some_App(5, 6)
    task_b = inst_b.do_task()

    print('')
    inst_a.do_mult()
    inst_b.do_mult()
    time.sleep(0.6)

    print('')
    inst_a.do_mult()
    inst_b.do_mult()
    time.sleep(1.2)

    bg.terminate()

Other answers use multiprocessing/subprocess, and for a light task I would describe those as shooting yourself in the foot: subprocess communicates over text streams and multiprocessing uses sockets/pipes plus pickle, so neither is the best choice here.

Answered By: SrPanda

The solutions you mentioned in your question achieve what you ask for ("fire and forget"), but the hidden question I hear is: what is the efficient/common way to do it?

Like many problems in computer science, the answer is: it depends. I’ll try to explain.
Basically you have two options: execute in a separate thread or in a separate process.

Using threads you get shared memory access, and they’re lighter in terms of resource usage. Thread context switches are also cheaper than process context switches.

With processes, you can utilize more CPUs, but you lose the shared memory, and (depending on the number of processes you run vs. the number of cores) you may run into more context switching (e.g., if you run a container with 2 CPUs and 8 processes, they are all going to race for CPU time and more context switching is likely to happen).

For a concrete example, let’s consider two scenarios. If your app needs to do CPU-intensive tasks (e.g., encrypting, compressing, etc.), you would probably get better performance by utilizing more CPUs.

On the other hand, if your tasks block on I/O (e.g., waiting for the network, reading from disk, etc.), using threads will probably be faster than using a separate process.

Another thing to note is the communication between your TaskManager and the TaskExecutor. If the two need to communicate, your performance will be better with threads, thanks to shared memory access.

To summarize, the best way to decide is to run performance tests, because it really depends on the work you need to get done. My suggestion is to start with the simpler solution (threads), get a baseline for your performance results, and then try out processes. Running programs in a multi-processing environment is generally more complex than multi-threading.
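
A minimal sketch of such a baseline comparison using concurrent.futures (the io_task/cpu_task workloads and pool sizes are arbitrary stand-ins):

import concurrent.futures
import time


def io_task(_):
    time.sleep(0.1)  # stands in for waiting on network or disk


def cpu_task(n):
    return sum(i * i for i in range(n))  # stands in for real CPU work


def benchmark(executor_cls, func, args):
    start = time.perf_counter()
    with executor_cls(max_workers=4) as executor:
        list(executor.map(func, args))  # wait for all tasks to finish
    return time.perf_counter() - start


if __name__ == "__main__":
    for name, func, args in [("io", io_task, range(20)), ("cpu", cpu_task, [2_000_000] * 8)]:
        t = benchmark(concurrent.futures.ThreadPoolExecutor, func, args)
        p = benchmark(concurrent.futures.ProcessPoolExecutor, func, args)
        print(f"{name}: threads {t:.2f}s, processes {p:.2f}s")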

My 2 cents: when dealing with performance questions, you have to have a baseline and compare against it; otherwise you’re guessing in the dark and may spend your time tweaking the wrong parts of your application.

Answered By: Chen A.