How does Waitress handle concurrent tasks?

Question:

I’m trying to build a Python web server using Django and Waitress, but I’d like to know how Waitress handles concurrent requests, and when blocking may occur.


While the Waitress documentation mentions that multiple worker threads are available, it doesn’t provide much information on how they are implemented and how the Python GIL affects them (emphasis my own):

When a channel determines the client has sent at least one full valid HTTP request, it schedules a “task” with a “thread dispatcher”. The thread dispatcher maintains a fixed pool of worker threads available to do client work (by default, 4 threads). If a worker thread is available when a task is scheduled, the worker thread runs the task. The task has access to the channel, and can write back to the channel’s output buffer. When all worker threads are in use, scheduled tasks will wait in a queue for a worker thread to become available.

There doesn’t seem to be much information on Stack Overflow either. From the question “Is Gunicorn’s gthread async worker analogous to Waitress?”:

Waitress has a master async thread that buffers requests, and enqueues each request to one of its sync worker threads when the request I/O is finished.


These statements don’t address the GIL (at least from my understanding) and it’d be great if someone could elaborate more on how worker threads work for Waitress. Thanks!

Asked By: evantkchong


Answers:

Here’s how the event-driven asynchronous servers generally work:

  • Start a process and listen for incoming requests. Using the operating system’s event notification API (epoll, kqueue, etc.) makes it easy to serve thousands of clients from a single thread/process.
  • Since there’s only one process managing all the connections, you don’t want to perform any slow (or blocking) task in this process directly, because it would stall the server for every client.
  • To perform blocking tasks, the server delegates the tasks to “workers”. Workers can be threads (running in the same process) or separate processes (or subprocesses). Now the main process can keep on serving clients while workers perform the blocking tasks.
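The steps above can be sketched in a few lines (this is an illustrative toy, not Waitress’s actual implementation) using Python’s selectors module for OS event notification and a thread pool as the "workers":

```python
import selectors
import socket
from concurrent.futures import ThreadPoolExecutor

# One main thread watches all sockets via the OS event API
# (epoll/kqueue/select under the hood) and hands each finished
# request to a small pool of worker threads.
selector = selectors.DefaultSelector()
workers = ThreadPoolExecutor(max_workers=4)  # Waitress's default pool size

def handle_request(conn, data):
    # Potentially slow/blocking application work runs in a worker
    # thread, so the event loop stays free to accept other clients.
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()

def accept(sock):
    conn, _addr = sock.accept()
    conn.setblocking(False)
    selector.register(conn, selectors.EVENT_READ, read)

def read(conn):
    data = conn.recv(65536)
    selector.unregister(conn)
    if data:
        workers.submit(handle_request, conn, data)  # enqueue the "task"
    else:
        conn.close()

def serve(port):
    sock = socket.socket()
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", port))
    sock.listen()
    sock.setblocking(False)
    selector.register(sock, selectors.EVENT_READ, accept)
    while True:  # the event loop: wait for readiness, dispatch callbacks
        for key, _events in selector.select(timeout=1):
            key.data(key.fileobj)
```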

How does Waitress handle concurrent tasks?

Pretty much the same way I just described above. And for workers it creates threads, not processes.

how the python GIL affects them

Waitress uses threads for workers. So, yes, they are affected by the GIL in that they aren’t truly parallel, though they appear to be. “Asynchronous” is the correct term.

Threads in Python run inside a single process, and the GIL ensures that only one of them executes Python bytecode at any given moment, so they don’t run Python code in parallel. A thread acquires the GIL for a small slice of time, executes its code, and then the GIL is handed to another thread.

But since the GIL is released on network I/O, the parent process will always acquire the GIL whenever there’s a network event (such as an incoming request), so you can rest assured that the GIL will not affect network-bound operations (like receiving requests or sending a response).
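This is easy to observe with a short timing sketch: four threads that each wait half a second (using time.sleep as a stand-in for a blocking socket read) finish in roughly half a second total, not two seconds, because each thread releases the GIL while it waits:

```python
import threading
import time

def fake_io():
    # time.sleep releases the GIL, just like a blocking socket read does,
    # so the other threads can run while this one waits.
    time.sleep(0.5)

start = time.perf_counter()
threads = [threading.Thread(target=fake_io) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The four 0.5 s waits overlap, so total time is ~0.5 s, not ~2 s.
print(f"elapsed: {elapsed:.2f}s")
```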

On the other hand, Python processes can actually run in parallel, on multiple cores. But Waitress doesn’t use processes.
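To illustrate the contrast, here is a minimal sketch (not anything Waitress does) of CPU-bound work fanned out across processes with the standard library’s multiprocessing module; the "fork" start method is used explicitly to keep the sketch self-contained, which makes it POSIX-only:

```python
import multiprocessing

def square(n):
    # Each worker process has its own interpreter and its own GIL,
    # so CPU-bound work can genuinely run on multiple cores at once.
    return n * n

def parallel_squares(numbers):
    # "fork" is POSIX-only; spawn/forkserver would also work in a module
    # that guards its entry point with `if __name__ == "__main__":`.
    ctx = multiprocessing.get_context("fork")
    with ctx.Pool(processes=4) as pool:
        return pool.map(square, numbers)

print(parallel_squares(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```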

Should you be worried?

If you’re just doing small blocking tasks like database read/writes and serving only a few hundred users per second, then using threads isn’t really that bad.

For serving a large volume of users or doing long running blocking tasks, you can look into using external task queues like Celery. This will be much better than spawning and managing processes yourself.

Answered By: xyres

Hint: Those were my comments to the accepted answer and the conversation below, moved to a separate answer for space reasons.

Wait.. The 5th request will stay in the queue until one of the 4 threads is done with its previous request, and has therefore gone back to the pool. One thread will only ever serve one request at a time. "IO bound" tasks only help in that a thread waiting for I/O will implicitly (e.g. by calling time.sleep) tell the scheduler (Python’s internal one) that it can pass the GIL along to another thread, since there’s currently nothing to do, so the others get more CPU time for their work. On the thread level this is fully sequential, which is still concurrent and asynchronous on the process level, just not parallel. Just to get some wording straight.

Also, Python threads are "standard" OS threads (like those in C). So they will use all CPU cores and make full use of them. The only thing restricting them is that they need to hold the GIL when calling Python C-API functions, because the whole API in general is not thread-safe. On the other hand, calls to non-Python functions (i.e. functions in C extensions like numpy, but also many database APIs, including anything loaded via ctypes) do not hold the GIL while running. Why should they? They are running external C binaries which don’t know anything about the Python interpreter running in the parent process. Therefore, such tasks will run truly in parallel when called from a WSGI app hosted by Waitress. And if you’ve got more cores available, turn the thread count up to that amount (the threads=X kwarg on waitress.create_server).
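For example, a minimal WSGI app served with a larger thread pool might look like this (the app body and main are illustrative; waitress itself must be installed, and waitress.serve accepts the same threads keyword as waitress.create_server):

```python
# A minimal WSGI app; Waitress dispatches each incoming request to one
# of the worker threads in its pool.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a worker thread\n"]

def main():
    # Requires: pip install waitress
    from waitress import serve
    # threads=8: size the worker pool to match an 8-core machine.
    serve(app, host="127.0.0.1", port=8080, threads=8)
```

Calling main() starts the server; the app callable itself can be exercised directly, without Waitress, by passing it a WSGI environ and start_response.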

Answered By: Jeronimo