Using Python, why would sys.exit(0) hang and not properly close a program?

Question:

The bug I am running into is that sys.exit(0) is not properly closing my program.

I use PyInstaller to make this program an exe. When I use the exe, the program has to be shut down using Task Manager. (I am running the program on a Windows 10 machine.) If I run it from Visual Studio Code it hangs in the terminal and I have to close the terminal. That is, unless I run it in debug mode: it will close properly in debug mode, after a momentary delay.

I have a fairly large program, and I can't figure out where the error is coming from or what is causing it, so I can't include the code that is causing the issue or a minimal reproducible example. It could, though, have something to do with the multiprocessing module.

I do call multiprocessing.freeze_support(), and debug mode does give a warning about this. Could debug mode's interaction with freeze_support() cause it to bypass whatever this issue is? If not, what could cause sys.exit(0) to hang, but only when not running in debug mode?

Thank you in advance for any help or suggestions provided.

Asked By: Cemos121


Answers:

For the record, extracting the resolution from the comments: multiprocessing creates threads for its own internal purposes. These internal threads feed objects to, and extract them from, the OS-level machinery supporting interprocess multiprocessing queues.

Part of Python’s "clean shutdown" sequence is waiting for all non-daemon threads to finish. If a program hasn’t emptied all multiprocessing queues, Python may wait forever for those internal worker threads to finish.

[NOTE: see @Charchit Agarwal's comment for a correction to that: it isn't Python's shutdown directly that waits forever, it's multiprocessing's queue clean-shutdown implementation that can wait forever to join its internal threads. If it so happens that a thread has already put everything it was told about on an interprocess pipe, the thread can be joined quickly (it isn't waiting to do anything more). But if the thread is still waiting to put data on a pipe, it can hang. The "if it so happens" part is the source of the uncertainties mentioned below.]

Exactly which conditions trigger this isn't defined, and may vary across platforms, Python releases, and even the history of the specific operations performed on a queue. This fuzziness is likely why the OP saw different behavior depending on whether the program was run in debug mode.

The multiprocessing docs warn about this, but it’s often overlooked:

Bear in mind that a process that has put items in a queue will wait before terminating until all the buffered items are fed by the “feeder” thread to the underlying pipe. (The child process can call the Queue.cancel_join_thread method of the queue to avoid this behaviour.)

This means that whenever you use a queue you need to make sure that all items which have been put on the queue will eventually be removed before the process is joined. Otherwise you cannot be sure that processes which have put items on the queue will terminate.

Which doesn’t need to be understood 😉 Just take it as a fact of multiprocessing life: make sure your queues are empty when the program ends – else normal shutdown processing may hang forever.

Answered By: Tim Peters