Why is __del__ not called on a global object when another thread is active?
Question:
I am working on a project that has a construct similar to the code below. My hope is to have an object that starts a thread upon creation and automatically stops it when the object is destroyed. This works as expected when the object is instantiated within a function, but when the object is created at global scope, __del__ is not called, causing the program to hang.
import threading


def thread(event):
    event.wait()
    print("Terminating thread")


class ThreadManager:
    def __init__(self):
        self.event = threading.Event()
        self.thread = threading.Thread(target=thread, args=(self.event,))
        self.thread.start()

    def __del__(self):
        self.event.set()
        print("Event set")
        self.thread.join()


if __name__ == '__main__':
    print("Creating thread")
    manager = ThreadManager()
    # del manager
Unless I explicitly delete the manager object, the program hangs. I presume that the interpreter waits to delete global objects until all non-daemon threads have completed, causing a deadlock.
My question is twofold: can someone confirm this and offer a workaround (I have read this page, so I'm not looking for a solution based on an explicit close() method or similar; I'm simply curious to hear ideas for an alternative that performs automatic clean-up), or else refute it and tell me what I'm doing wrong?
Answers:
Unlike languages such as C++, Python doesn't destroy objects as soon as they go out of scope, which is why __del__ is unreliable. You can read more about that here: https://stackoverflow.com/a/35489349/5971137
As for a solution, I think this is the perfect case for a context manager (a with statement):
>>> import threading
>>>
>>> def thread(event):
... event.wait()
... print("Terminating thread")
...
>>> class ThreadManager:
... def __init__(self):
... print("__init__ executed")
... self.event = threading.Event()
... self.thread = threading.Thread(target=thread, args=(self.event,))
... self.thread.start()
... def __enter__(self):
... print("__enter__ executed")
... return self
... def __exit__(self, *args):
... print("__exit__ executed")
... self.event.set()
... print("Event set")
... self.thread.join()
...
>>> with ThreadManager() as thread_manager:
... print(f"Within context manager, using {thread_manager}")
...
__init__ executed
__enter__ executed
Within context manager, using <__main__.ThreadManager object at 0x1049666d8>
__exit__ executed
Event set
Terminating thread
The print statements show that the execution order works the way you want it to, with __exit__ now serving as your reliable cleanup method.
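If you prefer not to write __enter__ and __exit__ by hand, the same pattern can be sketched with contextlib.contextmanager from the standard library. This is just an equivalent formulation, not the answer's original code; the managed_thread and worker names are my own:

```python
import contextlib
import threading


def worker(event):
    event.wait()
    print("Terminating thread")


@contextlib.contextmanager
def managed_thread():
    # Start the worker on entry; the finally-block guarantees cleanup
    # on exit, even if the body of the with-block raises.
    event = threading.Event()
    t = threading.Thread(target=worker, args=(event,))
    t.start()
    try:
        yield t
    finally:
        event.set()
        t.join()


with managed_thread() as t:
    print(f"Within context manager, using {t.name}")
```

The try/finally inside the generator plays the role of __exit__, so the thread is always joined before the with-block completes.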
This behavior is specifically mentioned in the official documentation for object.__del__():
It is not guaranteed that __del__() methods are called for objects
that still exist when the interpreter exits.
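If you still want cleanup tied to the object's lifetime rather than a with-block, weakref.finalize is the standard library's documented alternative to __del__: it runs when the object is collected or, via an atexit hook, at interpreter exit. A sketch of that idea follows; note it also marks the thread as a daemon (my addition), because the interpreter joins non-daemon threads before atexit hooks run, which would reintroduce the deadlock:

```python
import threading
import weakref


def worker(event):
    event.wait()
    print("Terminating thread")


class ThreadManager:
    def __init__(self):
        self.event = threading.Event()
        # daemon=True: the interpreter won't wait on this thread at
        # shutdown, so the finalizer (which runs via atexit) can still
        # set the event and join it cleanly.
        self.thread = threading.Thread(target=worker, args=(self.event,),
                                       daemon=True)
        self.thread.start()
        # The callback must not reference self, or the object would be
        # kept alive forever; pass the event and thread in directly.
        self._finalizer = weakref.finalize(
            self, self._cleanup, self.event, self.thread)

    @staticmethod
    def _cleanup(event, thread):
        event.set()
        print("Event set")
        thread.join()


manager = ThreadManager()
del manager  # last reference gone: the finalizer runs here in CPython
```

Unlike __del__, a finalizer is guaranteed to be called exactly once, and you can also invoke it explicitly (manager._finalizer()) for deterministic cleanup.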
Alternatively, make the thread a daemon thread:

self.thread = threading.Thread(target=thread, args=(self.event,), daemon=True)

Daemon threads don't block interpreter exit, so the process terminates without waiting for them.
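The trade-off is that a daemon thread is stopped abruptly at interpreter shutdown, so its cleanup code never runs. One possible compromise (a sketch, not part of the original answer; the close method name and the join timeout are my own choices) is to combine daemon=True with an atexit hook, since atexit callbacks run in the main thread while daemon threads can still execute:

```python
import atexit
import threading


def worker(event):
    event.wait()
    print("Terminating thread")


class ThreadManager:
    def __init__(self):
        self.event = threading.Event()
        self.thread = threading.Thread(target=worker, args=(self.event,),
                                       daemon=True)
        self.thread.start()
        # Registered cleanup runs during interpreter shutdown, before
        # daemon threads are torn down, so the worker can finish cleanly.
        atexit.register(self.close)

    def close(self):
        self.event.set()
        self.thread.join(timeout=5)   # bounded wait: never hang shutdown
        atexit.unregister(self.close)  # harmless if close() was manual


if __name__ == '__main__':
    print("Creating thread")
    manager = ThreadManager()
    # No explicit cleanup needed: close() runs automatically at exit.
```

This keeps the automatic clean-up the question asks for while avoiding the deadlock, at the cost of a bounded rather than unconditional join.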