From the Python docs:
"It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits."
I don't think this is because doing the deletions would cause problems. It's more that the Python philosophy is to discourage developers from relying on object deletion, because the timing of these deletions cannot be predicted - it is up to the garbage collector when they occur.
If the garbage collector may defer deleting unused objects for an unknown amount of time after they go out of scope, then relying on side effects that happen during the object deletion is not a very robust or deterministic strategy. RAII is not the Python way. Instead Python code handles cleanup using context managers, decorators, and the like.
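For example, cleanup is typically made deterministic with a context manager rather than with __del__. Here is a minimal sketch (Resource and open_resource are just invented names for illustration):

from contextlib import contextmanager

class Resource(object):
    def release(self):
        print("resource released")

@contextmanager
def open_resource():
    res = Resource()
    try:
        yield res
    finally:
        res.release()  # cleanup runs deterministically, even if the block raises

with open_resource() as res:
    pass  # use res here; release() is guaranteed to run when the with-block exits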
Worse, in complicated situations, such as with object cycles, the garbage collector might never detect that objects can be deleted. This situation has improved as Python has matured, but because of exceptions to the expected GC behaviour like this, it is unwise for Python developers to rely on object deletion.
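To make the cycle problem concrete, here is a small sketch (the Node class is invented for illustration). On Python 2 a reference cycle whose objects define __del__ is never collected and the objects are parked in gc.garbage instead; on Python 3.4+ (PEP 442) the cycle is collected and both finalizers run:

import gc

class Node(object):
    def __del__(self):
        print("Node finalized")

a = Node()
b = Node()
a.other = b  # create a reference cycle between the two nodes
b.other = a
del a, b     # the cycle is now unreachable from the program

gc.collect()
# Python 2: nothing is printed and both Node objects sit in gc.garbage.
# Python 3.4+: "Node finalized" is printed twice and gc.garbage stays empty.
print(gc.garbage)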
I speculate that interpreter exit is another complicated situation where the Python devs, especially for older versions of Python, were not completely strict about making sure the GC delete ran on all objects.
I'm not convinced by the previous answers here.
Firstly, note that the example given does not prevent __del__ methods from being called during exit. In fact, the current CPythons will call the __del__ method given: twice in the case of Python 2.7 and once in the case of Python 3.4. So this can't be the "killer example" which shows why the guarantee is not made.
I think the statement in the docs is not motivated by a design principle that calling the destructors would be bad. Not least because it seems that in CPython 3.4 and up they are always called as you would expect and this caveat seems to be moot.
Instead I think the statement simply reflects the fact that the CPython implementation has sometimes not called all destructors on exit (presumably for ease of implementation reasons).
The situation seems to be that CPython 3.4 and 3.5 do always call all destructors on interpreter exit.
CPython 2.7 by contrast does not always do this. Certainly __del__ methods are usually not called on objects which have cyclic references, because those objects cannot be deleted if they have a __del__ method: the garbage collector won't collect them. While the objects do disappear when the interpreter exits (of course), they are not finalized, and so their __del__ methods are never called. This is no longer true in Python 3.4, after the implementation of PEP 442.
However, it seems that Python 2.7 also does not finalize objects that have cyclic references, even if they have no destructors, if they only become unreachable during the interpreter exit.
Presumably this behaviour is sufficiently particular and difficult to explain that it is best expressed simply by a generic disclaimer - as the docs do.
Here's an example:
class Foo(object):
    def __init__(self):
        print("Foo init running")

    def __del__(self):
        print("Destructor Foo")

class Bar(object):
    def __init__(self):
        print("Bar1 init running")
        self.bar = self
        self.foo = Foo()

b = Bar()
# del b
With the del b commented out, the destructor in Foo is not called in Python 2.7, though it is in Python 3.4. With the del b added, the destructor is called (at interpreter exit) in both cases.
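Incidentally, forcing a collection appears to confirm that the cycle is what delays things. Appending the following to the example above (with the del b left commented out there) should print "Destructor Foo" immediately, before interpreter exit, on both versions, because Bar itself has no __del__ and so its cycle can be collected:

import gc

del b        # drop the only external reference; Bar is now kept alive only by its own cycle
gc.collect() # collecting the cycle releases the Foo instance, so its destructor runs here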
One example where the destructor is not called is if you exit inside a method. Have a look at this example:
class Foo(object):
    def __init__(self):
        print("Foo init running")

    def __del__(self):
        print("Destructor Foo")

class Bar(object):
    def __init__(self):
        print("Bar1 init running")
        self.bar = self
        self.foo = Foo()

    def __del__(self):
        print("Destructor Bar")

    def stop(self):
        del self.foo
        del self
        exit(1)

b = Bar()
b.stop()
The output is:
Bar1 init running
Foo init running
Destructor Foo
Because we delete foo explicitly, its destructor is called, but the destructor of bar is not (presumably because of the reference cycle created by self.bar = self)!
And if we do not delete foo explicitly, it is not finalized properly either:
class Foo(object):
    def __init__(self):
        print("Foo init running")

    def __del__(self):
        print("Destructor Foo")

class Bar(object):
    def __init__(self):
        print("Bar1 init running")
        self.bar = self
        self.foo = Foo()

    def __del__(self):
        print("Destructor Bar")

    def stop(self):
        exit(1)

b = Bar()
b.stop()
Output:
Bar1 init running
Foo init running
If you did some nasty things, you could find yourself with an undeletable object which Python would try to delete forever:
class Phoenix(object):
    def __del__(self):
        print("Deleting an Oops")
        global a
        a = self

a = Phoenix()
Relying on __del__ isn't great in any event, as Python doesn't guarantee when an object will be deleted (especially objects with cyclic references). That said, perhaps turning your class into a context manager is a better solution ... Then you can guarantee that cleanup code is called even in the case of an exception, etc...
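For instance, a minimal sketch of a class used as a context manager (ManagedThing is just a placeholder name):

class ManagedThing(object):
    def __enter__(self):
        print("acquiring")
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        print("releasing")  # runs even if the with-block raised an exception
        return False        # do not suppress exceptions

with ManagedThing() as thing:
    pass  # work with thing; __exit__ is guaranteed to run afterwards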
Likely because most programmers would assume that destructors should only be called on dead (already unreachable) objects, while here, on exit, we would be invoking them on live objects.
If the developer was not expecting a destructor call on a live object, some nasty undefined behaviour may result. At the very least, something must be done to force-close the application after a timeout if it hangs, but then some destructors may not be called.
Java's Runtime.runFinalizersOnExit has been deprecated for the same reason.