Question
I wanted to force children to call the parent constructor and found this answer, which seems to do the job fine. However, I'm a bit unsure whether what I'm doing is safe. I have this:
# Copied from answer linked above
class meta(type):
    def __init__(cls, name, bases, dct):
        def auto__call__init__(self, *a, **kw):
            for base in cls.__bases__:
                base.__init__(self, *a, **kw)
            cls.__init__child_(self, *a, **kw)
        cls.__init__child_ = cls.__init__
        cls.__init__ = auto__call__init__
# My code
import unittest

class TestBase(unittest.TestCase, metaclass=meta):
    def __init__(self):
        self.testvar = "Hello, World!"

class A(TestBase):
    def foo(self):
        # Inherited from TestBase
        print(self.testvar)
        # Inherited from unittest.TestCase
        self.assertEqual("Hello, World!", self.testvar)

A().foo()
This prints "Hello, World!" as expected, and I'm able to use assertEqual
from unittest.TestCase
, but I have a feeling that I might be on very thin ice here. Is this safe to do? Can it collide with unittest
in any way?
In the original post, TestBase
did not inherit from unittest.TestCase
. That's the difference.
Answer 1:
Here nothing's happening anyway: you only need to delegate back to the parent if you actually override the method. If you don't, the original method is not shadowed and will be called normally.
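For instance (a minimal sketch, not code from the question), a child that defines no __init__ of its own simply inherits the parent's, with no metaclass needed:

class Base:
    def __init__(self):
        self.testvar = "Hello, World!"

class Child(Base):
    # No __init__ here, so Base.__init__ is not shadowed
    pass

print(Child().testvar)  # prints "Hello, World!"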
I'd strongly recommend not doing this though, especially for unittest, as __init__ is not actually of any use there: unittest only calls __init__ to initialise test cases based on the method names it finds. The initialisation hook you want is setUp (and tearDown, though addCleanup is usually safer and more reliable). You can define helper methods in test classes, but you should not instantiate test cases yourself.
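As an illustration (a minimal sketch with class and method names of my own, not code from the question), the same check written against the setUp hook:

import unittest

class TestBase(unittest.TestCase):
    def setUp(self):
        # Runs before every test method, in place of __init__
        self.testvar = "Hello, World!"

class TestA(TestBase):
    def test_foo(self):
        # setUp from TestBase has already run at this point
        self.assertEqual("Hello, World!", self.testvar)

if __name__ == "__main__":
    unittest.main()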
Plus unittest is not great: it's quite verbose, the API surface is large, and the class-based design hampers modularity (somewhat unexpectedly, maybe). I'd strongly recommend taking a gander at pytest.
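For comparison, a rough sketch of what the pytest style can look like (the fixture name is my own); the same assertion needs no class at all:

import pytest

@pytest.fixture
def testvar():
    return "Hello, World!"

def test_foo(testvar):
    assert testvar == "Hello, World!"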
Answer 2:
unittest aside (it meddles with some of the rules for auto-running classes itself), this code makes a mess.
It will run the __init__ methods of the classes several times over, in a hard-to-determine order.
Here is what happens with your code when I add an intermediate class with an actual collaborative __init__ calling super() (and take unittest out, for sanity):
class meta(type):
    def __init__(cls, name, bases, dct):
        def auto__call__init__(self, *a, **kw):
            for base in cls.__bases__:
                base.__init__(self, *a, **kw)
            cls.__init__child_(self, *a, **kw)
        cls.__init__child_ = cls.__init__
        cls.__init__ = auto__call__init__
# My code
class TestBase(metaclass=meta):
    def __init__(self):
        print("base")
        self.testvar = "hello"

class Middle(TestBase):
    def __init__(self):
        print("middle")
        super().__init__()

class A(Middle):
    def foo(self):
        # Inherited from TestBase
        print(self.testvar)
        assert self.testvar == "hello"

A().foo()
This will print:
base
middle
base
base
middle
base
hello
Doing this correctly would require the metaclass to check whether the class being created has an __init__ of its own, and to look at each __init__ in the __mro__ (not __bases__, those are only the immediate ancestors). For each __init__ method that is not already wrapped, it would wrap it (the same as applying a decorator and replacing the original method) so that it sets a special flag on the instance once it has run, skips running if it has already been called for that instance, and (in the same wrapper) calls the next __init__ in the MRO on exit.
This could ensure all __init__ methods in the chain are run, even if one or more of them do not feature a super() call. But there are downsides: the superclasses' __init__ would only be called at the end of the child's run, and the __init__ methods would become special in that they could not be manually called as ordinary methods (although that is rarely done, they'd become "special" in that respect).
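A minimal sketch of that idea, assuming every class in the hierarchy (other than object) is created with the metaclass; the names AutoInitMeta, _wrap_init and _auto_init_done are mine, not from the original answer:

import functools

class AutoInitMeta(type):
    def __init__(cls, name, bases, dct):
        super().__init__(name, bases, dct)
        # Walk the full MRO, not just __bases__, and wrap each unwrapped __init__
        for klass in cls.__mro__:
            if klass is object:
                continue
            init = klass.__dict__.get("__init__")
            if init is None or getattr(init, "_auto_init_wrapped", False):
                continue
            klass.__init__ = _wrap_init(klass, init)

def _wrap_init(klass, init):
    @functools.wraps(init)
    def wrapper(self, *args, **kwargs):
        # Flag which classes have already initialised this instance,
        # so no __init__ runs twice
        done = self.__dict__.setdefault("_auto_init_done", set())
        if klass in done:
            return
        done.add(klass)
        init(self, *args, **kwargs)
        # On exit, hand over to the next __init__ in the MRO, so the chain
        # continues even when `init` itself never called super()
        mro = type(self).__mro__
        for nxt in mro[mro.index(klass) + 1:]:
            if nxt is not object and "__init__" in nxt.__dict__:
                nxt.__dict__["__init__"](self, *args, **kwargs)
                break
    wrapper._auto_init_wrapped = True
    return wrapper

With the TestBase/Middle/A hierarchy above built on this metaclass, A() prints "middle" and "base" exactly once each.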
Source: https://stackoverflow.com/questions/64753383/is-it-safe-to-use-autocall-init-in-this-way