This also directly makes my life easier as a programmer, because I know which code files I need to look at to understand the behavior of that object.
Even linters use it for that purpose, e.g. resolving call sites by looking at the most recent isinstance() check to narrow the type.
__subclasshook__ puts this at risk by letting a class lie about its instances.
As an example, consider this class:
    from abc import ABC

    class Everything(ABC):
        @classmethod
        def __subclasshook__(cls, C):
            return True

        def foo(self):
            ...
You can now write code like this:

    if isinstance(x, Everything):
        x.foo()
A linter would pass this code without warnings, because it assumes that the if block is only entered if x is in fact an instance of Everything and therefore has the foo() method. But what really happens is that the block is entered for any kind of object, and objects that don't happen to have a foo() method will raise an exception.
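Concretely, with the class above:

    x = 42
    isinstance(x, Everything)   # True -- the hook claims every object
    x.foo()                     # AttributeError: 'int' object has no attribute 'foo'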
It essentially allows the user to check if a class implements an interface, without explicitly inheriting ABC or Protocol. It’s up to the user to ensure the body of the case doesn’t reference any methods or attributes not guaranteed by the subclass hook, but that’s not necessarily bad, just less safe.
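As a rough sketch of that use (the Closeable name and the close() check are just illustrative):

    from abc import ABC

    class Closeable(ABC):
        @classmethod
        def __subclasshook__(cls, C):
            # Structural check: any class with a callable close() "implements
            # the interface", without inheriting from Closeable or registering.
            if cls is Closeable and callable(getattr(C, "close", None)):
                return True
            return NotImplemented

    class File:
        def close(self):
            ...

    assert issubclass(File, Closeable)     # passes, despite no inheritance
    assert isinstance(File(), Closeable)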
All things have a place and time.
Protocols don't need to be explicit superclasses for compile-time checks, or for runtime checks if they opt in with @runtime_checkable, but Protocols are also much newer than __subclasshook__.
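A minimal sketch of the opt-in runtime check (SupportsClose and File are just illustrative names):

    from typing import Protocol, runtime_checkable

    @runtime_checkable
    class SupportsClose(Protocol):
        def close(self) -> None: ...

    class File:
        def close(self) -> None:
            pass

    # A static type checker accepts File structurally; the decorator additionally
    # makes the isinstance() check work at runtime (it only tests that the
    # method exists, not its signature).
    assert isinstance(File(), SupportsClose)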
(I love being wrong on HN, always learn something)
> check if a class implements an interface, without explicitly inheriting ABC or Protocol
This really doesn't sound like a feature that belongs in the language. Go do something custom if you really want it.
I do think there is a ton of indirection going on in the code that I would not immediately think to look for. As the post stated, there could be a good reason for this in some cases. But at that point it would be the opposite of aiming for boring code.
Some of these examples are similar in effect to what you might do in other languages, where you define an 'interface' and then you check to see if this class follows that interface. For example, you could define an interface DistancePoint which has the fields x and y and a distance() method, and then say "If this object implements this interface, then go ahead and do X".
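In Python that might look roughly like this, using a runtime-checkable Protocol (DistancePoint as named above; the report() helper is made up for illustration):

    from typing import Protocol, runtime_checkable

    @runtime_checkable
    class DistancePoint(Protocol):
        x: float
        y: float
        def distance(self) -> float: ...

    def report(obj) -> None:
        # "If this object implements this interface, then go ahead and do X";
        # the runtime check only tests that x, y and distance() are present.
        if isinstance(obj, DistancePoint):
            print(obj.distance())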
Other examples, though, are more along the lines of implementing an interface where, instead of the constraint being 'this class has this method', the constraint is 'today is Tuesday'. That's an asinine concept, which is what makes these crimes, and also hilarious.
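An intentionally silly sketch of that second kind (OnlyOnTuesdays is made up, obviously):

    import datetime
    from abc import ABC

    class OnlyOnTuesdays(ABC):
        @classmethod
        def __subclasshook__(cls, C):
            # Says nothing about C at all: "subclass-ness" depends on the day
            # of the week (Monday is 0, so Tuesday is 1).
            return datetime.date.today().weekday() == 1

    # isinstance(anything, OnlyOnTuesdays) is True on Tuesdays, False otherwise
    # (and ABCs cache results, which makes this even stranger in practice).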
I don't find using __subclasshook__ to implement structural subtyping that you can't express with Protocols/ABCs alone to be that much of a crime. You can do evil with it, but I can perform evil with any language feature.
Conforming to an interface is a widely accepted concept across many popular languages. __subclasshook__ magic is not. So there's a big difference in how much each violates the principle of least surprise.
That said, I'd be curious to hear a legitimate example of using it to implement "structural subtyping that you can't express with Protocols/ABCs alone".
ABCs with __subclasshook__ have been available since Python 2.6, providing a mechanism to implement runtime-testable structural subtyping. Protocols and @runtime_checkable, which provide typechecking-time structural subtyping (Protocols) that can also be made available at runtime (with @runtime_checkable), were added in Python 3.8, roughly 11 years later.
There may not be much reason to use __subclasshook__ in new code, but there's a pretty good reason it exists.
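For reference, this is essentially the pattern collections.abc itself uses; a simplified Sized-style sketch (not the exact stdlib code):

    from abc import ABC, abstractmethod

    class Sized(ABC):
        @abstractmethod
        def __len__(self) -> int: ...

        @classmethod
        def __subclasshook__(cls, C):
            if cls is Sized:
                # Runtime-testable structural check: any class defining __len__
                # anywhere in its MRO counts, no inheritance or registration needed.
                if any("__len__" in B.__dict__ for B in C.__mro__):
                    return True
            return NotImplemented

    assert issubclass(list, Sized)
    assert isinstance("hello", Sized)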
That's quite a different claim, and makes a lot of sense. Thanks for the history!
My best guess is that it adds complexity and makes code harder to read in a goto-style way where you can't reason locally about local things, but it feels like the author has a much more negative view ("crimes", "god no", "dark beating heart", the Elmo gif).