Of course these days with LTO the whole performance space is somewhat blurred since de-virtualisation can happen across whole applications at link time, and so the presumed performance cost can disappear (even if it wasn't actually a performance issue in reality). It's tough to create hard and fast rules in this case.
It's a much more pleasant and easier way to work, for me at least.
Trying to follow the flow through a gazillion objects with state changing everywhere is a nightmare, and I'd rather not return to that.
public record Thing()
{
    private string _state = "Initial";
    public Thing Change() => this with { _state = "Changed" };
}

This is why hierarchies should have limited depth. I'd argue some amount of "co-recursion" is to be expected: after all, the point of the child class is to reuse the parent's logic while overriding some of it.
But if the lineage goes too deep, it becomes hard to follow.
> every time you modify a class, you must review the inner implementation of all other classes in the hierarchy, and call paths to ensure your change is safe.
I'd say this is a fact of life for all pieces of code which are reused more than once. This is another reason why low coupling high cohesion is so important: if the parent method does one thing and does it well, when it needs to be changed, it probably needs to be changed for all child classes. If not, then the question arises why they're all using that same piece of code, and if this refactor shouldn't include breaking that apart into separate methods.
This problem also becomes less pressing if the test pyramid is followed properly, because that parent method should be tested in the integration tests too.
That's the point: You can reuse code without paying that price of inheritance. You DON'T have to expect co-recursion or shared state just for "code-reuse".
And this, I think, is the key point: Behavior-inheritance is NOT a good technique for code-reuse... Type-inheritance, however, IS good for abstraction, for defining boundaries, and for enabling polymorphism.
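To make that concrete, here's a minimal C# sketch (all the names are invented for illustration): the reuse comes from holding a Formatter instance, while the interface gives you the type boundary and polymorphism without inheriting any behavior.

public interface IGreeter
{
    string Greet(string name);
}

public sealed class Formatter
{
    public string Capitalize(string s) =>
        string.IsNullOrEmpty(s) ? s : char.ToUpper(s[0]) + s[1..];
}

public sealed class FriendlyGreeter : IGreeter
{
    // Code-reuse via composition: Formatter's logic is reused without
    // exposing either class's internals to overriding.
    private readonly Formatter _formatter = new();

    public string Greet(string name) => $"Hello, {_formatter.Capitalize(name)}!";
}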
> I'd say this is a fact of life for all pieces of code which are reused more than once
But you want to minimize that complexity. If you call a pure function, you know it only depends on its arguments... done. If you call a method on a mutable object, you have to read its implementation line by line, and you have to navigate a web of possibly polymorphic calls which may even modify shared state.
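Roughly what I mean, as a small C# sketch (hypothetical names): the pure function can be understood from its arguments alone, while the method's result depends on whatever has happened to the object before.

// Pure: the result depends only on the arguments.
public static class Pricing
{
    public static decimal Total(decimal unitPrice, int quantity) => unitPrice * quantity;
}

// Mutable object: to predict Total() you must know the object's current
// state and everything that may have mutated it along the way.
public class Cart
{
    private decimal _unitPrice = 10m;
    private int _quantity = 1;

    public void ApplyDiscount(decimal factor) => _unitPrice *= factor;
    public decimal Total() => _unitPrice * _quantity;
}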
> This is another reason why low coupling high cohesion is so important
Exactly. Though I would phrase it the other way around: it's precisely because "low coupling, high cohesion is so important" that using implementation-inheritance for code-reuse is often a bad idea.
I actually can't imagine for the life of me why I'm defending OOP implementation hierarchies here. I guess I got so used to them at work that I've changed my strategy from opposing them to "it's okay as long as you use them sparingly". I have found that argument does a lot better with my colleagues...
The same pinball of method calls happens in almost exactly the same way with composition.
You save some idiosyncrasies around the meaning of the object pointer, and that's all.
If Outer extends Inner, though, you can't tell whether `foo()` refers to Inner::foo or Outer::foo without checking to see whether Outer overrides foo or not. And the number of places you have to check scales linearly with the depth of the inheritance hierarchy.
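A tiny C# illustration of that ambiguity (hypothetical classes; C# spells "extends" as ":" and needs virtual/override):

public class Inner
{
    public virtual string Foo() => "Inner.Foo";

    // Reading this line in isolation, you can't tell which Foo() will run:
    // it depends on whether some subclass somewhere overrides it.
    public string Bar() => Foo();
}

public class Outer : Inner
{
    public override string Foo() => "Outer.Foo";
}

// new Outer().Bar() returns "Outer.Foo", even though Bar() is written in Inner.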
If object A calls a method of object B (composition), then B cannot call back into A, and neither A nor B can override any behavior of the other. (And this is the original core tenet of OO: it's all about "message-passing".)
Of course they can accept and pass other objects/functions as arguments, but that would be explicit and specific, without having to expose their whole state/impl to each other.
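For example (a contrived sketch, names invented): B only ever sees the one delegate A chose to hand it, never A itself.

using System;
using System.Collections.Generic;

public class Auditor                  // "B": has no reference back to its caller
{
    public void Audit(IEnumerable<string> entries, Action<string> report)
    {
        foreach (var entry in entries)
            report(entry);            // uses only what was explicitly passed in
    }
}

// "A" at the call site passes a specific function, not itself:
// new Auditor().Audit(log, line => Console.WriteLine(line));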
What if you are actually dealing with state and control-flow complexity? I'm curious what the "ideal" way to do this would be in your view. I am trying to implement a navigation system; even stripping away the interface design and all the application logic, it can get pretty complicated at this level.
Closer to the "ideal": declarative approaches, pure functions, data-oriented pipelines, logic programming.
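Not navigation-specific advice, just a shape, as a toy C# sketch (all names invented): the route state is immutable data and each step is a pure function from (state, event) to a new state, so the control flow sits in one visible place.

using System.Collections.Immutable;

public record NavState(string CurrentNode, ImmutableList<string> Route);

public abstract record NavEvent;
public sealed record Arrived(string Node) : NavEvent;
public sealed record Rerouted(ImmutableList<string> NewRoute) : NavEvent;

public static class Navigation
{
    // One pure transition function; no hidden state mutation anywhere.
    public static NavState Apply(NavState state, NavEvent evt) => evt switch
    {
        Arrived a  => state with { CurrentNode = a.Node },
        Rerouted r => state with { Route = r.NewRoute },
        _          => state
    };
}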
On the flip side, if the author didn't want to let me do that, I really appreciate having the ability to do it anyways, even if it means tighter coupling for that one part.
With interface-inheritance, each method serves two parties, the caller and the implementer, but there is only one single possible usage pattern: it is called by client code and implemented by a subclass.
With implementation-inheritance, suddenly, you have any of the following possibilities for how a given method is meant to be used:
(a) called by client code, implemented by subclass (as with interface-inheritance)
(b) called by client code, implemented by superclass (e.g.: template method)
(c) called by subclass, implemented by superclass (e.g.: utility methods)
(d) called by superclass, implemented by subclass (e.g.: template's helper methods)
And these cases inevitably bleed into each other. For example, default methods mix (a) and (b), and mixins frequently combine (c) and (b).
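Here's what that looks like in practice, as a small C# sketch (a hypothetical ReportGenerator): three of those four usage patterns sit side by side in one class, and nothing in the language forces you to say which method is which.

public abstract class ReportGenerator
{
    // (b) called by client code, implemented by the superclass (template method)
    public string Generate() => "=== Report ===\n" + Body();

    // (d) called by the superclass, implemented by the subclass (template's helper)
    protected abstract string Body();

    // (c) implemented by the superclass, meant to be called by subclasses (utility)
    protected static string Row(string label, decimal value) => $"{label}: {value}\n";
}

public class SalesReport : ReportGenerator
{
    protected override string Body() => Row("Q1", 1200m) + Row("Q2", 1900m);
}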
Because of the added complexity, you have to carefully design the relationship between the superclass, the subclass, and the client code, making sure to correctly identify which methods should have what visibility (if your language even allows for that level of granularity!). You must carefully document which methods are intended for overriding and which are intended for use by whom.
But the code structure itself in no way documents that complexity. (If we want to talk SOLID, it flies in the face of the Interface Segregation Principle). All these relationships get implicitly crammed into one class that might be better expressed explicitly. Split out the subclassing interface from the superclass and inject it so it can be delegated to -- that's basically what implementation-inheritance is syntactic sugar for anyway and now the complexity can be seen clearly laid out (and maybe mitigated with refactoring).
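As a rough sketch of that split (continuing the hypothetical ReportGenerator from above, not anyone's real design): the subclassing interface becomes its own explicit type, and the former superclass just delegates to it.

public interface IReportBody      // the former "subclassing interface", now explicit
{
    string Body();
}

public sealed class ReportGenerator
{
    private readonly IReportBody _body;
    public ReportGenerator(IReportBody body) => _body = body;

    // What was a template method is now plain, visible delegation.
    public string Generate() => "=== Report ===\n" + _body.Body();
}

public sealed class SalesBody : IReportBody
{
    public string Body() => "sales figures...";
}

// The verbosity shows up at the call site (or in a tiny factory):
// var text = new ReportGenerator(new SalesBody()).Generate();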
There is a trade-off in verbosity to be sure, especially at the call site where you might have to explicitly compose objects, but considering the system complexity as a whole, I think implementation-inheritance is rarely worth it when composition and a tiny factory function provide the same external benefit without the headache.
These are powerful tools, if used with discipline. But especially in application code interfaces change often and are rarely well-documented. It seems inevitable that if the tool is made available, it will eventually be used to get around some design problem that would have required a more in-depth refactor otherwise -- a refactor more costly in the short-term but resulting in more maintainable code.
That is: an instance of a subclass calls a method defined on a parent class, which in turn may call a method that's been overridden by the subclass (or even another sub-subclass in the hierarchy) and that one in turn may call another parent method, and so on. It can easily become a pinball of calls around the hierarchy.
Add to that the fact that "objects" have state, and each class in the hierarchy may add more state and modify state declared on its parents. A perfect combinatorial explosion of state and control-flow complexity.
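For a feel of what that looks like (a contrived C# sketch, not from any real codebase): the call starts in the parent, bounces down into the child, and comes back up into the parent, touching state declared at both levels along the way.

public class Base
{
    protected int _count;                     // state declared on the parent

    public void Run()                         // client code calls the parent...
    {
        _count++;
        Step();                               // ...which bounces down into the child...
    }

    protected virtual void Step() { }

    protected void Finish() => _count = 0;    // ...which can bounce back up again
}

public class Derived : Base
{
    private bool _done;                       // the child adds more state

    protected override void Step()
    {
        _done = _count > 3;                   // reads state declared on the parent
        if (_done) Finish();                  // and calls back up the hierarchy
    }
}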
I've seen this scenario way too many times in projects, and the worst thing is: many developers think it's fine... and are even proud of navigating such a mess. Heck, many popular "frameworks" encourage this.
Basically: every time you modify a class, you must review the inner implementation of all other classes in the hierarchy, and call paths to ensure your change is safe. That's a horrendous way to write software, against the most basic principles of modularity and low coupling.