I get it, but in general I don't get the OO hate.
It's all about the problem domain imo. I can't imagine building something like a graphics framework without some subtyping.
Unfortunately, people often use crap examples for OO. The worst is probably employee, where employee and contractor are subtypes of worker, or some other chicanery like that.
Of course in the real world a person can be both employee and contractor at the same time, can flit between those roles and many others, can temporarily park a role (e.g. sabbatical) and many other permutations, all while maintaining history and even allowing for corrections of said history.
It would be hard to find any domain less suited to OO than HR records. I think these terrible examples are a primary reason for some people believing that OO is useless or worse than useless.
Most code bases don't need dynamically loaded objects designed with interfaces that can be swapped out. In fact, that functionality is almost never useful. But that's how most people wrote Java code.
It was terrible and taught me to avoid applying for jobs that used Java.
I like OOP and often use it. But mostly just as an encapsulation of functionality, and I never use interfaces or the like.
To the point that there are people who will assert that the GoF book, published before Java was invented, actually contains Java code.
Using those patterns was so rare that the GoF thought they needed to write a book to teach people how to use them when they eventually encountered them.
But after the book was published, those patterns became "advanced programming that is worth testing for in job interviews", and people started to code for their CVs. The same happened briefly with refactoring, and for much longer with unit tests and the other XP activities (like TDD).
At the same time, Java's popularity was exploding in enterprise software.
Again, Smalltalk did it first, and is actually one of the languages in the famous GoF book, used to create all the OOP patterns people complain about; the other being C++.
I think people are still too ready to use massive, hulking frameworks for every little thing, of course, but the worst of the 'enterprise' stuff seems to have been banished.
Always makes me think of that AbstractProxyFactorySomething or similar that I saw in Keycloak, for when you want to implement your own password quality criteria. When you step back a bit and think about what you actually want to have, you realize that all you actually want is a function that takes a string as input and returns a boolean, depending on whether the password is strong enough or fulfills all criteria. Maybe you want to output a list of unmet criteria, if you want to make it complex. But no, it's AbstractProxyFactorySomething.
Here is a tiny interface that will do what you need:
@FunctionalInterface
public interface IPasswordChecker
{
    boolean isValid(String password);
}
Now you can trivially declare a lambda that implements the interface. Example:
final IPasswordChecker passwordChecker = (String password) -> password.length() >= 16;
> that was actively encouraged by the design of the language.
Java hasn't changed that much since the "hellscape" 00s. Is it better now? Or what is specific to the language that encourages "the mess of DAOs and Factories"? You can make all of those same mistakes in Python, C# or C++. I have used Java for about 15 years now and I have never written any of that junky enterprise crap with a million layers of OO.
> I never use interfaces or the like.
This is the first time I have heard any disdain towards interfaces. What is there not to like?
Interfaces for everything, abstract classes “just in case,” dependency injection frameworks that exist mainly to manage all the interfaces. Java (and often Enterprise C#) is all scaffolding built to appease the compiler and the ideology of “extensibility” before there’s any actual complexity to extend.
You can write clean, functional, concise Java today, especially with records, pattern matching, and lambdas, but the culture around the language was forged in a time when verbosity was king.
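For anyone who hasn't looked at Java in a while, here is a minimal sketch of what that "modern" style can look like, assuming Java 21+ (the Shape/Circle/Rect names are invented for illustration):

// Sealed interface + records + exhaustive switch pattern matching (Java 21+).
sealed interface Shape permits Circle, Rect {}
record Circle(double radius) implements Shape {}
record Rect(double width, double height) implements Shape {}

class ShapeDemo {
    static double area(Shape s) {
        return switch (s) {                       // exhaustive over the sealed type: no default needed
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rect r   -> r.width() * r.height();
        };
    }

    public static void main(String[] args) {
        java.util.List.of(new Circle(1.0), new Rect(2.0, 3.0))
            .forEach(shape -> System.out.println(area(shape)));
    }
}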
Then my templated impl header can be very heavy without killing my build times since only the interface base class is #included.
Not sure if this is as common in Java.
C++ is a hell of a language.
t = new T(); // T is a template parameter class
C++ uses reified generics, which are heavy on compile time but allow the above.
> C++ uses reified generics
I was a C++ programmer for many years, but I never heard this claim. I asked Google AI and it disagrees.
> does c++ have reified generics?
> C++ templates do not provide reified generics in the same sense as languages like C# or Java (to a limited extent). Reified generics mean that the type information of generic parameters is available and accessible at runtime.
In any event, you have to use weird (I think “unsafe”) reflection tricks to get the type info back at runtime in Java. To the point where it makes you think it’s not supported by the language design but rather a clever accident that someone figured out how to abuse.
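To make that concrete, here is a small sketch (the create helpers are hypothetical): with erasure there is no T left at runtime, so `new T()` won't compile, and the usual workarounds are a Class<T> token plus reflection, or simply passing a factory.

import java.util.function.Supplier;

class ErasureDemo {
    // static <T> T create() { return new T(); }   // does not compile: T is erased at runtime

    // Workaround 1: pass the type token explicitly and reflect on it.
    static <T> T create(Class<T> type) throws ReflectiveOperationException {
        return type.getDeclaredConstructor().newInstance();
    }

    // Workaround 2: skip reflection and pass a factory instead.
    static <T> T create(Supplier<T> factory) {
        return factory.get();
    }

    public static void main(String[] args) throws Exception {
        StringBuilder viaReflection = create(StringBuilder.class);
        StringBuilder viaFactory = create(StringBuilder::new);
        System.out.println(viaReflection.getClass() + " / " + viaFactory.getClass());
    }
}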
https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...
Has anyone ever actually done this?
But if it were as convoluted to use as it is in Java, I wouldn't. And also, it's not enterprise CRUD. Enterprise CRUD resists complex architectures like nothing else.
Perhaps I'm not following, but dynamically loaded objects are the core feature of shared libraries. Among its purposes, it allows code to be reused and even updated without having to recompile the project. That's pretty useful.
Interfaces are also very important. They allow your components to be testable and mockable. You cannot have quality software without these basic testing techniques. Also, interfaces are extremely important to allow your components to be easily replaced even at runtime.
Perhaps you haven't had the opportunity to experience the advantages of using these techniques, or weren't mindful of when you benefited from them. We tend to remember the bad parts and assume the good parts are a given. But personal tastes don't refute the value and usefulness of features you never learned to appreciate.
> Perhaps I'm not following, but dynamically loaded objects are the core feature of shared libraries. Among its purposes, it allows code to be reused and even updated without having to recompile the project. That's pretty useful.
> Interfaces are also very important. They allow your components to be testable and mockable. You cannot have quality software without these basic testing techniques. Also, interfaces are extremely important to allow your components to be easily replaced even at runtime.
I don't think GP was saying that dynamically loaded objects are not needed, or that interfaces are not needed.
I read it more as "Dynamically loaded interfaces that can be swapped out are not needed".
The share of all software that actually benefits from this is extremely small. Most web-style software with stateless request/response is better architected for containers and rolling deployments. Most businesses are also completely fine with a few minutes of downtime here and there. For runtime-replacement to be valuable, you need both statefulness and high SLA (99.999+%) requirements.
To be fair, there is indeed a subset of software that is both stateful and with high SLA requirements, where these techniques are useful, so it's good to know about them for those rare cases. There is some pretty compelling software underneath those Java EE servers for the few use-cases that really need them.
But those use-cases are rare.
Of course you can, wtf?
Mocks are often the reason tests are green while the app doesn't work :)
Then explain what your alternative to unit and integration tests is.
> Mocks are often the reason tests are green while the app doesn't work :)
I don't think that's a valid assumption. Tests just verify the system under test, and test doubles are there only to provide inputs in a way that isolates your system under test. If your tests either leave out the invariants behind bugs and regressions or have invalid/insufficient inputs, the problem lies in how you created the tests, not in the concept of a mock.
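For what it's worth, a test double doesn't even need a mocking framework. Here is a hand-rolled sketch (SignupService is invented for the example, and it reuses the IPasswordChecker interface from earlier in the thread):

@FunctionalInterface
interface IPasswordChecker { boolean isValid(String password); }

class SignupService {
    private final IPasswordChecker checker;
    SignupService(IPasswordChecker checker) { this.checker = checker; }
    boolean register(String user, String password) {
        return checker.isValid(password);   // system under test, trimmed down
    }
}

class SignupServiceTest {
    public static void main(String[] args) {
        // The double pins the collaborator's behaviour so only SignupService
        // is exercised, not the real password rules.
        IPasswordChecker alwaysWeak = pw -> false;
        SignupService service = new SignupService(alwaysWeak);

        if (service.register("alice", "whatever")) {
            throw new AssertionError("weak password should have been rejected");
        }
        System.out.println("ok: rejection path verified in isolation");
    }
}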
A workman and his tools.
You really haven't argued anything, so ending on a "you must be personally blind" jab just looks dumb.
Java I think gets attacked this way because a lot of developers, especially in the early 2000s, were entering the industry only familiar with scripting languages they'd used for personal hobby projects, and then Java was the first time they encountered languages and projects that involved hundreds of developers. Scripting codebases didn't define interfaces or types for anything even though that limits your project scalability, unit testing was often kinda just missing or very superficial, and there was an ambient assumption that all dependencies are open source and last forever whilst the apps themselves are throwaway.
The Java ecosystem quickly evolved into the enterprise server space and came to make very different assumptions, like:
• Projects last a long time, may churn through thousands of developers over their lifetimes and are used in big mission critical use cases.
• Therefore it's better to impose some rules up front and benefit from the discipline later.
• Dependencies are rare things that create supplier risks, you purchase them at least some of the time, they exist in a competitive market, and they can be transient, e.g. your MQ vendor may go under or be outcompeted by a better one. In turn that means standardized interfaces are useful.
So the Java community focused on standardizing interfaces to big chunky dependencies like relational databases, message queuing engines, app servers and ORMs, whereas the scripting language communities just said YOLO and anyway why would you ever want more than MySQL?
Very different sets of assumptions lead to different styles of coding. And yes it means Java can seem more abstract. You don't send queries to a PostgreSQL or MySQL object, you send it to an abstract Connection which represents standardized functionality, then if you want to use DB specific features you can unwrap it to a vendor specific interface. It makes things easier to port.
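That is essentially JDBC: your code talks to the standard java.sql.Connection and only unwraps to a vendor interface when it genuinely needs vendor features. A rough sketch (the URL and credentials are placeholders; the PGConnection part is specific to the PostgreSQL driver and left commented out):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

class JdbcSketch {
    public static void main(String[] args) throws Exception {
        // Standard API: nothing below names a particular database vendor.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/appdb", "app", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {

            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }

            // Vendor escape hatch, only if you really need DB-specific features:
            // if (conn.isWrapperFor(org.postgresql.PGConnection.class)) {
            //     var pg = conn.unwrap(org.postgresql.PGConnection.class);
            //     // ... PostgreSQL-only functionality ...
            // }
        }
    }
}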
I suspect many OOP haters have experienced what I'm currently experiencing: stateful objects for handling calculations that should be stateless, a confusing bag of methods that are sometimes hidden behind getters so you can't even easily tell where the computation is happening, etc.
And then there's a reason they're teaching the "functional core, imperative shell" pattern.
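A tiny sketch of what that pattern looks like in Java (the checkout/tax names are made up): keep the calculation a pure, stateless function and push the I/O to a thin shell around it.

import java.util.List;

class Checkout {
    // Functional core: no state, no I/O, trivial to unit test.
    static double total(List<Double> prices, double taxRate) {
        double subtotal = prices.stream().mapToDouble(Double::doubleValue).sum();
        return subtotal * (1.0 + taxRate);
    }

    // Imperative shell: gathers input, calls the core, performs the output.
    public static void main(String[] args) {
        List<Double> prices = List.of(9.99, 4.50, 20.00);
        System.out.printf("Total: %.2f%n", total(prices, 0.08));
    }
}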
It’s certainly possible to write good code in Java but it does still lend itself to abuse by the kind of person that treated Design Patterns as a Bible.
I have a vague idea of what the Bible says, but I have my favorite parts that I sometimes get loud about. Specifically, please think really hard before making a Singleton, and then don't do it.
Sorry to learn, hope you don't get scar tissue from it.
Most programs in my experience are about manipulating records: retrieve something from a database, manipulate it a bit (change values), update it back.
Here OOP does a good job - you create the data structures you need to manipulate, but expose only the exact interface needed to effect changes in a way that respects the domain rules (a small sketch follows below).
I do get that this isn't every domain out there and _no one size fits all_, but I don't get the OP's complaints.
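As a minimal illustration of that idea (Account and its no-overdraft rule are invented for the example): the object owns the record-like data and only exposes operations that keep the domain rules intact.

class Account {
    private long balanceCents;   // the record-like state being manipulated

    Account(long openingBalanceCents) {
        this.balanceCents = openingBalanceCents;
    }

    // The only way to change the balance is through operations
    // that enforce the domain rule (here: no overdrafts).
    void withdraw(long amountCents) {
        if (amountCents <= 0 || amountCents > balanceCents) {
            throw new IllegalArgumentException("withdrawal violates domain rules");
        }
        balanceCents -= amountCents;
    }

    long balanceCents() { return balanceCents; }

    public static void main(String[] args) {
        Account acc = new Account(10_000);
        acc.withdraw(2_500);
        System.out.println(acc.balanceCents());   // 7500
    }
}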
I currently think that most of the anger about OOP is either related to bad practices (overuse) or to a lack of knowledge from newcomers. OOP is a tool like any other and can be used wrong.
Exactly. This is the way to think about it, imo. One of those places is GUI frameworks, I think, and there I am fine doing OOP, because I don't have a better idea how to get things done, and most GUI frameworks/toolkits/whatever are designed in an OOP way anyway. Other places I just try to go functional.
OOP is a collection of ideas about how to write code. We should use those ideas when they are useful and ignore them when they are not.
But many people don't want to put in the critical thinking required to do that, so instead they hide behind the shield of "SOLID principles" and "best practice" to justify their bad code (not knocking the SOLID principles, it's just that people use them to justify making things object oriented when they shouldn't be).
As with everything, there isn't a golden rule to follow. Sometimes OO makes sense, sometimes it doesn't. I rarely use it, or abstractions in general, but there are some things where it's just the right fit.
This, this, this. So much this.
Back when I was in uni, Sun had donated basically an entire lab of those computer terminals that you used to sign in to with a smart card (I forgot the name). In exchange, the uni agreed to teach all classes related to programming in Java, and to have the professors certify in Java (never mind the fact that nobody ever used that laboratory because the lab techs had no idea how to work with those terminals).
As a result of this, every class, from algorithms to software architecture, felt like a Java cult indoctrination. One of the professors actually said C was dead because Java was clearly superior.
At our uni (around 1998/99) all professors said that, except the Haskell teacher, who indeed called Java a mistake (but C also).
Tale as old as time.
> Sounds like a problem with poor code rather than something unique to OOP.
And yeah, OO may lean a bit towards more indirection, but it definitely doesn't force you to write code like that. If you go through too many levels, that's entirely on the developer.
> I can't imagine building something like a graphics framework without some subtyping.
Let me introduce you to Fudgets, an I/O and GUI framework for Haskell: https://en.wikipedia.org/wiki/Fudgets
They use higher order types to implement subtyping as a library, with combinators. For example, you can take your fudget that does not (fully) implement some functionality, wrap it into another one that does (or knows how to) implement it, and have a combined fudget that fully implements what you need. Much like parsing combinators.
It's the misuse of OO constructs that gives it a bad name, almost always that is inheritance being overused/misused. Encapsulation and modularity are important for larger code bases, and polymorphism is useful for making code simpler, smaller and more understandable.
Maybe the extra long names in Java don't help either, along with the overuse/forced use of patterns? At least it's not Hungarian notation.
A sample: pandas' loc, iloc, etc. Or Haskell's scanl1. Or Scheme's cdr and car. (I know - most of the latter examples are common functions that you'll learn after a while, but still, reading them at first is terrible.)
My first contact with a modern OO language was C# after years of C++. And I remember how I thought it awkward that the codebase looked like everything was spelled out. Until I realized that it is easier to read, and that's the main quality of a codebase.
> CMMetadataFormatDescriptionCreateWithMetadataFormatDescriptionAndMetadataSpecifications(allocator:sourceDescription:metadataSpecifications:formatDescriptionOut:)
https://developer.apple.com/documentation/coremedia/cmmetada...
:)
Computers work on data. Every single software problem is a data problem. Learning to think about problems in a data oriented way will make you a better developer and will make many difficult problems easier to think about and to write software to solve.
In addition to that, data oriented software almost inherently runs faster because it uses the cache more efficiently.
The objects that fall out of data oriented development represent what is actually going on inside the application instead of how an observer would model it naively.
I really like data oriented development and I wish I had examples I could show, but they are all $employer’s.
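As a generic stand-in (nothing from any real codebase; Particle is invented), here is the layout difference the cache argument above is about: an object-per-element layout chases a pointer for every access, while a flat primitive array keeps the hot field contiguous.

class DataLayoutSketch {
    // Object-per-element layout: summing x dereferences one heap object per element.
    static class Particle {
        double x, y, z;
        Particle(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    // Flat layout: all x values sit next to each other in memory.
    static double sumX(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        java.util.List<Particle> objects = new java.util.ArrayList<>(n);
        double[] xs = new double[n];
        for (int i = 0; i < n; i++) {
            objects.add(new Particle(i, 0, 0));
            xs[i] = i;
        }

        double a = 0;
        for (Particle p : objects) a += p.x;   // pointer-chasing sum
        double b = sumX(xs);                   // contiguous sum

        System.out.println(a == b);            // same answer, different memory traffic
    }
}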
Even with non-obfuscated code, if you're working with a decompilation you don't get any of the accompanying code comments or documentation. The more abstractions are present, the harder it is to understand what's going on. And, the harder it is to figure out what code changes are needed to implement your desired feature.
C++ vtables are especially annoying. You can see the dispatch, but it's really hard to find the corresponding implementation from static analysis alone. If I had to choose between "no variable names" and "no vtables", I'd pick the latter.
> Everything is dispatched dynamically
Well, not everything, there is NS_DIRECT. The reason for that being that dynamic dispatch is expensive - you have to keep a lot of metadata about it in the heap for sometimes rarely-used messages. (It's not about CPU usage.)
I think people focus a lot on inheritance, but the core idea of OO is more the grouping of values and functions. Conceptually, you think about how methods transform the data you are manipulating, and that’s a useful way to think about programs.
This complexity doesn’t really disappear when you leave OO languages, actually. The way most complex OCaml programs are structured, with modules grouping one main type and the functions working on it, is in a lot of ways inspired by OO.
Encapsulation.
Which I think is misunderstood a lot, both by practitioners and critics.
Also, I dislike design pattern overuse and DDD done Uncle Bob style.
Also we can think of where OOP drives many teams to:
https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...
https://factoryfactoryfactory.net/
https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...
> https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition
This! Every time I see this project, I laugh out loud. The description reads:
> FizzBuzz Enterprise Edition is a no-nonsense implementation of FizzBuzz made by serious businessmen for serious business purposes.
I mean come on, these guys are serious!
While React technically uses some OOP, in practice it's a pretty non-OOP way to do UI. Same with e.g. ImGUI (C++), Clay (C). I suppose for the React case there's still an OOP thing called the DOM underneath, but that's pretty abstracted.
In practice most of the useful parts of OOP can be done with a "bag/record of functions". (Though not all. OCaml has some interesting stuff wrt. the FP+OOP combo which hasn't been done elsewhere, but that may just be because it wasn't ultimately all that useful.)
Function calls have state, in React. Think about that for a second! It totally breaks the most basic parts of programming theory taught on day one of any coding class. The resulting concepts map pretty closely:
• React function -> instantiate or access a previously instantiated object.
• useState -> define an object field
• Code inside the function: constructor logic
• Return value: effectively a getResult() style method
The difference is that the underlying stateful objects, implemented in OOP using inheritance (check out the Blink code), are covered up with the vdom diffing. It's a very complicated and indirect way to do a bunch of method calls on stateful objects.
The React model doesn't work for a lot of things. I just Googled [react editor component] and the first hit is https://primereact.org/editor/ which appears to be an ultra-thin wrapper around a library called Quill. Quill isn't a React component, it's a completely conventional OOP library. That's because modelling a rich text editor as a React component would be weird and awkward. The data structures used for the model aren't ideal for direct modification or exposure. You really need the encapsulation provided by objects with properties and methods.
Welcome to Node.js v24.10.0.
Type ".help" for more information.
> const fn = (x) => x + x
undefined
> typeof(fn)
'function'
> Object.getOwnPropertyNames(fn)
[ 'length', 'name' ]
> fn.name
'fn'
> fn.length
1
> Object.getPrototypeOf(fn)
[Function (anonymous)] Object
Unfortunately there were so many bad examples from the old Java "everything needs a dozen factories and thousands of interfaces" days that most people haven't seen the cases where it works well.
The keyword being "some".
Yes, there are those who can use OOP responsibly, but in my (fortunately short) experience with Enterprise Java, they are outnumbered by the cargo-cult dogma of architecture astronauts who advocate a "more is better" approach to abstraction and design patterns. That's how you end up with things like AbstractSingletonProxyFactoryBean.
The devs also wrote a write-up here about how they handle the desyncs in netcode [1].
[1] https://medium.com/project-slippi/fighting-desyncs-in-melee-...
The game used to be simple, both conceptually and codewise but obviously, it became more and more bloated the more developers touched it and the more bureaucracy was added. Now, it's a complete nightmare, and I bet it's also a nightmare for the developers too, considering how hard it is for them to fix even basic issues which have been in the game for like a decade at this point.
You have to have Factories and inheritance...
/s