As a life-long developer in OOP languages (C++, Java, C#, among others) I still think OOP is quite good when used with discipline. And it pains me that there is so much hate towards it nowadays, rooted in misunderstanding.
Most often, novice programmers abuse inheritance in an improper attempt to avoid duplicate code, and write themselves into a horrible spaghetti of dependencies. So having a good base or design beforehand helps a lot. But building the code out of logical units with fenced responsibilities is, in my opinion, a good way to structure code.
Currently I’m doing a (hobby) project in Rust to get a feel for it. And I have a hard time wrapping my mind around some design choices in the language that would have been very easily solved with a more OOP-like structure, without sacrificing the safety guarantees. But I think they’ve deliberately avoided going in that direction. Of course, my understanding of Rust is far from complete, so I probably missed some nuance… But still I wonder. It is a good learning experience though, a new way to look at things.
The article was not very readable on mobile for me but the examples seemed a bit contrived…
I think a lot of people conflate OOP and inheritance to mean the same thing. And inheritance is what should get the bad rap. It does not solve any problem I have seen any better than other language features (in particular interfaces/traits can solve a lot of the same problems) but inheritance causes far more problems overall.
But building the code out of logical units with fenced responsibilities is, in my opinion, a good way to structure code.
This is encapsulation, which is one of the better ideas from OOP languages. Though also not unique to them.
And I have a hard time wrapping my mind around some design choices in the language that would have been very easily solved with a more OOP-like structure.
What design choices would those be? And how would they better fit into an OOP-like structure? Note that Rust is not anti-OOP - it uses OOP techniques a lot throughout its code base. It just lacks inheritance and replaces it with other, IMO better, features.
Curious to hear what in Rust could be more easily solved with OOP! I think one reason for Rust not using OOP is that they want to minimize dynamic dispatch and keep it explicit where it happens, because it’s a language that gives you very fine-grained control of resource usage, kinda similar to how you have to be explicit about copying for most types. Most trait calls are static dispatch unless you have a Box::<dyn SomeTrait>.
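To make that concrete, here is a minimal sketch (the Shape trait and types are made up for illustration): the generic function is monomorphized per concrete type, while the dyn version goes through a vtable at runtime.

```rust
// Made-up Shape trait to illustrate static vs dynamic dispatch.
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

// Static dispatch: the compiler generates a separate copy per concrete type.
fn print_area_static<S: Shape>(shape: &S) {
    println!("{}", shape.area());
}

// Dynamic dispatch: one function, the call to `area` goes through a vtable.
fn print_area_dyn(shape: &dyn Shape) {
    println!("{}", shape.area());
}

fn main() {
    let c = Circle { r: 1.0 };
    print_area_static(&c);

    let boxed: Box<dyn Shape> = Box::new(Circle { r: 2.0 });
    print_area_dyn(boxed.as_ref());
}
```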
IMO Rust does use OOP styles all over the place. OOP does not mean dynamic dispatch or inheritance - that is just what older popular languages used. Note that static dispatch and monomorphization (where the compiler generates a copy of the method for each type at compile time rather than using a runtime lookup) give you a lot of the benefits of dynamic dispatch at the cost of binary size, while avoiding runtime lookups and keeping performance.
And other aspects of OOP, like encapsulation, data abstraction and polymorphism, are easily achievable in Rust and often used. Just look at any object from the std library - they are all essentially written in an OOP style. Such as Vec or File - hidden internal state exposed through traits, so you can swap them with other parts where that makes sense. The only thing it really lacks - like Go (which claims to be an OOP-style language) - is inheritance.
And the only reason Rust is not seen as, or does not describe itself as, an OOP language is because it does not force the OOP style on you. Instead it lets you program in a more functional or procedural style if you want to. You can pick the best methodology to solve the problem at hand rather than trying to fit everything into a single style.
But that does not make it bad at OOP style.
I’m using “OOP” more in the sense that is described in the article, but that is a fair perspective on rust and OOP. It is a term with a lot of different interpretations after all.
OOP is great, and can be much simpler than what you’ve seen.
It’s Java culture that’s terrible. The word “Factory” is a code smell all on its own.
Just like any tool, don’t overuse it. Don’t force extra layers of abstraction.
It’s enterprise design that sucks, often it’s written in Java - Hello World
I’ve realized that Factories are actually kind of fine, in particular when contextualized as the equivalent of partial application from the functional world.
I have never seen them used well. I expect there IS some use case out there where it makes sense but I haven’t seen it yet. So many times I’ve seen factories that can only return one type. So why did you use a factory? And a factory that returns more than one type is 50/50 to be scary.
Yeah, I went through the whole shape examples thing in school. The OOP I was taught in school was bullshit.
Make it simpler. Organizing things into classes is absolutely fine. Seven layers of abstraction is typically not fine.
Consider the following: You have a class A that has a few dependencies it needs. The dependencies B and C never change, but D will generally be different for each time the class needs to be used. You also happen to be using dependency injection in this case. You could either:
- Inject the dependencies B and C for any call site where you need an instance of A and have a given D, or
- Create an AFactory, which depends on B and C, having a method create with a parameter D returning A, and then inject that for all call sites where you have a given D.
This is a stripped example, but one I personally have both seen and productively used frequently at work.
In this case the AFactory could practically be renamed PartialA and be functionally the same thing.
You could also imagine a factory that returns different implementations of a given interface based on either static (B and C in the previous example) or dynamic dependencies (D in the previous example).
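To sketch that idea (in Rust, to match the rest of the thread; A, B, C and D are hypothetical placeholders): the factory is constructed once with the fixed dependencies, and only the varying one is supplied per call, which is what makes it behave like a partially applied constructor.

```rust
// Hypothetical placeholder types for the example above.
struct B;
struct C;
struct D(u32);

struct A {
    d: D,
    // ...fields derived from B and C would live here too
}

// The factory captures the fixed dependencies B and C up front...
struct AFactory {
    b: B,
    c: C,
}

impl AFactory {
    fn new(b: B, c: C) -> Self {
        AFactory { b, c }
    }

    // ...so call sites only need to supply the varying dependency D.
    fn create(&self, d: D) -> A {
        let _ = (&self.b, &self.c); // a real A would also be built using b and c
        A { d }
    }
}

fn main() {
    let factory = AFactory::new(B, C); // injected once, where B and C are known
    let a1 = factory.create(D(1));
    let a2 = factory.create(D(2));
    let _ = (a1, a2);
}
```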
Sounds easy to simplify:
Use one of: constructor A(d), function a(d), or method d.a() to construct A’s. B and C never change, so I invoke YAGNI and hardcode them in this one and only place, abstracting them away entirely.
No factories, no dependency injection frameworks.
Now B and C cannot be replaced for the purposes of testing the component in isolation, though. The hardcoded dependency just increased the testing complexity by a factor of B * C.
That’s moving the goal posts to “not static”
IMO factory functions are totally fine – I hesitate to even give them a special name b/c functions that can return an object are not special.
However I think good use cases for Factory classes (and long-lived stateful instances of) are scarce, often being better served using other constructs.
I always call my little helper higher order functions (intended to be partially applied) factories :)
Why is the word factory a code smell?
In this post I use the word “OOP” to mean programming in statically-typed language
So Smalltalk is not object-oriented. Someone tell Alan Kay.
OOP definitely doesn’t get to claim static types for only itself either. Fuck that.
They don’t only say static types. They add classes, inheritance, subtyping, and virtual calls. Mind you, the difference between the last 3 is quite subtle.
So, since I’ve started nit-picking, Self is also OO and has prototype-based inheritance (as does javascript, but I’m not sure I’d want to defend the claim that javascript is an OO language).
Mainstream statically-typed OOP allows straightforward backwards compatible evolution of types, while keeping them easy to compose. I consider this to be one of the killer features of mainstream statically-typed OOP, and I believe it is an essential feature for programming with many people, over long periods of time.
I 100% agree with this. The strength of OOP comes with maintaining large programs over a long time, usually with ever-changing requirements.
This is something that’s difficult to demonstrate with small toy examples, which gives OOP languages an unfair disadvantage. Yeah, it might be slower. Yeah, there might be more boilerplate to write. But how do the alternative solutions compare with regards to maintainability?
The main problem with OOP is that maintainability doesn’t necessarily come naturally. It requires lots of experience and discipline to get it right. It’s easy to paint yourself in the corner if you don’t know what you’re doing.
But how do the alternative solutions compare with regards to maintainability?
Which alternative solutions are you thinking of, and have you tried them?
Rust has been mentioned several times in the thread already, but Go also prohibits “standard” OOP in the sense that structs don’t have inheritance. So have you used either Rust or Go on a large project?
This is something often repeated by OOP people, but it doesn’t actually hold up in practice. Maintainability comes from true separation of concerns, which OOP is really bad at because it encourages implicit, invisible, stateful manipulation across disparate parts of a codebase.
I work on a Haskell codebase in production of half a million lines of Haskell supported by 11 developers including myself, and the codebase is rapidly expanding with new features. This would be incredibly difficult in an OOP language. It’s very challenging to read unfamiliar code in an OOP language and quickly understand what it’s doing; there’s so much implicit behavior that you have to track down before any of it makes sense. It is far, far easier to reason about a program when the bulk of it is comprised of pure functions taking in some input and producing some output. There’s a reason that pure functions are the textbook example of testable code, and that reason is because they are much easier to understand. Code that’s easier to understand is code that’s easier to maintain.
This is something that’s difficult to demonstrate with small toy examples, which gives OOP languages an unfair disadvantage.
This is well said. It’s such a frustrating meme when I see people talk about how many lines a “hello world” application needs as if that’s the benchmark of what makes a language good.
Thanks. I hate it.
I’m going to spend this thread agreeing with Rust fans, and I hate that the most. (I love you all, really, but you’re fun to try to wind up.)
OOP sucks because Inheritance sucks. This person’s brief non-shitty experience doesn’t change that.
Languages built around OOP suck where they use inheritance instead of interfaces.
Inheritance isn’t always a terrible choice. But it is a terrible choice often enough that we need to warn the next generation.
If we’re looking at it from a Rust angle anyway, I think there’s a second reason that OOP often becomes messy, but less so in Rust: Unlimited interior mutability. Rust’s borrow checker may be annoying at times, but it forces you to think about ownership and prevents you from stuffing statefulness where it shouldn’t be.
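A tiny illustration of what the borrow checker rules out (the commented-out line is the one the compiler rejects):

```rust
fn main() {
    let mut scores = vec![1, 2, 3];

    let first = &scores[0]; // shared, read-only borrow of `scores`

    // scores.push(4);      // error[E0502]: cannot borrow `scores` as mutable
                            // because it is also borrowed as immutable

    println!("first = {first}");

    scores.push(4);         // fine: the shared borrow has ended by now
    println!("{:?}", scores);
}
```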
To be fair, that’s an issue in almost every imperative language and even some functional languages. Rust, C, and C++ are the only imperative languages I know of that make a serious effort to restrict mutability.
How do C and C++ try to restrict mutability?
const
They don’t do it well, but an attempt was made.
I also love this. I don’t know why, but GC languages feel so incomplete to me. I like to know where the data I have actually is. In C/C++ I know if I am passing data or a pointer; in Rust I know if it’s a reference or the data itself. Idk, it allows me to think better.
Rust’s borrow checker may be annoying at times, but it forces you to think about ownership and prevents you from stuffing statefulness where it shouldn’t be.
That does sound pretty cool.
Is there any reason an OO language couldn’t have a borrow checker? Sure, it would be wildly more complex to implement but I don’t see how it’s impossible.
OO languages typically use a garbage collector. The main purpose of the borrow checker is to resolve the ambiguity of who is responsible for deallocating the data.
In GC languages, there’s usually no such ambiguity. The GC takes care of it.
usually
I would argue that Rust has a very strong OO feature set (it just lacks inheritance, which is the worst OO feature IMO). It is not seen as an OOP language because it also has very strong functional and procedural feature sets and does not favor one over the others, letting you code in whatever style best fits the problem you have.
So I would not say OO plus a borrow checker is impossible or even hard. What makes less sense is combining a GC with a borrow checker, though there are some use cases for having a GC for a subset of a program.
Inheritance isn’t always a terrible choice. But it is a terrible choice often enough that we need to warn the next generation.
But also, even when it is not a terrible choice for a problem, it is often not the best choice - or at most only equally as good as other options that work in vastly more cases.
It’s ultra rare that I’ve successfully inherited a concrete class, rarely an abstract one, and 99% of the time I just impl an interface.
That’s a take I can agree with. My experience is that composition solves much of what inheritance is intended to solve and ends up being a more maintainable solution a majority of the time.
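A made-up sketch of what that looks like in practice (loosely echoing the logger example from the article, with invented names): the wrapper has-a Logger and delegates to it, rather than being-a Logger.

```rust
struct Logger;

impl Logger {
    fn log(&self, msg: &str) {
        println!("{msg}");
    }
}

// Composition: PrefixedLogger *has* a Logger and delegates to it,
// rather than inheriting from it.
struct PrefixedLogger {
    inner: Logger,
    prefix: &'static str,
}

impl PrefixedLogger {
    fn log(&self, msg: &str) {
        self.inner.log(&format!("[{}] {}", self.prefix, msg));
    }
}

fn main() {
    let logger = PrefixedLogger { inner: Logger, prefix: "app" };
    logger.log("composition over inheritance");
}
```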
Inheritance, which allows classes to reuse state and methods of other classes.
This is the absolute worst feature of typical OOP languages. I don’t know of any case where it is the best way to solve a problem, and it very often becomes a nightmare if you don’t get the exact hierarchy of types right. It becomes a nightmare once you have something that does not quite fit the assumptions you originally made when you started out. Which happens all the time.
The examples given with the logger can be solved just as well if not better with interfaces/traits with solutions that don’t have that problem.
Composition is far better and immensely more flexible than inheritance. Extracting duplicate code into helper classes or static functions is a good option.
Conformance to interfaces or protocols with default implementations is a great alternative as well.
I like OOP more than other styles; it’s just often badly done. Complex inheritance, huge classes that do too much, and overuse of factories and similar patterns can ruin it.
I do not agree. Very often, when using libraries for example, you need some extra custom handling on types and data. So the easy way is to inherit and extend to a custom type while keeping the original functionality intact. The alternative is to place the new functionality in some unrelated place or create non-obviously related methods somewhere else, which makes everything unnecessarily complex.
And I think the trait system (in Rust for example) creates so much duplicate or boilerplate code. And in Rust this is then solved by an even more complex macro system. But my Rust knowledge might just not be mature enough, so feel free to correct me if I’m wrong…
So the easy way is to inherit and extend to a custom type while keeping the original functionality intact.
You can do this with traits and interfaces in Rust/Go. You can add any methods you want onto existing types, which IMO is better. No need to subclass: you just create a new trait, implement it on the type you want, and you have new behavior attached to that type without needing to convert the existing thing you got from somewhere into a new type.
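For example, a minimal extension-trait sketch (Shoutable is a made-up name for illustration): the existing type is untouched, and the new behavior simply becomes available wherever the trait is in scope.

```rust
// Made-up trait attaching new behavior to an existing std type (`str`).
trait Shoutable {
    fn shout(&self) -> String;
}

impl Shoutable for str {
    fn shout(&self) -> String {
        format!("{}!", self.to_uppercase())
    }
}

fn main() {
    // No subclassing, no wrapping into a new type: the method is just
    // available on the existing type wherever `Shoutable` is in scope.
    println!("{}", "hello from an extension trait".shout());
}
```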
And I think the trait system (in Rust for example) creates so much duplicate or boilerplate code.
It really does not. You can have implementations on traits that don’t need to be re-implemented on every type - like Iterator - it provides 76 methods of which you need to implement only 1 for new types. You can implement the others for custom behavior, which is great for specialization (aka using a more efficient implementation for types that have more info, like calling skip on an array, which knows all its elements, vs the default which needs to call next n times).
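For instance, a sketch of a custom iterator that hand-writes only next and gets the provided methods for free (Countdown is a made-up type):

```rust
// Made-up iterator: only `next` is written by hand.
struct Countdown(u32);

impl Iterator for Countdown {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        if self.0 == 0 {
            None
        } else {
            self.0 -= 1;
            Some(self.0)
        }
    }
}

fn main() {
    // `skip`, `map` and `collect` are default/provided methods of the trait.
    let v: Vec<u32> = Countdown(5).skip(1).map(|n| n * 10).collect();
    println!("{:?}", v); // [30, 20, 10, 0]
}
```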
But it creates a vastly more flexible system. Take a very basic example - reading from and writing to something. How do you model that with inheritance? Basically you cannot, not without painting yourself into a corner eventually. For instance, you can read/write to a file, to a network socket, to stdin/stdout, but each of these is very different. Stdin for instance cannot be written to and Stdout cannot be read from. You might also want a buffered reader/writer that wraps these types, making read operations cheaper.
You cannot put these into an inheritance tree. Everything would need to inherit from the same generic base that can both read and write, and probably also close. But then for some types you need to implement methods that don’t make sense - and do what? Nothing when called? Or throw an exception? It is a poor way to model this behavior.
Read and Write are orthogonal ideas - they have nothing to do with each other except they might be useful on some of the same types. With interfaces/traits you are free to separate these and implement them on whichever types make sense for them.
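A small sketch of that separation using the real std::io traits: the copy helper asks only for Read on one side and Write on the other, and any type that implements just one of them still fits.

```rust
use std::io::{self, Read, Write};

// Generic over *capabilities*, not over a class hierarchy: R only needs to
// be readable, W only needs to be writable.
fn copy_all<R: Read, W: Write>(reader: &mut R, writer: &mut W) -> io::Result<u64> {
    io::copy(reader, writer)
}

fn main() -> io::Result<()> {
    // An in-memory byte slice as the reader and stdout as the writer;
    // a File or TcpStream would slot into either side with no common base type.
    let mut input: &[u8] = b"read and write are separate traits\n";
    let mut out = io::stdout();
    copy_all(&mut input, &mut out)?;
    Ok(())
}
```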
I have not yet seen a problem that is solvable with inheritance that cannot be solved better with some other language feature. It sort of works for some things, but other solutions also work at least equally well. Which leaves it in a state of: what is the point of it, if it is not solving things better than the other solutions we have these days?
Yeah, inheritance isn’t always the best solution. But even Java, the much-maligned example for this, doesn’t use it for the I/O example you give.
My experience, as a 25-year developer in mostly OOP languages and frameworks, is that people who attack OOP usually don’t really understand it or its usefulness.
And to be fair as it relates to attacking languages or language concepts, I attacked JavaScript without fully understanding it, many years ago. I now understand it more than I ever have in the past and it has some good qualities.
So these days it’s no longer the languages or language concepts I take issue with (though I’ll joke about JavaScript from time to time). It’s the developers who misuse or misunderstand the languages or concepts that irk me. And especially the developers who think being lazy is a virtue.
My issue isn’t any particular language but the advocates of various languages treating their language as the best hammer for every nail.
Damn good point! I feel the same way about CEOs as of late and how they think AI is going to solve everything, even problems they invent just to say they’re using AI.