In most fields, serious discourse requires addressing the existing literature. Software is no different. To consider how we should approach application architecture, in both code and concept, we first have to describe the present. And to describe the present, we have to address Object-Oriented Programming.
In the past decade, criticism of Object-Oriented Programming has become increasingly common. To a casual observer following discussion on Reddit or Twitter, it might appear that OOP has already been discarded. The rise of Rust, the dominance of the TypeScript and React ecosystem, the backlash against Clean Code and Robert Martin, and a broader turn toward modularity over ontology and reusability all suggest a marked shift.
Yet the evidence points in a different direction. In 2025, Object-Oriented Programming remains the prevailing architecture for most software systems.[1][2] Even within React, a framework built around function composition, object-oriented patterns continue to appear. In my own recent work, two of the last three React codebases employed OOP designs. On the backend, frameworks such as Spring, .NET, Laravel, Rails, and many Node ecosystems rely heavily on OOP conventions. The same is true in video game development. Far from disappearing, OOP continues to define the mainstream of software practice.
Many defenders of OOP assume that its critics have not worked with it extensively or do not appreciate its advantages. This is often true. People are inclined to criticize whatever approach dominates a field, even when competing approaches exhibit the same limitations. Yet dominance itself carries weight: the prevailing paradigm attracts the most investment, human effort, and iterative refinement - advantages that alternatives rarely receive. Even criticism of OOP gives the paradigm rhetorical space in which to sharpen itself.
For many developers, it is difficult to imagine a backend API without drawing on object-oriented concepts such as ORMs, inversion of control, root aggregates, polymorphic inheritance, classes, and abstract factories. Data architecture is largely conceived within an OOP frame of reference. Even in user-interface development, patterns such as MVC and MVVM presuppose some degree of object-orientation.
This level of popularity creates a feedback loop. Alternative approaches are not fairly evaluated because they lack the maturity that only widespread use can produce, and they remain underdeveloped as a result. The outcome is a kind of monoculture, particularly in business applications, which limits our perspective on the wider space of software architecture.
For credibility and framing, it is important that we demonstrate a clear understanding of OOP and the advantages it provides. Only by first establishing those advantages can we then examine the trade-offs they entail.
The natural analytical entrypoint of OOP is the class. Although classes concentrate nearly all of OOP’s defining features, their basic functionality is easy to gloss over. At their core, class objects resemble state machines: they couple behavior and data. “Pure data” without behavior is a struct, while “pure behavior” without data is a collection of functions. It is the union of the two that gives classes their meaning. The behavior of a class is varied by the state of its data.
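A minimal TypeScript sketch of this coupling (the `Door` example is mine, not drawn from any particular codebase):

```typescript
// A tiny state machine: the same method call yields different behavior
// depending on the state of the data it is coupled to.
class Door {
  private open = false;

  toggle(): void {
    this.open = !this.open;
  }

  describe(): string {
    return this.open ? "The door is open." : "The door is closed.";
  }
}
```

Strip either half and the construct degrades: without `open` it is a bag of functions; without `toggle` and `describe` it is a struct.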
This idea is powerful, and easy to take for granted. Even in languages that don’t provide classes directly, the same pattern emerges. Rust, for example, defines data through structs, but still provides a way to bind functions to those structs so they can be called in the familiar object-oriented style:
```rust
struct User {
    name: String,
}

impl User {
    fn greet(&self) {
        println!("Hello, {}!", self.name);
    }
}

// Rust allows you to call `greet` in two ways:
// 1. some_user.greet()
// 2. User::greet(&some_user)
```
The first form, `some_user.greet()`, is substantially more ergonomic than the second, `User::greet(&some_user)`. It exposes available actions through IntelliSense, avoids additional imports, and provides a direct API at the point of use. These may seem like minor details, but good development experience depends heavily on them. Data that can express its own operations reduces friction and improves clarity.
As a junior developer this was my conception of OOP: orientation around objects, literally. Structured control flow using the objects available in a given scope.
About a year ago I was responsible for a small TypeScript backend for a CRM system. All in all, it was about 300k lines of code, with four other developers working on it. In an effort to avoid the usual pitfalls of OOP, I architected the system around JavaScript modules. Business logic was grouped by file paths; for example:
```
lib/
  math/
    decimal/
    geo/
      point.ts
      radian.ts
  date/
    timezone.ts
    epoch-date.ts
```
`lib/math/geo/radian.ts` might contain:
```typescript
type Radians = Brand<'Radians', number>;

function to_degrees(r: Radians): Degrees { /* ... */ }
function normalize(r: Radians): Radians { /* ... */ }
```
This style was simple and intuitive, and it had advantages. But the real-world ergonomics were awkward. Using a function required imports such as `import { normalize } from '@lib/math'`. New developers struggled to discover what behaviors were available for `Radians`, which was just a branded number. IntelliSense provided no guidance, so they often combed through library files or wrote duplicate implementations. They were constrained by the file-system topology I had chosen. It didn’t scale.
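The `Brand` helper used above is not a TypeScript built-in; a common sketch of such a branded type (the helper and constructor names here are my own) looks like:

```typescript
// A branded ("nominal") type: a number the compiler treats as distinct
// from ordinary numbers, even though it erases to a plain number at runtime.
type Brand<Name extends string, T> = T & { readonly __brand: Name };

type Radians = Brand<'Radians', number>;
type Degrees = Brand<'Degrees', number>;

// Constructor functions are the sanctioned way to produce branded values.
function asRadians(n: number): Radians {
  return n as Radians;
}

function to_degrees(r: Radians): Degrees {
  return ((r * 180) / Math.PI) as Degrees;
}
```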
Experiences like this highlight a simple truth often forgotten in critiques of OOP: objects are convenient. Many modern languages still lack a true alternative. A class-based value object would not have solved every issue, but it would have addressed the core problem of discoverability and usability.
In practice, it is not especially useful to think of objects only as state machines. Most objects exist to manage other objects. Their “behavior and data” often reduce to delegation and dependency management, which in turn rely on further delegation. The result is layered encapsulation that defines cohesive boundaries.
Because dependencies accumulate, composition becomes a central concern. Developers must avoid cycles, manage the growth of the dependency graph, and consider both ontology—how objects fit into a conceptual framework—and logistics—how they are composed in the correct order. In many systems, this produces thousand-line factory functions whose sole task is assembling dependencies into the correct shape.
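A compressed caricature of such a factory, using hypothetical names, shows the shape of the problem:

```typescript
// Stand-in classes; each layer exists only to wrap the layer below it.
class Pool {}
class DbAdapter { constructor(readonly pool: Pool) {} }
class PostRepository { constructor(readonly db: DbAdapter) {} }
class UserPostsService { constructor(readonly posts: PostRepository) {} }
class UserService { constructor(readonly userPosts: UserPostsService) {} }

// The factory's sole task is assembling dependencies in topological order.
// In real systems, this function can run to hundreds or thousands of lines.
function buildUserService(): UserService {
  const pool = new Pool();
  const db = new DbAdapter(pool);
  const posts = new PostRepository(db);
  const userPosts = new UserPostsService(posts);
  return new UserService(userPosts);
}
```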
Although often unwieldy, this approach to dependency management is workable, and arguably inevitable. In my experience, and that of others, there are limits to how much complexity any architecture can remove, and object graphs are at least a serviceable way to contain it.
Handled through root aggregates or inversion of control containers, composition can also be powerful. Static behaviors are minimized, and most functionality lives inside a composable object hierarchy. When executed carefully, this produces a system in which everything is treated as an interface—composed, modular, and reusable. It’s very important to note that limiting behavior exposure exclusively through the lifecycles of objects is itself an explicit goal of OOP when practiced in a deliberate way.
Polymorphism is often presented as a defining feature of OOP, but I would argue its significance is overstated. The most effective object-oriented systems I have worked with minimize method overriding, avoid inheritance hierarchies, and reduce reliance on runtime polymorphism. Instead, they rely on composition with either concrete dependencies or, if absolutely needed, well-defined interfaces. Interfaces in TypeScript, protocols in Swift, and traits in Rust or PHP all provide this capability without the drawbacks of inheritance. Inheritance, when used correctly, resembles Rust’s default trait implementations.
Regardless, the risk of overusing runtime polymorphism can be seen with the following pattern:
```typescript
class Player {
  constructor(
    public weapon: Weapon,
    public armor: Armor,
  ) {}
}

const longsword = new Longsword(); // Longsword implements Weapon
const armor = new LeatherArmor();  // LeatherArmor implements Armor
new Player(longsword, armor);
```
In designs like this, every possible variation tends to become an interface. The result is often a dependency graph filled with polymorphic abstractions, where compile-time choices are deferred into runtime concerns. As Casey Muratori points out, this tradeoff rarely pays off in practice.
Consider the example above: Longsword might not need to be a class at all. It could simply be a data structure containing properties such as hitbox, weight, and attack profile. Composition at the level of data and behavior is usually sufficient, and avoids subclass proliferation. This is the principle behind prefer composition over inheritance: capture the invariants of a concept directly, rather than scattering them across a tree of subclasses.
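A sketch of that data-first alternative (the field names are assumed for illustration):

```typescript
// Weapons as plain data: the invariants of "longsword" live in one record,
// not in a Longsword subclass.
interface WeaponStats {
  name: string;
  weight: number;     // in kilograms
  baseDamage: number;
  reach: number;      // hitbox length, in meters
}

const longsword: WeaponStats = { name: "Longsword", weight: 1.5, baseDamage: 12, reach: 1.1 };
const dagger: WeaponStats = { name: "Dagger", weight: 0.4, baseDamage: 4, reach: 0.3 };

// Behavior is a free function over the data; no Weapon interface or
// class hierarchy is needed to add a new weapon.
function attackDamage(weapon: WeaponStats, strength: number): number {
  return weapon.baseDamage + strength / weapon.weight;
}
```

Adding a new weapon is a new record, not a new class, and the damage formula stays in exactly one place.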
Overuse of abstract factories and indirection also imposes real costs. When every behavior is hidden behind an interface, the code becomes difficult to follow, and debugging turns into an exercise in tracing type erasure. There are situations that justify heavy runtime polymorphism, but they are exceptional. In most systems, the benefit does not outweigh the complexity.
But perhaps the most immediately visible advantage of OOP lies in testing. Because classes encapsulate both behavior and dependencies, they provide natural seams for substitution. Dependencies can be mocked or stubbed, and each class exposes a clear contract that can be tested in isolation. This reduces the surface area of tests and makes them easier to reason about.
In practice, this scales well in large teams. A service class that encapsulates its dependencies defines a stable API to test against. Team members do not need to know the full dependency graph; they only need to exercise the contract of the class they are working with. This aligns the boundaries of the codebase with the boundaries of the tests.
The testability of OOP is not only about isolation but also about organization. Tests can mirror the object graph. A root aggregate corresponds to integration tests, service classes to component tests, and leaf classes to unit tests. This symmetry makes test suites easier to navigate and maintain, especially in enterprise-scale systems.
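A sketch of this seam (the class, interface, and stub here are hypothetical):

```typescript
// The service's only dependency is an interface, which is the test seam.
interface PostStore {
  postsFor(userId: string): string[];
}

class UserPostsService {
  constructor(private readonly store: PostStore) {}

  latestPost(userId: string): string | undefined {
    const posts = this.store.postsFor(userId);
    return posts[posts.length - 1];
  }
}

// In a test, the real repository is replaced by an in-memory stub;
// the class's contract is exercised without a database.
const stub: PostStore = { postsFor: () => ["first post", "second post"] };
const service = new UserPostsService(stub);
```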
There are drawbacks, of course. Excessive reliance on polymorphism or deep hierarchies can make mocking brittle, and dependency injection frameworks can produce fragile tests if overused. But the core advantage remains: OOP enforces encapsulation that naturally encourages testable units of code. In contrast, code organized primarily around free functions or global modules often requires more scaffolding to isolate dependencies or simulate state.
This convenience should not be underestimated. Whatever one thinks of OOP as an architectural paradigm, it continues to offer one of the most pragmatic approaches to building systems that are testable, understandable, and maintainable under real-world conditions.
A quick aside on dependencies
When we talk about dependencies - such as in a backend system - it’s worth distinguishing between what I refer to as “absolute” and “arbitrary” ones. Absolute dependencies are the external resources a program cannot function without: a database connection pool, a TCP socket pool, an environment configuration, an external API client. If a route handler for `GET /posts` retrieves a user’s blog posts, its true dependency is the database connection that makes the query possible.
By contrast, many of the “dependencies” in OOP codebases are artifacts of design decisions rather than fundamental requirements. A `UserService` might depend on a `UserPostsService`, which depends on a `PostRepository`, which depends on a `DbAdapter`, which depends on a connection pool. Each layer is treated as a dependency because we chose to abstract the domain model in that particular way. These are arbitrary dependencies.
It isn’t a purely academic distinction. Absolute dependencies reflect the external contracts of the system, the points where our code interacts with the world. Arbitrary dependencies, on the other hand, are imposed by our architectural style. We invent them for the sake of modularity, layering, or testability. When carried too far, this proliferation of arbitrary dependencies produces heavy object graphs and deep hierarchies whose complexity has little to do with the problem domain.
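The distinction can be made concrete. In this sketch (identifiers hypothetical), the handler names its one absolute dependency directly instead of a chain of arbitrary ones:

```typescript
// The absolute dependency: something that can execute a query.
interface Db {
  query(sql: string, params: unknown[]): Promise<unknown[]>;
}

// GET /posts written against the absolute dependency itself -
// no UserService -> PostRepository -> DbAdapter chain in between.
async function getPosts(db: Db, userId: string): Promise<unknown[]> {
  return db.query("SELECT * FROM posts WHERE user_id = $1", [userId]);
}
```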
What I view as the primary flaw of OOP is the way it couples behavior to data. Behaviors and data have fundamentally different lifetimes. Functions are abstractly static: parameterized and reusable across contexts. Data is ephemeral: bound to a request, a database entity, or a user session. Binding them together inside a class shifts the semantics of behavior. A pure function, or even a trivial getter, becomes a runtime concern that requires instantiating an object.
This produces a mismatch. Imagine needing to new-up a `UserService` with a database connection just to call `formatUserChatMessage`, a method that never touches the database. The method’s meaning is obscured by the lifecycle of the object it lives in. What should be a “free” behavior becomes entangled in a dependency graph. Of course we could create a new class called `UserFormatter`, move `formatUserChatMessage` into it, and inject it into `UserService`. But this type of refactor is itself the problem. The very nature of OOP creates this class of problem (no pun intended) that function composition doesn’t have.
Joe Armstrong famously described this as “wanting a banana but getting the gorilla and the jungle.” The banana is the method `formatUserChatMessage`. The gorilla is the class you must instantiate to access it. The jungle is the hierarchy of services, DAOs, caches, and connections you must satisfy before the object is valid. None of this reflects the semantic scope of the behavior itself.
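Written as a free function, the banana needs neither gorilla nor jungle (a sketch; the signature is assumed, since the original method is hypothetical):

```typescript
interface User {
  displayName: string;
}

// The behavior's true inputs are just data. No service instantiation,
// no database connection, no object graph to satisfy first.
function formatUserChatMessage(user: User, message: string): string {
  return `${user.displayName}: ${message}`;
}
```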
The result is architecture that confuses dependencies of necessity with dependencies of convenience. Absolute dependencies—like a database connection—make sense as runtime concerns. Arbitrary dependencies, invented for the sake of layering, introduce artificial lifecycles. If two behaviors from separate domains need to interact, the problem becomes not only what behavior to call, but how to navigate the object graph that someone else designed. Some architects defend this as encapsulation. In practice, it is fragility disguised as order.
This fragility is cumulative. Teams inherit not only the object graph but the assumptions encoded in it. The development experience is effectively pegged to the weakest abstraction, which may be as innocuous as a now-defunct assumption about the business logic. If business requirements fundamentally change, velocity collapses. If the graph is poorly managed, velocity collapses. This is the definition of brittle code. By contrast, in a module-oriented design, behaviors exist independently of any pre-composed ontology. Changing requirements mean recomposing functions, not rearchitecting the lifecycles of existing behaviors.
Defenders might respond that trivial behaviors need not be tied to lifecycles—that static methods, for example, offer an escape hatch. But if the escape hatch is the better fit, what was gained by binding behavior to objects in the first place? If lifecycles truly are an advantage, then singletons and static utilities are a betrayal of OOP’s core claim. If lifecycles are not an advantage, then OOP is adding cost without benefit.
Coupling state and behavior does not just complicate semantics in the present; it also amplifies side effects as code evolves. When behavior is defined inside a class, it inherits the entire scope of that class. Changes to one method can unintentionally interact with state managed elsewhere, even if the method itself appears isolated.
Polymorphism compounds this problem. A method overridden in a subclass may depend on state differently than in its parent. Adding or modifying a field in the superclass risks breaking behavior across the entire hierarchy, often in subtle ways. These are not side effects at the level of external dependencies but side effects introduced by the architecture itself.
SOLID attempts to guard against this. The Open/Closed Principle (OCP) insists that classes should be open for extension but closed for modification, while the Liskov Substitution Principle (LSP) says that subclasses should be safely swappable for their parents. In practice, these rules forbid exactly the kinds of ripple effects described above. But they leave a deeper question: what happens when behavior genuinely changes? When a new requirement cannot be expressed by extension alone, the only option is to modify shared code—and the same entanglement that OCP and LSP warn against ensures that those changes spread side effects throughout the system.
In long-lived codebases, this is not an edge case but the common case. New features alter invariants. Business rules evolve. What looks like a localized change often becomes a global one because the architecture distributes responsibility across lifecycles. By contrast, functions operating over explicit data structures expose their scope directly. A function that accepts a record and returns a new record makes its side effects explicit. When behavior is bound tightly to class hierarchies, the scope is implicit, and surprises multiply as the system grows.
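That last contrast can be sketched directly (the record shape is assumed):

```typescript
interface Account {
  readonly id: string;
  readonly balance: number;
}

// Explicit scope: everything the function can read or affect appears in
// its signature. It returns a new record rather than mutating hidden state.
function deposit(account: Account, amount: number): Account {
  return { ...account, balance: account.balance + amount };
}
```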
Software architecture is not fixed. It evolves with languages, tools, and the problems they are applied to. As with OOP, the best development philosophies of today will look dated in a couple of decades. There is no universal truth here; “truth” is always scoped to the problem domain, and even the most fundamental problem domains change. All we can do is guide that movement toward a more durable direction.
“…and with strange aeons even death may die.” - H.P. Lovecraft
One lesson from writing this is that developer experience is inseparable from architecture. IDEs, editors, AI, and type systems do as much to shape how we structure code as abstract principles do. The popularity of TypeScript owes as much to VSCode and its seamless IntelliSense as to the language itself.
A possible future of DX might be a “best of both worlds” approach. We could write standalone module functions - like the `radians.ts` example - while editors surface them with the same discoverability as object methods. In other words, `direction.to_degrees()` and `to_degrees(direction)` would be semantically identical, but the former would provide the developer ergonomics we expect from OOP. The data exposes its own API, without binding that API to a lifecycle.
The same principle could extend to absolute dependencies. Imagine an environment where real dependencies—database clients, caches, message queues—are first-class citizens in the editor. Instead of chasing through factories or DI containers, IntelliSense could surface the available operations directly. This would invert the usual burden: instead of developers conforming to an arbitrary object graph, the tooling would expose the available capabilities, but in a modular and composable way.
The general trajectory is clear. Better tooling, lighter abstractions, and a focus on absolute rather than arbitrary dependencies could reduce the brittleness that OOP often creates while preserving the ergonomic strengths that made it popular in the first place.
If we think of our domain—the things we organize code around—as our actual dependencies, we gain a clearer organizing principle. Put simply: if every behavior of a class relies on the same external resources, the banana–gorilla–jungle problem disappears. And this should not be surprising: it is exactly what we mean when we use the word dependency.
In other words, the more dependency overlap class behaviors have with one another, the better the class is. It might even be useful to think of classes primarily as groupings of behaviors by dependency usage.
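Under that rule, a class earns its keep when every behavior leans on the same absolute dependency (a sketch with hypothetical names):

```typescript
interface Cache {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
}

// High dependency overlap: every method uses the same resource, so
// binding these behaviors to one lifecycle is justified.
class SessionStore {
  constructor(private readonly cache: Cache) {}

  save(sessionId: string, userId: string): void {
    this.cache.set(`session:${sessionId}`, userId);
  }

  lookup(sessionId: string): string | undefined {
    return this.cache.get(`session:${sessionId}`);
  }
}
```

There is no method here that could be called without the cache, so the banana never gets separated from its jungle.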
But it should be said that creating wrappers over real dependencies is both natural and useful. It is reasonable to abstract a database connection pool behind a repository or a client library; handing raw connections to a junior developer writing `GET /users/{id}` would not be responsible. But those abstractions should remain close to the resource itself. Without the underlying dependency - database, message queue, network socket - there is no backend at all.
The problem arises when abstraction drifts upward into layers that exist primarily to manage other abstractions. The proliferation of “managers,” “services,” “clients,” and “controllers” often creates interfaces that manage little beyond the delegation of calls. By contrast, abstractions tied to concrete state are usually justifiable. A `GraphicsController` in an HTML5 renderer is meaningful because it directly manages the lifecycle of renderable objects. A `UserService` is less convincing; in many cases it is a wrapper around `createUser` and similar methods, functioning mainly as a delegator rather than an essential contract.
The guiding principle is simple: abstractions are strongest when they organize around absolute dependencies or concrete state. The further they drift into managing other abstractions, the weaker and more arbitrary they become.
The worst abstractions are the ones that extract cost without providing value. In backend systems, this often takes the form of controllers or service layers that exist primarily to enforce organizational rules.
For example, a route handler does not need to be a controller method; it can just as easily be a composed function whose dependencies are explicit. Baking it into a controller method creates implicit assumptions about middleware, or more abstractly, behavior context. Central enforcement of behavior is rarely worth the indirection it creates. Instead of hiding route handlers inside the jungle of classes and aggregates, let them declare the behaviors and dependencies they actually need. A small DI interface is enough:
```typescript
// Things "applied" to the route, such as middleware, are simply composed
// from higher-order functions.
const getUserRoute = withValidation(schema, withTokenAuth(getUser));

// Absolute dependencies are injected by a route function and resolved at runtime.
export default route('GET', 'users/{id}', getUserRoute)
  .with('user-dao')
  .with('cache');
```
There is still a dependency resolver here, but it’s composed rather than layered, and it does not leak context. For all intents and purposes you could treat our route as a pure function. The important point is that dependencies are explicit, easy to trace, and behaviors are not coupled through unnecessary abstractions.
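For concreteness, a minimal sketch of one of the higher-order pieces used above (the `Handler` shape and `withTokenAuth` are illustrative, not a real library API):

```typescript
type Handler = (req: { token?: string }) => string;

// Middleware as a higher-order function: it wraps a handler and adds
// behavior without a controller class or implicit shared context.
function withTokenAuth(next: Handler): Handler {
  return (req) => (req.token ? next(req) : "401 Unauthorized");
}

const getUser: Handler = () => "200 OK";
const getUserRoute = withTokenAuth(getUser);
```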
The contrast becomes clear in practice. While recently working in an OOP codebase, I needed to implement an “optional auth” strategy: if a token was present, authenticate; if not, continue. But because authentication middleware was buried deep in the object graph, it could not be changed in isolation. Supporting the new behavior required edits across the entire authentication abstraction. A local variant became a system-wide rewrite.
This is what rent-seeking abstractions do: they insert themselves in the name of DRY, but couple themselves to the entire system.
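In a composed design, the optional-auth variant is a new combinator sitting beside the old one rather than an edit to a shared abstraction (a sketch; `Handler` and all names are hypothetical):

```typescript
type Handler = (req: { user?: string; token?: string }) => string;

// If a token is present, resolve a user; otherwise continue anonymously.
// The variant is local: nothing in the existing auth path changes.
function withOptionalAuth(next: Handler): Handler {
  return (req) => {
    const user = req.token ? `user:${req.token}` : "anonymous";
    return next({ ...req, user });
  };
}

const whoAmI: Handler = (req) => req.user ?? "nobody";
const optionalAuthRoute = withOptionalAuth(whoAmI);
```

Only the routes that opt in compose with the new combinator; the rest of the system never learns it exists.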
Architects often decry “god objects” while at the same time celebrating root aggregates or inversion-of-control containers. The difference is usually framed as one of responsibility: a god object performs too much itself, while a root aggregate delegates to other classes. But in practice the principle is the same. Whether an object executes behavior directly or coordinates through delegation, the result is a single interface that embodies the entire application’s API. That is a god object in everything but name.
Why do we tolerate root aggregates but not god objects? The answer is convenience. A single entry point makes a system easy to navigate and gives the impression of cohesion. But this convenience comes at a cost: all behavior is now mediated through one central abstraction. Dependencies become hidden behind layers of delegation, and every change to the aggregate risks subtle ripple effects.
The alternative is once again composition. Rather than enforcing a single interface, let behaviors compose directly with the dependencies they need. With our route handler example above, you can declare dependencies locally rather than giving the route a global service root. This keeps APIs small and bounded, and it ensures that no single object grows into the bottleneck of the system.
Root aggregates may appear safer than god objects, but both reflect the same architectural instinct: to centralize authority in one place. The more we can resist that instinct, the more resilient our systems become.
Architecture only makes sense in the context of project scope. A game engine should not be designed like a small business ecommerce site, which in turn should not be designed like an enterprise mobile app or a CLI tool. This is obvious, but it is rarely acted on.
The useful lens here is to think in magnitudes of complexity. At low complexity, the priority is clarity: declarative code that is easy to debug, test, and refactor. Most “enterprise CRUD” applications fall here. They do not need deep hierarchies or elaborate polymorphism; they need clear control flow.
At medium complexity, modularity begins to matter. Concurrency, scaling, and separation of concerns justify abstraction, but only when it directly addresses the system’s absolute dependencies. The architecture should grow in proportion to the complexity, not in anticipation of it. If behavior utilizes composition well, growing from low to medium complexity is a natural process.
At high complexity, such as with game engines, distributed systems, and compilers, different paradigms emerge. These systems deal with concurrency, memory, and state lifecycles at a scale that makes lightweight abstractions insufficient. But this is the exception, not the rule.
The problem is that many codebases adopt “high-magnitude” architectural patterns at “low-magnitude” scales. Layers of services, controllers, and factories proliferate in applications that would be better served by straightforward declarative composition. The result is what most developers recognize as enterprise crudslop - over-engineered and under-productive. Is there any developer who has worked on enterprise systems who hasn’t wasted hundreds of man-hours on over-abstracted systems?
Languages like Rust are not immune to the behavior-lifecycle coupling challenges I described earlier. Function implementations on a struct can easily face the same issue. But Rust at least exposes the coupling more explicitly: attaching a function requires annotating ownership and mutability, and a function signature that consumes a mutable borrow makes the behavior’s scope visible. In OOP, the lifecycle is implicit, and developers discover the jungle only after they’ve tried to pick the banana.
I think there’s a lot of room in programming languages to provide annotations and metadata for methods, similar to what Rust does with receiver types. Ad-hoc traits which partially apply dependencies could allow for better separation of concerns. For example, something like:
```rust
struct Player {
    id: String,
    first_name: String,
    last_name: String,
    health: u64,
    damage: u64,
}

// Informal named trait which constrains Player to only the data it uses
impl "DisplayUI" Player<first_name, last_name> {
    fn display_name(&self) -> String {
        // Can only access first_name and last_name
        format!("{} {}", self.first_name, self.last_name)
    }
}
```
This would have its own sort of complexity, but I think it could bridge the gap between formal trait interfaces - which require more abstraction and indirection - and some of the problems I outlined above.
Conceivably you could do something like:

```rust
let lobby_player = Player::<"DisplayUI"> {
    first_name: "John",
    last_name: "Smith",
};

println!("{}", lobby_player.display_name());
```
Perhaps semantics like this could help us naturally determine interface boundaries as well.
(Of course this is just a random example which probably has a lot of downsides, but hopefully the path is clear.)
Most of this is an exercise in “what could be” rather than a direct criticism of The Current Thing, which today still happens to be OOP. Even with today’s powerful and ergonomic languages - and AI aside - I think there’s still a lot of room for improvement in DX, but it involves challenging our assumptions about software architecture rather than incremental feature adoption.