"Keep it simple" has always sounded like good advice and felt like a platitude. Easy to say in a conference talk. Hard to defend when the CTO at the last company used Kafka, when the job postings all list Kafka, Kubernetes, Terraform, three obscure databases, and Apache Spark as a minimum, and when the architecture diagrams in blog posts look like circuit boards.

But something has changed. The arguments for simpler technology are converging from directions that have nothing to do with each other. Portability is one. Hiring markets are another. AI tooling is a third. Scaling economics is a fourth. Four unrelated forces, each independently arriving at the same conclusion.

When that happens, it stops being a preference and starts being engineering sense.

The architectural practices that make a system simple also make it portable, hireable, AI-legible, and scalable. Not by accident: good abstractions are inherently simple, and simplicity compounds.


You Can Deploy It More Places

In March 2026, Iranian drone strikes hit datacentres in the UAE. Not a hypothetical scenario from a risk assessment. Real infrastructure, real outages, real data at risk. If your application was locked to a single cloud provider, or a single region, you had a problem that no amount of auto-scaling could solve.

Geopolitics is the dramatic version of a common risk. The quieter versions are just as real:

  • Vendor pricing shifts. AWS reprices continuously. What was cost-effective last year may not be next year, and by the time you notice, you are already locked in.
  • Service deprecations. GCP has a well-documented habit of killing products. If your architecture depends on a managed service that gets sunset, your migration timeline is not yours to choose.
  • Data sovereignty regulations. GDPR was the start, not the end. Jurisdictions are increasingly specific about where data can live and who can access it, and even the nationality of the company that hosts it. A single-region or single-cloud deployment may become a compliance liability overnight.
  • Vendor lock-in. Cloud vendors are strongly incentivised to keep you in their orbit, even when it does not benefit you. Proprietary APIs, managed services with no open-source equivalent, pricing models that penalise egress: these are not accidents. They are business strategy.

Each of these is a reason your application might need to move. The question is whether it can.

Here is a simple test: if the core of your application cannot run on your laptop, something is wrong with your abstractions. Managed services trade portability for a type of convenience that may be illusory. Most teams never consciously make that trade. They discover it when the bill arrives or the vendor changes the terms.

A clean data layer means storage is swappable. The provider pattern means every external dependency is behind an interface. A strategic monolith with standard databases runs on any cloud, any VM, any laptop. Portability is not a feature you build. It is a side effect of not coupling yourself to a vendor's proprietary abstractions.


You Can Hire For It

Every technology you add to your required stack shrinks the candidate pool geometrically.

"Python and Postgres" reaches thousands of engineers. Add Kafka: you have cut the pool. Add Kubernetes expertise: cut again. Add Terraform, a specific cloud provider, a service mesh, a particular streaming framework: each requirement halves (or worse) what remains. By the time your job posting reads like a product catalogue, you are fishing in a pond that barely exists.
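The arithmetic is worth making explicit. With illustrative numbers, assuming each extra requirement retains roughly half of the remaining candidates:

```python
pool = 10_000  # engineers comfortable with Python and Postgres (illustrative)
requirements = ["Kafka", "Kubernetes", "Terraform", "service mesh"]

for tech in requirements:
    pool //= 2  # assume each requirement halves the remaining pool
    print(f"+ {tech}: {pool} candidates left")
```

Four requirements later, ten thousand candidates have become 625, and that is before you add the domain expertise the role actually needs.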

Hiring is already the hardest constraint most startups face. Your architecture should not make it harder.

The engineer who joins and is productive in a week is worth more than the one who needs three months to understand the tooling before they can touch the domain. This is not a statement about talent. It is a statement about leverage. The faster someone can contribute to the thing that matters (the product), the more value they create. Every hour spent learning broker configuration or cloud-specific deployment tooling is an hour not spent on the business problem.

The compound cost goes beyond the hiring funnel. It is onboarding time. It is the blast radius of mistakes in unfamiliar tooling. It is the bus factor when only two people understand the message broker configuration. It is the salary premium for specialists in technologies you may not have needed in the first place.

The Spark That Wasn't Needed

At a previous fintech, we had a record linking problem: matching and deduplicating entities across large datasets. The data science team had built the initial solution as scripts running against static dataset exports, using pandas. It worked, but everyone was worried about scale.

The decision was made to migrate to Apache Spark. The reasoning felt sound: the datasets were large, Spark was built for large datasets, and the team was already familiar with the Python data ecosystem. The migration took months.

Then a requirement arrived that, in retrospect, should have been obvious from the start: the system needed to link records on the fly, as a streaming process, not as periodic batch runs against a static export. Spark can do streaming, but the record linking logic had been built around bounded datasets. The compromise was to run the system "in batches" against bounded subsets of the live database, an awkward hybrid that was neither truly streaming nor cleanly batch.

The result was a system that was genuinely hard to reason about. Spark's execution model, its cluster management, its failure modes (especially the Python to JVM bridge): these are non-trivial. New engineers took months to become productive. Hiring was heavily constrained: a narrow pool that got much narrower once you added the domain expertise the role also required. Debugging meant reasoning about distributed execution plans rather than application logic.

The engineering cost, conservatively, ran to low millions in wasted effort over the life of the system.

The lesson was not that Spark is a bad technology. It is excellent at what it was designed for. The lesson was that when the requirement changed from batch to streaming, the team should have stepped back and reconsidered the architecture rather than forcing the new requirement into the existing stack. A simpler approach, built around the actual access pattern, would have been faster to build, easier to hire for, and cheaper to maintain. The organisational retrospective that might have caught this never happened.

The Series Answer

The practices in this series deliberately minimise the technology surface. A relational database. A well-structured monolith. A clean data layer. Standard language features for signals and state management. The stack a new hire already knows.


AI Can Work With It

This is the newest axis, and it points the same way as the others.

Context is finite, for humans and for AI agents. A simple codebase with clear abstractions, explicit state, and a single place to look for orchestration is one an AI agent can navigate, modify, and test. A sprawl of infrastructure-as-code, broker configurations, and cross-service coordination is one it will hallucinate about.

OpenAI's engineering team recently (at the time of writing) published their experience building a product with AI agents writing all the code. In Harness Engineering, they report that "technologies often described as 'boring' tend to be easier for agents to model due to composability, API stability, and representation in the training set". They found that strict architectural boundaries with clear layering were not a nice-to-have but "an early prerequisite" for agent-driven development. Rigid structure enables speed. Ambiguity kills it.

This maps directly onto the practices in this series. Clean data layers, explicit state machines, injected dependencies, enforced module boundaries: these are not just good engineering for humans. They are what makes a codebase legible to an AI agent. Complexity wastes context. Every layer of indirection, every infrastructure concern baked into application code, every implicit dependency is a token spent explaining the tooling rather than the domain.

This axis did not exist two years ago. It will matter more every year.


Scalability Is in Your Data Layer

Teams adopt complex distributed architectures because they are afraid of not scaling. The fear is generally premature, and the risk can be mitigated with simple, forward-looking design.

Scalability is concurrency, and a monolith can be highly concurrent. Worker pools, async frameworks, read replicas, connection pooling: none of this requires spreading your state across services. A single well-structured application with a clean data layer can serve far more traffic than most products will ever see.
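A sketch of what that concurrency looks like in practice: a plain thread pool inside a single process (the handler function is a placeholder for real per-request work).

```python
from concurrent.futures import ThreadPoolExecutor


def handle_request(request_id: int) -> str:
    """Placeholder for real per-request work: DB query, render, respond."""
    return f"handled {request_id}"


# One process, many concurrent requests: no brokers, no services, no mesh.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(handle_request, range(100)))
```

In a real application the pool lives inside your web server or task runner, and the ceiling is usually the database, not the application process, which is exactly why the data layer is where scaling work belongs.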

When a specific data touchpoint outgrows its current backend, the data layer absorbs the change. The architecture does not need to. The application code does not change. A new provider implementation, a configuration change at startup, and the system is running on a different backend. This is what scalability actually looks like for the vast majority of real-world products: not a rewrite, but a swap behind an interface.
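In code, "a configuration change at startup" can be as small as this. A hedged sketch: the cache providers, config keys, and the Redis stub are illustrative, not a real client.

```python
class Cache:
    """The interface application code depends on."""

    def get(self, key: str): ...
    def set(self, key: str, value) -> None: ...


class InProcessCache(Cache):
    """Fine until a single process stops being enough."""

    def __init__(self) -> None:
        self._data: dict = {}

    def get(self, key: str):
        return self._data.get(key)

    def set(self, key: str, value) -> None:
        self._data[key] = value


class RedisCache(Cache):
    """The swap-in when this one touchpoint outgrows in-process storage."""

    def __init__(self, url: str) -> None:
        self.url = url  # a real implementation would connect here


def make_cache(config: dict) -> Cache:
    """Startup wiring: the only place that knows which backend is in use."""
    if config.get("cache_backend") == "redis":
        return RedisCache(config["redis_url"])
    return InProcessCache()
```

Everything downstream of `make_cache` takes a `Cache` and never learns which one it got. That is the whole trick: the scaling decision is confined to one factory function and one config key.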

The real risk is not under-scaling. It is over-engineering so aggressively that you never ship, or ship so slowly that scale becomes irrelevant because nobody uses the product. The startup that ships fast on boring technology will reach scale problems before the startup that spent its first year building infrastructure for traffic it does not yet have. And when those problems arrive, a clean architecture gives you the tools to address them precisely, one touchpoint at a time.


Summary

Four independent axes. One answer.

Simpler technology deploys to more places. It opens the hiring pool instead of narrowing it. It gives AI agents something they can actually work with. And it scales through the data layer, not through architectural complexity.

Not because simplicity is an ideology. Because good abstractions happen to be simple, portable, hireable, AI-legible, and scalable enough.

That is what the software chef knife techniques enable.