In my previous post, I wrote about building self-aware teams — teams that have context, vision, and the ability to see the bigger picture. They understand the trade-offs when taking shortcuts and make those calls deliberately, not by accident.
“Thinking in systems” is the same principle, applied to the systems we build.
What Systems Thinking Means to Me
For me, systems thinking means building with the whole picture in mind, not just the task in front of me.
I don’t just aim to “complete” something; I aim to create a system — one with a solid, reliable core that doesn’t break under pressure. Around that core, it should be extensible so it can grow, configurable so it can adapt, and resilient enough to handle edge cases without special hacks.
It’s about designing in a way where today’s solution doesn’t limit tomorrow’s possibilities. A well-thought-out system makes scaling, maintenance, and evolution natural — not painful.
An Example from My Work
In my early years at Probo, I was tasked with catching unfair usage patterns — for example, users running multiple cloned apps to place trades. Some of these behaviours qualified as fraud, and my job was to prevent them and block those users.
On paper, it was a simple task: detect the pattern → block the user. But Probo is building something unique, and the fraud patterns were still evolving. If I had only solved for the few cases we knew about, the system would have failed the moment new patterns appeared.
So my approach was to build a system — one that was extensible, configurable, and observable, without directly depending on me for every new case. I implemented a rule engine combined with our in-house tagging system.
The workflow was simple:
A user journey event — like signup or login — was sent to the fraud rule engine.
The user, along with their metadata, was evaluated against a set of rules.
If a violation was found, the user was tagged under a specific fraud category.
Each fraud tag carried its own set of restrictions, which could be applied instantly.
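The workflow above can be sketched roughly like this. Everything here is illustrative — the rule format, the tag names, and the restrictions are made up for the example, not Probo's actual implementation:

```python
# Minimal sketch of a fraud rule engine combined with a tagging system.
# Rules, tags, and restrictions below are invented for illustration.

RULES = [
    # Each rule pairs a fraud tag with a predicate over the user's metadata.
    {"tag": "multi_device", "check": lambda meta: meta.get("device_count", 0) > 3},
    {"tag": "cloned_app",   "check": lambda meta: meta.get("app_signature") == "tampered"},
]

# Each fraud tag carries its own set of restrictions, applied instantly.
TAG_RESTRICTIONS = {
    "multi_device": ["block_trading"],
    "cloned_app":   ["block_trading", "freeze_withdrawals"],
}

def handle_event(event):
    """Evaluate a user-journey event (signup, login, ...) against all rules."""
    meta = event["user_metadata"]
    restrictions = []
    for rule in RULES:
        if rule["check"](meta):
            restrictions.extend(TAG_RESTRICTIONS[rule["tag"]])
    return restrictions

restrictions = handle_event({
    "type": "login",
    "user_metadata": {"device_count": 5, "app_signature": "ok"},
})
print(restrictions)  # → ['block_trading']
```

The point of this shape is that responding to a new fraud pattern means appending a rule and a tag, not rewriting the detection pipeline.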
This meant that as fraud behaviours changed — and they did — we could respond quickly without rewriting the entire detection logic. The system became a foundation we could keep improving, instead of a one-time patch.
The Cost of Task-First Thinking
I’ve seen this countless times — a feature gets shipped, works fine for a while, and then has to be completely rewritten when new use cases emerge.
The reason is usually the same: we focused on getting the task done, not on how it fits into the bigger system. The design was fine for today, but brittle for tomorrow.
This doesn’t just cause rewrites — it also hurts scalability. A solution that seems lightweight at low usage can turn into a bottleneck as traffic grows.
In fact, when we first built the fraud rule engine at Probo, daily active users were relatively low. At that time, it could have been tempting to run it as a synchronous system in the user flow. But had we done that, today’s scale would have meant massive slowdowns in normal operations. By designing it as an event-driven, decoupled system, we avoided a scalability wall that would have crippled us later.
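To make that decoupling concrete, here is a hedged sketch using an in-process queue as a stand-in for a real message broker. The function names and the shutdown sentinel are my own illustration, not the actual system:

```python
import queue
import threading

# Stand-in for a real message broker; the user flow only enqueues events.
events = queue.Queue()

def login(user_id):
    """The hot path: record the event and return immediately.
    Fraud evaluation happens out-of-band, so login latency is unaffected."""
    events.put({"type": "login", "user_id": user_id})
    return "ok"

def fraud_consumer(results):
    """Runs separately, draining events and evaluating rules at its own pace."""
    while True:
        event = events.get()
        if event is None:  # shutdown sentinel for this demo
            break
        results.append(f"evaluated {event['type']} for user {event['user_id']}")

results = []
worker = threading.Thread(target=fraud_consumer, args=(results,))
worker.start()
login(42)           # returns instantly; evaluation happens on the worker
events.put(None)    # signal the consumer to stop
worker.join()
print(results)  # → ['evaluated login for user 42']
```

The design choice is that the consumer can slow down, retry, or scale out independently without the user-facing flow ever noticing.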
The task-first approach delivers speed in the moment — but it often leaves behind hidden costs that surface only when scale and complexity catch up.
Systems Thinking ≠ Over-Engineering
Thinking in systems doesn’t mean building something that will last untouched for the next 10 years. In a fast-paced startup, velocity is critical. You can’t freeze progress in the name of perfect architecture.
What it does mean is judging wisely how much configuration and flexibility a solution really needs. Sometimes spending 2–3 extra days on a task today can save you weeks of painful rewrites a few months later.
That’s an underappreciated skill — knowing when to invest that extra effort, and when to deliberately choose the quick path with eyes wide open.
A Time I Over-Engineered
I learned this lesson the hard way.
Once, we were experimenting with ways to create delight for users when they won a trade. For example, if someone won more than a certain amount, we wanted to show a celebratory meme. The idea was simple: a small surprise that might encourage virality.
If I had built this in the most direct way, it could have been done in 2–3 days. Instead, I spent almost two weeks building a templatization engine that could handle multiple scenarios — based on winning amount, number of trades, leaderboard rank, and more.
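For contrast, the direct version could have been as small as this — the threshold and meme identifier are made up for illustration:

```python
# The "direct way": a hard-coded check, shippable in a day or two.
# Threshold and meme id are invented for this example.
CELEBRATION_THRESHOLD = 1000

def meme_for_win(amount_won):
    """Return a celebratory meme id for big wins, or None otherwise."""
    if amount_won >= CELEBRATION_THRESHOLD:
        return "big_win_meme"
    return None

print(meme_for_win(1500))  # → big_win_meme
print(meme_for_win(200))   # → None
```

A handful of lines like this would have been enough to validate the experiment before investing in a general templatization engine.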
The reality? The experiment didn’t pan out. We closed it without ever needing those additional templates. The two weeks of careful system-building had little practical value.
That experience reminded me: systems thinking is valuable, but over-engineering is still a risk. The real skill is not just in designing flexible systems, but in knowing when to keep it simple and wait for validation.
Shortcuts Aren’t Evil — Blind Shortcuts Are
Just like self-aware teams can take a shortcut consciously when the trade-off is worth it, systems thinking accepts that sometimes you must optimise for the immediate need.
The difference is:
Without systems thinking: “This works for now — ship it.”
With systems thinking: “This works for now, but here’s the cost, and here’s how we’ll address it later.”
That awareness is what keeps future you from cursing past you.
Why It Matters
Software issues often don’t come from a single bad decision. More commonly, they build up over time as a series of small, local choices that don’t work well together.
Thinking in systems helps reduce this risk. It encourages designing with future growth and change in mind, so today’s solution doesn’t become tomorrow’s bottleneck.
By approaching problems this way, you’re not just addressing the immediate need — you’re making it easier and more efficient to handle what comes next.