Before the mid-1970s, ward nurses in most hospitals mixed their own intravenous drugs. Each nurse was individually responsible for measuring, diluting, checking concentrations, and labelling — making independent judgments about a class of operations that is genuinely dangerous when done wrong. Overdoses, contamination, incorrect diluents. Some errors were fatal.
The change wasn't to train nurses better or write a protocol. It was to move the operation. Centralised pharmacy units took over IV drug preparation entirely. Specialists whose entire job was this one class of risky work now did it in purpose-built clean rooms with dedicated equipment and double-checking protocols. Ward nurses no longer prepared IV drugs. They administered them.
This is a different idea from standardisation. Standardisation says: everyone does the risky thing the same way. Centralisation says: one place does the risky thing, and everyone else delegates to that place.
The difference matters because of what it means for expertise. With standardisation, every nurse still needs to understand drug preparation well enough to follow the protocol and catch gaps. With centralisation, the ward nurse doesn't need that expertise at all — she needs to know how to order from pharmacy. Drug preparation becomes the full-time specialism of people who encounter every edge case. Everywhere else, the question changes from "did we do it right?" to "did we use pharmacy?"
Centralisation of risk exposure permits specialisation of risk management.
The same pattern in organisations and operations
Every bank has a centralised compliance team that reviews transactions for suspicious activity. Before this, branch managers made independent anti-money-laundering (AML) judgments — five hundred branches, five hundred invisible decision points. Each manager encountered the decision rarely, without the pattern recognition that comes from volume. The compliance team centralises that judgment: analysts see every suspicious transaction across the institution, and an auditor reviewing the bank's AML posture goes to one team, one process, one set of records.
Bomb disposal follows the same logic. Officers who find a suspicious package, or infantry who find an IED, establish a perimeter and call the specialists. They get basic awareness training — enough to recognise the situation and know what not to do — but handling ordnance is not their job. It's an event they may encounter once in a career. For the bomb squad, it's Tuesday.
The Toyota Andon cord centralises a different kind of risk: the risk of a defect shipping. Any worker who spots a problem pulls the cord; the line stops; a team leader arrives. Before the cord, halting production was an act of individual social courage — every worker decided privately whether their concern justified it. The cord moved that decision into one visible, auditable mechanism.
Amazon's single-threaded ownership principle does the same for accountability. Every product or service has exactly one owner. Accountability distributed across a committee is accountability nobody holds; spread thin enough, it disappears.
Four things that get cheaper
When a risky operation moves from scattered to centralised, four things change:
- Auditing — a reviewer assesses the risk class by looking in one place, not searching thirty sites for every instance.
- Testing — one canonical test suite applies to every caller. Proving that one place works proves it everywhere.
- Fixing — a bug fixed in the central implementation is fixed for every caller simultaneously. A bug in one of thirty scattered implementations still requires finding and patching the other twenty-nine.
- Visibility — the absence of the centralised call becomes the signal. When an organisation routes all risky operations through one mechanism, the anomaly becomes visible: why is this one instance not using it?
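The visibility point can be made mechanical. Here is a minimal sketch of a lint-style check that flags any file touching a risky API directly instead of going through the sanctioned wrapper. The module names (`subprocess` as the risky operation, `safe_exec.py` as the hypothetical central implementation) are illustrative assumptions, not taken from any real codebase:

```python
"""Sketch: make the absence of the centralised call the signal."""
import re
from pathlib import Path

RAW_CALL = re.compile(r"\bsubprocess\.")   # the risky operation, used raw
ALLOWED = {"safe_exec.py"}                 # the one sanctioned implementation

def find_rogue_callers(root: str) -> list[str]:
    """Return files that touch subprocess directly, bypassing the gate."""
    rogue = []
    for path in Path(root).rglob("*.py"):
        if path.name in ALLOWED:
            continue  # the central implementation is allowed to do the risky thing
        if RAW_CALL.search(path.read_text(encoding="utf-8")):
            rogue.append(str(path))
    return sorted(rogue)
```

Run in CI, a check like this turns "why is this one instance not using the mechanism?" from a question an auditor might think to ask into a failing build.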
Isolate the risk, contain the overhead.
Drug preparation error becomes "did it come from pharmacy?" Line defect becomes "did anyone pull the cord?" The question shifts from "did everyone get this right, individually, each time?" to "is the centralised mechanism in place and working?"
The failure mode worth taking seriously
Centralisation has one serious failure mode: a broken centralised gate that nobody notices. Distributed risk produces bugs that appear per-instance, slowly, visibly. A broken centralised implementation means every caller is broken simultaneously, silently — because the gate exists, nobody checks behind it.
I've seen an automated security scan that quietly checked zero files for an unknown period. It still ran, still reported success, no alerts fired. Having only one place to look doesn't help if your eyes are closed.
Centralisation doesn't demand more testing — but it makes testing pay off. The same effort that would have been spread thinly across thirty call sites can be focused on one implementation, where the ROI is dramatically higher. After centralisation, you test one thing well, and every caller benefits. Centralisation without test coverage rots — but centralisation is what makes real test coverage achievable in the first place.
When to centralise, when not to
The second occurrence of a risky pattern is the signal. One instance might be a one-off. Two means the pattern is real. Three means the bug class is in more places than you've found.
The pattern has to be genuinely shared. Centralising something whose correct behaviour varies meaningfully across callers produces a mechanism with fifteen configuration knobs — usually worse than the scattered implementations it replaced. If the risk manifests differently in each context, forced centralisation is artificial. Leave legitimately divergent logic scattered.
The centralised path needs to be ergonomic. If delegating to the specialist costs meaningfully more effort than doing the risky thing yourself, people will route around it. The interface to the centralised mechanism has to be low-friction — the same principle that makes interface design matter more than implementation quality: the interface is the part other humans have to live with.
For the software reader
Not a programmer? Skip to the takeaway.
Parameterised queries, standard in PostgreSQL and every other mainstream database, are a clean example. Before them, every developer had to remember to escape user input at every interpolation site — SQL injection risk distributed across every query in every codebase. Parameterised queries relocated that risk to the driver and query engine. An unsafe query now requires explicitly bypassing the standard path.
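A minimal sketch of the difference, using Python's standard-library `sqlite3` as a stand-in for any database driver (placeholder syntax varies by driver: `?` here, `%s` in PostgreSQL's psycopg):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(conn: sqlite3.Connection, name: str) -> list:
    # Distributed version: every call site escapes by hand, badly.
    #   conn.execute(f"SELECT name FROM users WHERE name = '{name}'")
    # Centralised version: the driver handles quoting at one choke point.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Pass `"'; DROP TABLE users; --"` as the name and the parameterised version returns an empty result set with the table intact; the interpolated version hands the payload straight to the engine.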
AWS SDK request signing follows the same pattern. Signature Version 4 is a multi-step HMAC process — and before the SDK abstracted it, teams hand-rolled it badly. The SDK centralised signing into one implementation: application code calls the client, the client signs. Signing became an implementation detail invisible to most users — while the SDK still exposes levers to power-users who need to optimise or debug at that layer. A good centralised interface contains the risk without hiding every knob.
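The shape of such a choke point can be sketched in a few lines. This is not real SigV4 (which adds canonical request formatting, credential scopes, and key derivation); it is a deliberately simplified stand-in showing why you want exactly one implementation of the crypto, with every caller going through it:

```python
import hashlib
import hmac

def sign_request(secret: str, method: str, path: str, payload: bytes) -> str:
    """Simplified stand-in for SDK request signing: one function computes
    the MAC, so application code never hand-rolls the crypto."""
    string_to_sign = "\n".join([
        method,
        path,
        hashlib.sha256(payload).hexdigest(),  # bind the signature to the body
    ])
    return hmac.new(
        secret.encode(), string_to_sign.encode(), hashlib.sha256
    ).hexdigest()
```

Every property worth testing — determinism, sensitivity to the payload, key handling — is now testable in one place, which is exactly the ROI argument from the previous section.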
Centralising a risk class almost always involves creating an interface that callers use instead of implementing the risky thing themselves. Behind the interface, the risk becomes an implementation detail. And because the complexity is hidden, it can evolve independently — validation, edge-case handling, and testing can be added or overhauled without touching any call site. In the early stages of a project, tests can even be absent and added later. Try doing that when the risk is scattered across thirty files.
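To make the "evolve independently" point concrete, here is a hypothetical `transfer` helper shown in two versions. Callers write the same line of code against both; the second version has grown validation behind the unchanged interface:

```python
# Version 1: a thin central wrapper. Callers just use the interface.
def transfer(ledger: dict, src: str, dst: str, amount: int) -> None:
    ledger[src] -= amount
    ledger[dst] = ledger.get(dst, 0) + amount

# Version 2: validation added behind the same interface. Every call
# site gains the checks without changing a line.
def transfer(ledger: dict, src: str, dst: str, amount: int) -> None:
    if amount <= 0:
        raise ValueError("amount must be positive")
    if ledger.get(src, 0) < amount:
        raise ValueError("insufficient funds")
    ledger[src] -= amount
    ledger[dst] = ledger.get(dst, 0) + amount
```

Had the subtraction-and-addition been pasted into thirty call sites instead, adding those two checks would mean thirty edits — and the one you miss is the one that bites.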
There's a relationship with interface stability here too. When the interface to the centralised risk point is clean — one call, one meaning — callers use it consistently and the central implementation can evolve. When the interface is awkward, engineers route around it — and the centralisation quietly unravels.
Findable risk
The hospital pharmacy didn't make nurses better at drug preparation. It made drug preparation not a thing nurses do — and put it in the hands of people for whom it is the whole job. The police bomb squad didn't give every patrol officer advanced ordnance training. It gave them a number to call.
When you find a risky pattern appearing in multiple places, the instinct is often to add more policy: training, checklists, documentation. But more policy applied broadly means more overhead for everyone — most of whom lack the context to apply it well. The stronger move is the opposite: give everyone a simple, low-overhead rule ("call pharmacy", "call the bomb squad", "use the SDK") and centralise the complex, high-overhead policy to a small specialist unit that actually understands it. Less burden on the many, more depth from the few. The callers don't need to understand how the risk is managed — they just need to know the interface.
When centralisation becomes part of the architecture or the culture, there is a compounding effect: the safe path becomes not just correct but obvious and easy. New team members, new code, and new processes all default to the centralised mechanism because it is the path of least resistance. Over time, doing the risky thing correctly requires less effort than doing it dangerously — which is precisely when you stop having to remind people.
Risk you can find is risk you can manage. Risk you can't find is risk that's managing you.