Legacy Modernisation in Regulated Industries
Modernising a core banking system is not like refactoring a startup. Here is a practical framework for legacy modernisation in regulated financial institutions.
Every financial institution we have worked with carries the same invisible weight. Somewhere in the stack — beneath the mobile app, beneath the API gateway, beneath the middleware that someone wrote in 2007 — there is a core system that runs on software that predates smartphones. It processes real money, real transactions, and real customer obligations, every day, without fail. And nobody wants to touch it.
This is the legacy modernisation dilemma, and it is not a technology problem. It is an institutional one. The systems that power regulated financial institutions were built to last, and they have. But the decades of patches, workarounds, and bolt-on integrations have compounded into technical debt that makes every new capability slower and more expensive to deliver. The stakes are different from any other software modernisation context: regulatory penalties, customer data at risk, and uptime requirements measured in nines that most engineering teams never have to think about. “Move fast and break things” is not a viable philosophy when breaking things means a payment system goes down during Hari Raya weekend.
The question is not whether to modernise. It is how to do it without breaking the trust — regulatory, customer, and operational — that the institution has spent decades building.
Why Regulated Legacy Is Different
The first thing we tell clients who come to us with a legacy modernisation brief is to set aside most of what they have read about microservices migration. The playbooks written for e-commerce platforms and SaaS companies do not transfer cleanly to core banking.
Core banking systems (CBS) in Malaysia, Singapore, and across Southeast Asia frequently run on COBOL or proprietary middleware platforms from vendors like Temenos, Finacle, or Flexcube — versions that are sometimes a decade or more behind their current releases. These are not failing systems. They process millions of transactions accurately, every day. The problem is not that they don’t work. It is that they are sealed environments: opaque, monolithic, and extraordinarily difficult to extend without risk.
Data residency and audit trail requirements add a layer of constraint that lift-and-shift cloud migration cannot satisfy. Bank Negara Malaysia’s Risk Management in Technology (RMiT) Policy Document, effective 2020, requires financial institutions to maintain clear data governance, ensure system resilience, and notify BNM of material technology changes. The Monetary Authority of Singapore’s Technology Risk Management Guidelines impose similar obligations. You cannot migrate a core banking database to a public cloud provider without working through data classification, sovereignty requirements, and a formal risk assessment — and none of that happens in a sprint.
Change management in regulated environments also operates differently. Regulators require documented evidence of every material system change: what changed, when, who authorised it, and what the rollback plan was. The “strangler fig” pattern — the standard architectural approach for incrementally replacing a legacy system — must be adapted when the vine itself is subject to audit. Every new service you introduce alongside the legacy system is a new surface area that regulators will want to understand.
The Three-Phase Decomposition Framework
We have settled on a three-phase approach that respects the constraints of regulated environments while still making meaningful progress. It is not original — it draws on well-established patterns from domain-driven design and evolutionary architecture — but the sequencing and the compliance considerations are what we have refined through experience.
Phase 1: Observe
The single most common mistake in legacy modernisation programmes is skipping straight to design. Before we write a single line of new code, we spend weeks — sometimes months — observing the existing system.
This means instrumenting the legacy system with monitoring agents and log aggregators to map actual data flows, not the flows that exist in the (often outdated) documentation. It means identifying bounded contexts: the functional areas that have strong internal cohesion and loose coupling to adjacent areas. In a core banking system, these typically include accounts, payments, lending, reporting, notifications, and user management — but the actual boundaries are always messier than the theory.
Observation also means identifying every downstream consumer of the legacy system: the batch jobs that run at midnight, the reporting extracts that feed the finance team’s spreadsheets, the third-party integrations with credit bureaus and payment networks. These are the integration surfaces that kill modernisation programmes at UAT.
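To make the observation phase concrete, here is a minimal sketch of how an integration-surface inventory might be derived from observed traffic rather than from documentation. The log format, client names, and table names here are entirely hypothetical; the point is the technique of counting who actually touches what.

```python
import re
from collections import Counter

# Hypothetical legacy access-log line, e.g.:
#   "2024-03-01T00:05:12 client=finance-batch-07 op=EXTRACT table=GL_POSTINGS"
LOG_LINE = re.compile(r"client=(?P<client>\S+)\s+op=(?P<op>\S+)\s+table=(?P<table>\S+)")

def inventory_consumers(log_lines):
    """Count which clients touch which tables: a first-pass map of the
    integration surface, built from observed traffic, not documentation."""
    surface = Counter()
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m:
            surface[(m.group("client"), m.group("table"))] += 1
    return surface

sample = [
    "2024-03-01T00:05:12 client=finance-batch-07 op=EXTRACT table=GL_POSTINGS",
    "2024-03-01T00:05:13 client=credit-bureau-gw op=READ table=LOANS",
    "2024-03-01T02:00:00 client=finance-batch-07 op=EXTRACT table=GL_POSTINGS",
]
for (client, table), hits in sorted(inventory_consumers(sample).items()):
    print(f"{client} -> {table}: {hits} access(es)")
```

In practice this runs over weeks of aggregated logs, and the surprising entries — the clients nobody on the team can name — are exactly the downstream consumers that would otherwise surface at UAT.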
We do not make changes to the legacy system in this phase. We only watch.
Phase 2: Extract
Once we have a reliable map of the system’s behaviour, we begin extracting peripheral services — deliberately starting with the lowest-risk functional areas.
Notifications, reporting, and user management are typically good early candidates. They are real business functions, they have clear boundaries, and failures in them are recoverable without financial consequence. We build each extracted service behind an anti-corruption layer (ACL) — an adapter that translates between the legacy system’s data model and the new service’s domain model. The ACL is critical: it prevents the old system’s conceptual model from contaminating the new architecture.
Legacy CBS
├── ACL (Anti-Corruption Layer)
│   ├── Notification Service (new)
│   ├── Reporting Service (new)
│   └── User Management Service (new)
└── Core Transaction Processing (untouched)
During extraction, the legacy system remains the source of truth. The new services read from it (via the ACL) or receive events from it, but do not write back independently. This keeps the data consistency model simple while the new services are being proved out.
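As an illustration of the ACL, a minimal adapter might look like the sketch below. The legacy record shape, field names, and status codes are invented for the example; the pattern is what matters — one translation point, with the legacy model never leaking past it into the new services.

```python
from dataclasses import dataclass
from decimal import Decimal

# Hypothetical legacy record: padded strings, amounts stored as integer
# cents, status encoded as single characters -- typical of an old CBS.
@dataclass
class LegacyAccountRecord:
    acct_no: str
    bal_cents: int
    sts: str  # "A" = active, "D" = dormant, "C" = closed

# Clean domain model used by the new services.
@dataclass
class Account:
    account_id: str
    balance: Decimal
    status: str

class AccountAcl:
    """Anti-corruption layer: the ONLY component that knows both models.
    New services never see LegacyAccountRecord directly."""
    _STATUS = {"A": "active", "D": "dormant", "C": "closed"}

    def to_domain(self, rec: LegacyAccountRecord) -> Account:
        return Account(
            account_id=rec.acct_no.strip(),
            balance=Decimal(rec.bal_cents) / 100,
            status=self._STATUS.get(rec.sts, "unknown"),
        )

acl = AccountAcl()
print(acl.to_domain(LegacyAccountRecord("  0012345 ", 150_000, "A")))
```

The translation table for status codes looks trivial here; in a real CBS it is where twenty years of encoding quirks get quarantined, which is exactly why it belongs in one place.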
Core transaction processing — the actual ledger, payment processing, and settlement — is not touched in this phase. The risk profile is simply too high, and we have not yet built enough confidence in our understanding of the system to attempt it.
Phase 3: Replace
By the time we reach Phase 3, we have a working set of extracted peripheral services, a reliable data flow map, and months of operational experience running the hybrid architecture. Only now do we begin the incremental replacement of the core.
The technique we rely on is traffic shadowing: the new core system runs in parallel with the legacy, receiving the same inputs and producing outputs that are continuously reconciled against the legacy outputs. We do not cut over until reconciliation tests pass consistently and confidence thresholds — defined in advance with the business and the risk team — are met.
This parallel-running period is expensive. It requires maintaining two complete systems simultaneously, with all the operational overhead that implies. But in a regulated environment, it is non-negotiable. Regulators expect you to be able to demonstrate that the new system produces correct outputs before you rely on it.
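The reconciliation step at the heart of parallel running can be sketched as follows. This is a simplified illustration — real reconciliation suites compare many fields per transaction and stream results into alerting — but the shape is the same: pair up outputs by transaction, flag mismatches, and flag anything present on only one side.

```python
from decimal import Decimal

def reconcile(legacy_out: dict, shadow_out: dict,
              tolerance: Decimal = Decimal("0")) -> list:
    """Compare per-transaction amounts from the legacy and new (shadow)
    systems. Returns discrepancies; an empty list means an exact match."""
    issues = []
    for txn_id, legacy_amount in legacy_out.items():
        shadow_amount = shadow_out.get(txn_id)
        if shadow_amount is None:
            issues.append((txn_id, "missing in shadow"))
        elif abs(shadow_amount - legacy_amount) > tolerance:
            issues.append((txn_id, f"legacy={legacy_amount} shadow={shadow_amount}"))
    for txn_id in shadow_out.keys() - legacy_out.keys():
        issues.append((txn_id, "extra in shadow"))
    return issues

legacy = {"T1": Decimal("100.00"), "T2": Decimal("250.10")}
shadow = {"T1": Decimal("100.00"), "T2": Decimal("250.11"), "T3": Decimal("9.99")}
for txn_id, detail in reconcile(legacy, shadow):
    print(txn_id, detail)
```

Note the use of `Decimal` rather than floats: binary floating point cannot represent most decimal amounts exactly, and a reconciliation suite built on floats will report phantom discrepancies.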
Compliance-First Engineering Decisions
Several engineering decisions must be made with compliance requirements as the primary constraint, not an afterthought.
Immutable audit logs. Every state change in a financial system must be captured in a way that is tamper-evident and queryable for regulatory review. We use append-only event stores with cryptographic hashing (each log entry includes the hash of the previous entry, making retroactive modification detectable) rather than mutable database tables. This is not gold-plating — it is the minimum standard for a regulated financial system.
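The hash-chaining idea is simple enough to show in a few lines. The sketch below is an in-memory toy — a production event store would persist entries durably and anchor the chain externally — but it demonstrates why retroactive modification is detectable: changing any past entry breaks every hash after it.

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry embeds the hash of the previous
    entry, so any retroactive modification breaks the chain."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "hash": digest})

    def verify(self) -> bool:
        """Recompute the whole chain; False means tampering somewhere."""
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"txn": "T1", "amount": "100.00"})
log.append({"txn": "T2", "amount": "250.10"})
print(log.verify())  # True for the intact chain
```

Tamper-evidence is not tamper-prevention: an attacker with write access could rebuild the whole chain, which is why production systems also anchor periodic chain hashes somewhere the application cannot rewrite.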
Dual-running reconciliation. During the transition period, both the legacy and new systems must produce identical outputs for the same inputs. We build automated reconciliation test suites that run continuously in the parallel-running environment and alert on any discrepancy. A 0.01% discrepancy in transaction amounts sounds small. At the scale of a financial institution, it is not.
Regulatory notification under RMiT. BNM’s RMiT framework requires notification of material technology changes. “Material” is defined broadly — system replacements, significant infrastructure changes, and new third-party service providers all typically qualify. We include regulatory notification milestones in the project plan from day one, because discovering this requirement at the end of a phase is a programme-stopping event.
Data classification and lifecycle. PII, financial transaction records, and audit logs have different retention periods, encryption requirements, and access control policies under Malaysian data protection law (PDPA), BNM guidelines, and internal data governance frameworks. A modernisation programme that migrates data without first classifying it is creating new compliance risk while trying to reduce old technical risk.
Shift-left compliance. We embed automated compliance checks in the CI/CD pipeline: data classification validators, encryption configuration checks, and access control policy tests that run on every pull request. Catching a misconfigured S3 bucket policy in code review is several orders of magnitude cheaper than catching it in a regulatory audit.
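A policy-as-code check of this kind can be very small. The sketch below validates a storage-resource definition against three rules; the config schema and field names are hypothetical, and in a real pipeline a check like this would run over every resource definition touched by a pull request and fail the build on any violation.

```python
VALID_CLASSIFICATIONS = {"public", "internal", "confidential", "restricted"}

def check_storage_policy(resource: dict) -> list:
    """Minimal policy-as-code check over a storage-bucket config dict
    (hypothetical schema). Returns a list of violations; empty = pass."""
    violations = []
    if not resource.get("encryption_at_rest", False):
        violations.append("encryption_at_rest must be enabled")
    if resource.get("public_access", False):
        violations.append("public_access must be disabled")
    if resource.get("data_classification") not in VALID_CLASSIFICATIONS:
        violations.append("data_classification missing or invalid")
    return violations

bucket = {
    "encryption_at_rest": True,
    "public_access": False,
    "data_classification": "confidential",
}
print(check_storage_policy(bucket))  # [] -- compliant
```

The same pattern extends to access-control policies and retention settings; the value is less in any single rule than in the fact that every rule runs on every change, automatically.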
Common Failure Modes
Having seen enough legacy modernisation programmes fail, we can be candid about what goes wrong.
Big bang migrations remain the most common failure mode despite being the best-documented one. An institution decides to rewrite the core banking system over 18 months, puts it through UAT, and discovers that the new system cannot handle the edge cases that accumulated in the legacy system over twenty years of production use. The programme is cancelled, the legacy system gets a new lease on life, and everyone is exhausted.
Underestimating the integration surface is the second most common. Teams focus on migrating the core system and discover — too late — that there are forty downstream consumers they did not know about: Excel-based reports generated by batch jobs, third-party systems that depend on undocumented API behaviours, and manual processes built around quirks of the old system. The observation phase exists precisely to surface this.
Skipping straight to rewriting is usually a consequence of organisational pressure. The observation phase feels slow and produces no visible output. Stakeholders push for visible progress. We resist this because we have seen the alternative: teams that start building before they understand the system spend the next year discovering requirements they could have mapped in the first three months.
Team structure is underestimated as a programme risk. A legacy modernisation programme needs a dedicated “bridge team” that carries accountability for both the old and new systems throughout the transition. Without this, neither system gets adequate attention: the legacy operations team focuses on keeping production running, the new development team focuses on building features, and nobody owns the integration seam between them.
Discipline Is the Differentiator
Legacy modernisation in regulated industries is not, at its core, a technology problem. The technology is tractable — the patterns are well understood, the tooling exists, and the architectural choices are finite. What makes these programmes succeed or fail is the discipline to follow the framework: to observe before building, to extract incrementally rather than rewrite wholesale, to treat compliance as a first-class engineering constraint rather than a sign-off step at the end.
We have seen institutions with sophisticated technology teams fail at this because they skipped phases under schedule pressure. We have seen institutions with relatively modest engineering capability succeed because they were methodical. The framework matters less than the discipline to follow it.
Related Reading
- Legacy Modernisation vs. Full Rewrite: How to Decide — The decision framework: when to modernise incrementally vs. when a rewrite might be warranted.
- AI in Financial Services: Moving from Pilot to Production — How to layer AI capabilities on top of a modernised financial services architecture.
- Case study: Legacy ERP Cloud Migration — How Nematix delivered an enterprise ERP migration for a regulated industry client.