Generative AI
GenAI for Regulatory Reporting: Banks and Insurers
Apr 01, 2025


BNM submissions, SC filings, and Basel III narratives are largely manual. GenAI can cut regulatory narrative drafting time by 50-70% without shifting accountability.


A regulatory reporting team at a Malaysian bank produces dozens of submissions per month. Monthly BNM statistical returns. JKDM customs and trade finance reports. Quarterly Basel III capital adequacy reports, with the Pillar 3 disclosure narrative. Internal audit responses. SC filings for capital markets activities. Incident reports. Regulatory query responses.

The data compilation for these submissions — pulling the numbers from core banking systems, treasury platforms, and risk engines — is largely automated or semi-automated. What remains stubbornly manual is the narrative: the commentary explaining the figures, the contextual analysis required by the submission template, the response to a regulatory question that requires synthesising data from three systems into a coherent paragraph.

This narrative drafting is exactly the task that LLMs are well-suited for. Structured data in; coherent, template-consistent text out. Compliance teams that have deployed GenAI in this workflow report 50 to 70% reductions in time spent on narrative drafting. That is a significant efficiency gain in a function where staff time is scarce and the work is genuinely arduous.

Getting to that outcome requires understanding precisely where the LLM belongs in the workflow and where it does not.

The Regulatory Reporting Burden

To understand the opportunity, it helps to be precise about what regulatory reporting actually involves.

A mid-sized Malaysian commercial bank with an investment banking arm and insurance subsidiary typically maintains a reporting calendar with 40 to 60 distinct submission types per year, many with monthly or quarterly frequency. The Basel III capital report alone — covering Capital Adequacy Ratio, Tier 1 and Tier 2 capital components, Risk-Weighted Assets, Leverage Ratio, and Liquidity Coverage Ratio — requires both quantitative tables and narrative commentary explaining movements, material changes, and management’s assessment of the institution’s capital position. The narrative sections can run to several thousand words for a complex reporting period.
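The ratios behind that commentary are simple arithmetic over the compiled figures. A minimal sketch, using purely illustrative numbers (not real institutional data):

```python
# Illustrative figures only -- not drawn from any real institution.
tier1_capital = 12_500.0   # RM million, CET1 plus Additional Tier 1
tier2_capital = 2_300.0    # RM million
rwa = 98_000.0             # risk-weighted assets, RM million

# Headline ratios reported in the Basel III capital tables.
tier1_ratio = tier1_capital / rwa * 100
total_car = (tier1_capital + tier2_capital) / rwa * 100

print(f"Tier 1 ratio: {tier1_ratio:.2f}%")   # 12.76%
print(f"Total CAR:   {total_car:.2f}%")      # 15.10%
```

The narrative sections then explain why these numbers moved, which is the part that remains manual.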

The compliance and reporting teams handling this work are typically small relative to the volume. A reporting team of six to ten people managing 40 to 60 submission types means each team member carries a significant portfolio. During peak reporting periods — end of quarter, annual report preparation, simultaneous BNM and SC submission deadlines — the team is at capacity or beyond it.

The cost of errors in this context is not simply remediation effort. Regulatory submissions with errors or inconsistencies create examination risk, relationship risk with the regulator, and — in serious cases — enforcement exposure. The stakes create pressure for thoroughness that is difficult to sustain at the volumes and pace required.

Where GenAI Adds Value

The workflow contribution of GenAI in regulatory reporting is confined to a specific step: the generation of first-draft narrative from structured data inputs. Understanding this precisely matters because the temptation to expand the scope — to let the LLM interpret the data, assess compliance, or draft submissions for filing without review — is where the risk emerges.

Capital adequacy commentary is one of the strongest use cases. A Basel III capital report requires commentary on the period’s Capital Adequacy Ratio movement, the composition and quality of capital, any material changes to Risk-Weighted Assets, and the institution’s assessment of its capital adequacy relative to its risk appetite. All of this information is available in structured form from the data compilation layer. The LLM receives the prior period figures, current period figures, the allowable movement ranges, and the narrative template; it produces a first-draft commentary that the compliance officer reviews and edits. The editing task — verifying the narrative against the figures, adjusting tone, adding context from management discussions — takes a fraction of the time that drafting from scratch would require.

Liquidity Coverage Ratio and Net Stable Funding Ratio narratives follow the same pattern. The quantitative inputs are well-defined; the narrative explains the drivers of the ratio movement and management’s view of the liquidity position. This is formulaic enough that a well-prompted LLM produces a usable first draft consistently.

Regulatory circular summarisation is a distinct but high-value application. BNM, the SC, and Labuan FSA issue policy documents, consultation papers, and regulatory circulars regularly. Each document requires the compliance team to read it, assess applicability to the institution’s activities, identify specific obligations or changes required, set implementation timelines, and draft an internal communication to relevant business units. For a busy compliance team, this cycle takes several days per significant circular. An LLM that produces a structured summary — key changes, affected product lines, required actions, implementation timeline, suggested owner — from the full regulatory text reduces that cycle to hours. The compliance officer reviews the summary, corrects any errors, adds institutional context, and communicates.
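The structured summary described above can be pinned down as an explicit schema, so the LLM output is constrained and the review step has a fixed shape. A sketch under stated assumptions — the class, field names, and example values are all hypothetical, not an existing library or BNM format:

```python
from dataclasses import dataclass

@dataclass
class CircularSummary:
    """Structured output for one regulatory circular (illustrative schema)."""
    circular_id: str                  # the regulator's reference number
    key_changes: list[str]            # what the circular actually changes
    affected_product_lines: list[str] # applicability, to be confirmed by compliance
    required_actions: list[str]       # obligations identified in the text
    implementation_deadline: str      # ISO date, verified against the circular
    suggested_owner: str              # proposed business-unit owner
    reviewed: bool = False            # flipped only after human review

# Hypothetical example of a reviewed-pending summary:
summary = CircularSummary(
    circular_id="HYPOTHETICAL-2025-01",
    key_changes=["Revised liquidity reporting threshold"],
    affected_product_lines=["Treasury", "Trade finance"],
    required_actions=["Update internal reporting policy"],
    implementation_deadline="2025-12-31",
    suggested_owner="Group Treasury Operations",
)
```

Keeping `reviewed` as an explicit flag makes the human sign-off step visible in the data itself rather than implicit in the process.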

Internal compliance update memos — the internal communications that translate regulatory changes into business unit action — are a natural downstream use case from circular summarisation. The LLM has already summarised the circular; generating a draft memo that communicates the key points in plain language to a non-compliance audience is a small additional step that saves the compliance officer the most time-consuming part of the communication task.

Draft responses to regulatory queries are a higher-sensitivity application, but one that delivers significant value when designed correctly. When BNM issues a query about a specific aspect of the institution’s operations — a request for information on a risk management process, a question arising from examination findings — the compliance team must research the answer, compile relevant data, and draft a coherent response. The LLM, given the query, the relevant internal documentation, and the data, can produce a draft response structure that the compliance officer and legal team then review, supplement, and approve. The research and drafting step — historically the most time-consuming part of a query response — is accelerated. The review, approval, and sign-off remain human.

Where GenAI Should Not Be Trusted Without Review

There are boundaries in regulatory reporting where LLM involvement without human review is not acceptable, and being explicit about these boundaries is as important as identifying the value cases.

Any submission that goes directly to BNM, the SC, or another regulator without human sign-off should not involve LLM-generated content that has not been reviewed by a qualified compliance professional. This is not a risk preference — it is a fundamental accountability requirement. The institution is represented by its regulatory submissions, and the accountability for their accuracy rests with the head of compliance and the board, not with any tool used in preparation.

Interpretation of regulatory requirements is not an LLM task in a production workflow. Whether a new BNM policy document applies to a specific product structure, what “significant transaction” means for the purposes of a particular reporting threshold, or how a new requirement interacts with an existing internal policy — these interpretive questions require legal and compliance expertise. An LLM will produce an answer; the answer may be wrong; and in a regulatory context, acting on a wrong interpretation has consequences that extend beyond fixing the error.

The quantitative data that feeds into regulatory submissions — the numbers in the Basel III tables, the transaction volumes in the JKDM returns, the income figures in the Pillar 3 disclosures — must not be generated or modified by the LLM. The LLM generates narrative from structured data; it does not generate or manipulate the structured data itself. This distinction must be enforced at the workflow level, not assumed from the model.

The Workflow That Works

The workflow that delivers the efficiency gains while maintaining regulatory accountability has five steps, and the LLM is in step three.

Step one: Data compilation. The quantitative inputs are pulled from source systems — core banking, treasury, risk engine — by the existing automated or semi-automated data pipelines. This step is not changed by GenAI. The figures are correct, sourced from systems of record, and reviewed by the data owner.

Step two: Structured summary. The compiled data is organised into a structured input document — the figures for the current period, comparison figures from the prior period and prior year, any notable movements that require explanation, and the reporting template requirements. This is the package that the LLM receives.

Step three: LLM drafts narrative. The LLM generates a first-draft narrative for each section of the submission that requires commentary. It receives the structured data, the template structure, and any specific guidance (regulatory style requirements, standard phrasings, prior period narrative for reference). The output is a complete draft narrative, not a summary or bullet points — something the compliance officer can read and edit directly.
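Step three can be sketched as assembling a prompt from the structured package built in step two. The function and field names below are hypothetical illustrations, not a production interface; the point is that the LLM receives compiled, reviewed figures and is explicitly barred from inventing new ones:

```python
def build_narrative_prompt(section: str,
                           current: dict,
                           prior: dict,
                           template_guidance: str,
                           prior_narrative: str) -> str:
    """Assemble the drafting prompt from the structured data package.

    Movements are computed here, in code, from the figures of record --
    the LLM explains them but never calculates or alters them.
    (Hypothetical sketch; field names illustrative.)
    """
    movements = {k: round(current[k] - prior[k], 4)
                 for k in current if k in prior}
    return (
        f"Draft the '{section}' commentary for a regulatory submission.\n"
        f"Current period figures: {current}\n"
        f"Prior period figures: {prior}\n"
        f"Movements to explain: {movements}\n"
        f"Template requirements: {template_guidance}\n"
        f"Prior period narrative, for style reference only:\n{prior_narrative}\n"
        "Do not introduce any figure not listed above."
    )
```

The closing instruction line reflects the boundary drawn earlier: narrative out, but no generated numbers.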

Step four: Compliance officer reviews and edits. The compliance officer reads the draft narrative against the figures, corrects errors, adjusts framing where institutional context requires it, and adds any information the LLM could not have included (management commentary from board discussions, context from regulatory conversations, forward-looking statements that require judgment). This step takes 20 to 40% of the time that drafting from scratch would take.

Step five: Sign-off and submission. The reviewed and edited submission goes through the institution’s normal approval workflow — compliance sign-off, CFO or CRO review where required, board approval for annual disclosures. The submission is then made through normal channels. The LLM’s involvement is not disclosed in the submission — it is an internal drafting tool, equivalent to a word processor. The institution is fully accountable for the content.

Audit Trail Requirements

The audit trail for regulatory reporting preparation must satisfy potential regulatory scrutiny — both in the ordinary course of examination and in the event of a query about how a specific submission was prepared.

The audit trail must capture the data inputs used for each submission cycle: the source systems, the extraction date, the figures as received from the data compilation step. It must capture the LLM-generated draft and the version the compliance officer started editing from. It must capture every edit made to the draft, by whom, and when. It must capture the approval chain — who reviewed and approved each section, with timestamps. And it must capture the final submitted version and the submission timestamp.

This is not a new requirement created by GenAI — regulatory reporting teams have always needed to document their processes for examination purposes. What GenAI adds is a new artefact in the chain: the LLM-generated draft. That artefact should be stored and retrievable alongside the other records.

In practice, this means the GenAI tooling must write to a document management or workflow system with appropriate access controls and retention settings, not to individual user file systems or email threads. The compliance function’s existing document retention infrastructure is the right place to capture and store these artefacts.
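One way to think about each entry in that chain is as a small, uniform record written to the retention system. A minimal sketch, with an assumed (hypothetical) schema — the artefact itself lives in the document management system, and the record keeps a hash for integrity checking:

```python
import hashlib
from datetime import datetime, timezone

def audit_record(submission_id: str, stage: str,
                 actor: str, artefact_text: str) -> dict:
    """One entry in the preparation audit trail (illustrative schema).

    `stage` might be 'data_input', 'llm_draft', 'edit', 'approval',
    or 'final_submission'. `actor` is always a person or a system of
    record -- never the LLM on its own.
    """
    return {
        "submission_id": submission_id,
        "stage": stage,
        "actor": actor,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hash of the artefact as stored, so later retrieval can be verified.
        "artefact_sha256": hashlib.sha256(artefact_text.encode()).hexdigest(),
    }
```

Writing one such record per artefact — data inputs, LLM draft, each edit, each approval, the final submission — gives the examiner-facing trail described above without changing the retention infrastructure itself.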

Practical Results

The efficiency gains from well-implemented GenAI in regulatory reporting narrative drafting are consistent across the deployments we have observed. Teams report 50 to 70% reduction in time spent on narrative drafting for the use cases where the LLM is well-suited — capital adequacy commentary, liquidity narratives, circular summarisation, and internal memo drafting.

The compliance officer’s experience of the work changes meaningfully. Instead of spending the majority of a reporting cycle writing, they spend the majority reviewing, editing, and verifying. For most experienced compliance professionals, this is a more satisfying and cognitively appropriate use of their expertise. The risk of a missed regulatory obligation due to fatigue from narrative drafting — a real operational risk at current reporting volumes — is reduced.

The saving does not manifest as a reduction in compliance headcount. Regulatory reporting volumes in Malaysia have grown consistently over the past decade, and the expectation among compliance professionals is that they will continue to grow. The efficiency gain from GenAI is absorbed by increased volume and by redeployment of compliance time to higher-value activities — regulatory relationship management, policy development, and examination preparation — rather than by headcount reduction.

Accountability Does Not Change

The fundamental principle of regulatory reporting is unchanged by GenAI: the institution is accountable for the accuracy, completeness, and timeliness of its regulatory submissions. The head of compliance is accountable. The board is accountable. The tool used to draft the narrative is not an actor in any accountability framework.

This is not a limitation of the technology. It is the correct allocation of responsibility in a regulated industry. GenAI reduces the labour cost of meeting regulatory reporting obligations. It does not — and should not — reduce the accountability for meeting them.

The institutions that are using GenAI effectively in regulatory reporting are the ones that have been clear internally about this distinction from the outset. The LLM is a drafting tool. The compliance professional who reviews it is the author. The institution that submits it is responsible. That clarity of role is what makes the efficiency gain real and the compliance risk manageable.


See how Nematix drives end-to-end digital banking transformation for financial institutions across Southeast Asia.