  • January 16, 2026
  • Arth Data Solutions

RBI’s Concerns: Complaints, Data Quality, Governance

By the time “RBI concerns” came up in the meeting, most people were already thinking about lunch.

It was the quarterly Board Risk Committee.

Slide 42 of 57: “Customer Complaints & Data Quality”.

The Chief Service Officer walked through it in three minutes:

• “Total complaints down 8% vs last quarter.”
• “RBI and Ombudsman cases within historical range.”
• “Bureau disputes closed within SLA at 97.6%.”

Next slide, from IT:

• “Core system data-quality exceptions reduced from 1.4% to 0.9% of records.”

The CRO added a line to reassure the room:

“We don’t see anything here that would significantly worry RBI. Governance structures are in place.”

Nobody asked what “governance structures” meant in practice.

Nobody asked if any of those complaints or exceptions had actually changed a credit decision.

Two weeks later, an inspection team’s interim note arrived.

The polite phrasing masked a harder message:

• Persistent patterns in credit-information complaints
• Weak traceability on data corrections across systems and CICs
• Governance committees that “noted” issues but didn’t record clear ownership or follow-through

Exactly the areas everyone had just brushed past in the Board meeting.

The internal assumption had been simple:

“As long as we close customer complaints in SLA, keep data-quality exceptions low, and show a governance chart, RBI won’t have major concerns.”

That assumption feels reasonable.

It’s also why the gap keeps surprising people who should not be surprised.

 

The belief: “SLA, low exception rates, and a governance chart are enough”

In a lot of banks and NBFCs we sit with, the line sounds like this when you strip the politeness away:

“RBI knows there will always be some complaints and data issues.

If we show low volumes, decent SLAs, and a formal committee structure, they won’t worry too much.”

You can see how that belief forms:

• Complaints are tracked as a service metric
  ◦ Total complaints
  ◦ Complaints per 10,000 accounts
  ◦ Turnaround time, especially for RBI / Ombudsman cases
• Data quality is tracked as an IT/operations metric
  ◦ Records failing validation in the bureau file
  ◦ Exceptions in daily ETL jobs
  ◦ Number of incidents raised and closed
• Governance is shown as an organogram
  ◦ Risk Committee
  ◦ IT Strategy Committee
  ◦ Customer Service Committee
  ◦ “Data Governance Council” created after an audit observation

On dashboards, this looks fine:

• Complaints trending down or flat
• Exceptions “within tolerance”
• Committees meeting “as per calendar”

Early on, RBI letters and inspection reports don’t say much more than:

• “Please ensure timely disposal of customer complaints.”
• “Data quality issues noted; management is advised to strengthen controls.”

It’s easy to translate that as: “Keep the numbers low, keep the SLAs green, we’re okay.”

What’s actually happening is different.

RBI is not just looking at counts, SLAs and charts.

It is using those three areas — complaints, data quality, governance — as signals of how seriously you treat responsibility for credit information and customer impact.

That’s where the mismatch sits:

• What’s going wrong: complaints, data issues and governance are treated as separate, low-level topics.
• Why it stays invisible: dashboards show volume and speed, not what went wrong or how it was fixed.
• What it costs: months of rework after inspections, tougher conversations on credibility, and a much smaller say in how regulators interpret your intentions.

 

What actually happens behind “green” complaints, data, and governance

When you watch these three areas over six to twelve months instead of one slide, a different picture shows up.

Complaints: the noise you don’t listen to

In one large retail-focused NBFC, the Customer Service Committee met every month.

On the standing MIS:

• Total complaints by channel
• Average resolution time
• Top three categories by volume

Credit information issues — “wrong CIBIL”, “loan closed but still open”, “DPD incorrect” — showed up as a thin slice, maybe 4–6% of total complaints.

The narrative in the room was predictable:

“Volumes are low. These are mostly perception issues; the customer doesn’t understand the report.

Our dispute closure SLA with CICs is above 95%. Nothing alarming.”

What was not on the slide:

• Repeat complaints from the same customer about the same issue
• Cases where internal systems had been corrected, but the CIC data lagged by one or two cycles
• Instances where status differed across two bureaus for the same loan

Those lived in email trails between branches, call centres, operations teams and the bureau liaison unit.

When an RBI inspection team later sampled consumer complaints, they didn’t care that complaint volumes were “only 5% credit-information related”.

They cared that:

• Some cases took three cycles to fully correct across all CICs
• Internal records and bureau reports diverged for longer than they should have
• There was no central view of patterns in those disputes

From RBI’s side, “complaints” had quietly turned into:

“Evidence that your credit information reporting and correction mechanisms may not work as cleanly as your policies suggest.”

The green SLA metric hadn’t captured that.
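
To make “a central view of patterns” concrete, here is a minimal sketch in Python. The field names (customer, account, cic, opened, fully_corrected) are invented, and a real extract from CRM and bureau-liaison systems will look different; the point is only that it answers the two questions the SLA metric skips: who keeps coming back, and how long it took until the record was actually consistent everywhere.

```python
# A minimal sketch, not a dispute-management system. Field names are invented;
# a real extract from CRM / bureau-liaison systems will differ.
from collections import Counter
from datetime import date

disputes = [
    {"customer": "C1", "account": "A1", "cic": "CIC-1",
     "opened": date(2025, 1, 5), "fully_corrected": date(2025, 4, 2)},
    {"customer": "C1", "account": "A1", "cic": "CIC-2",
     "opened": date(2025, 2, 7), "fully_corrected": None},   # still diverging at one CIC
    {"customer": "C2", "account": "A9", "cic": "CIC-1",
     "opened": date(2025, 3, 1), "fully_corrected": date(2025, 3, 20)},
]

# Repeat disputes: the same customer and account raised more than once.
repeats = Counter((d["customer"], d["account"]) for d in disputes)
repeat_cases = [pair for pair, n in repeats.items() if n > 1]

# Time until the record was consistent everywhere, not just "closed" in the CRM.
def days_to_full_correction(d):
    if d["fully_corrected"] is None:
        return None
    return (d["fully_corrected"] - d["opened"]).days

slow_or_open = [d for d in disputes
                if days_to_full_correction(d) is None
                or days_to_full_correction(d) > 60]

print("repeat disputes:", repeat_cases)
print("never fully corrected, or took more than 60 days:", len(slow_or_open))
```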

Data quality: treated as plumbing, not as a risk statement

In one mid-sized bank, the Data Quality MIS circulated weekly on email:

• Number of failed records in the bureau extract
• Top three validation errors
• Status of open data-quality incidents in the core system

IT and operations teams argued about whether the 1.2% failure rate could be forced below 1%.

Risk and business were mostly absent from the conversation.
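
For what it is worth, the weekly numbers themselves are trivial to produce. A minimal sketch, assuming each record in the bureau extract carries a list of validation errors (the error codes here are invented):

```python
# A minimal sketch of the weekly MIS numbers. Error codes are invented.
from collections import Counter

records = [
    {"account": "A1", "errors": []},
    {"account": "A2", "errors": ["DOB_MISSING"]},
    {"account": "A3", "errors": ["DPD_NEGATIVE", "STATUS_INVALID"]},
    {"account": "A4", "errors": []},
]

failed = [r for r in records if r["errors"]]
failure_rate = len(failed) / len(records)
top_errors = Counter(e for r in failed for e in r["errors"]).most_common(3)

print(f"failure rate: {failure_rate:.1%}")     # the number people argue about
print("top validation errors:", top_errors)
```

Which is arguably part of the problem: the argument stays at this level because this is all the MIS computes.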

Under the surface, the failures were not evenly distributed:

• Newer products introduced during the previous two years had higher error rates
• Co-lending pools and acquired portfolios were over-represented
• Certain restructuring and write-off codes were frequently misclassified

To the team maintaining the extract, these were just issues to be fixed:

• Map code A to B
• Add a transformation rule
• Back-fill a missing field

To RBI, when they looked at cross-sections of data across multiple lenders, those same patterns suggested:

• Weak understanding of how certain products should be reported
• Poor control over special situations like restructuring
• Inconsistent behaviour across institutions in similar segments

It is one thing to tell RBI, “We have a 0.9% error rate, down from 1.4%.”

It is another when they see that a disproportionate number of high-risk, sensitive, or policy-grade accounts sit inside that 0.9%.

The metric doesn’t say that.

The sample does.
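
Here is a minimal sketch of the gap between the metric and the sample, with invented segments and toy numbers: the headline rate stays comfortably low while one small portfolio fails at a rate nobody would defend.

```python
# A minimal sketch with toy numbers: the headline hides the concentration.
from collections import defaultdict

records = (
      [{"segment": "legacy_personal_loans", "failed": False}] * 497
    + [{"segment": "legacy_personal_loans", "failed": True}] * 3
    + [{"segment": "restructured", "failed": False}] * 16
    + [{"segment": "restructured", "failed": True}] * 4
)

totals, fails = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["segment"]] += 1
    fails[r["segment"]] += r["failed"]

print(f"headline failure rate: {sum(fails.values()) / len(records):.1%}")  # ~1.3%
for seg in totals:
    print(f"{seg}: {fails[seg] / totals[seg]:.1%}")                         # 0.6% vs 20.0%
```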

Governance: comfortable meetings that avoid the hard questions

On paper, governance looks tidy:

• Data Governance Council formed
• Quarterly meetings held
• Minutes recorded
• Attendees: CRO, CIO, Head of Operations, Head of Analytics, sometimes the CCO

When you sit in these meetings, a pattern often emerges:

• Issues are presented as “technology items” or “process improvements”
• Ownership is vague — “jointly with IT and Operations”, “to be reviewed by business and risk”
• Deadlines are “next meeting” or “next quarter”

One internal minute we saw captured a discussion on bureau reporting like this:

“The Council noted that occasional discrepancies between core system and CIC data exist.

IT and Operations to study and suggest improvements.

Council to be updated in next meeting.”

Three months later, the minutes read:

“IT and Operations presented broad approach for improving CIC data consistency.

Council appreciated the efforts and advised to continue.

No specific action items recorded.”

When RBI later reviewed these minutes as part of an inspection, the reaction was subdued but clear:

• Issues were acknowledged but not owned
• No concrete decisions, timelines or accountability
• No link between governance discussions and actual risk outcomes

From the institution’s side, governance was “in place”.

From RBI’s side, it was ceremonial.

 

Why RBI’s real concerns don’t show up in your dashboards

If complaints, data quality and governance are being tracked and committees are meeting, why do RBI observations still feel sharper than expected?

Because what’s being measured internally is not what RBI is reading.

Dashboards show speed and count, not truth and correction

Service MIS shows:

• Complaints by volume, channel, and SLA
• RBI / Ombudsman cases with escalation status

It rarely shows:

• How many complaints were actually valid
• How many required changes to internal data
• How often those changes flowed through to all CICs and all systems

Data-quality MIS shows:

• Exceptions as a percentage
• Number of records fixed
• Incident closure time

It rarely shows (a minimal sketch of a richer exception record follows this list):

• Where in the lifecycle the errors originated
• Which products, partners or branches are repeat offenders
• Whether the fixes changed any reported numbers to RBI or CICs
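
A minimal sketch of what an exception record could carry so the MIS can answer those three questions. Every field name is invented; the shape is the point, not the tooling.

```python
# A minimal sketch of a richer exception record. Field names are invented.
from dataclasses import dataclass

@dataclass
class DataQualityException:
    account_id: str
    field: str                    # e.g. "DPD", "account_status", "write_off_flag"
    origin: str                   # lifecycle stage where the error was introduced
    product: str                  # product / partner / branch, for repeat-offender views
    old_value: str
    new_value: str
    changed_reported_value: bool  # did the fix alter what a CIC or RBI was told?

exc = DataQualityException(
    account_id="A123", field="DPD", origin="collections_upload",
    product="co_lending_pool_X", old_value="0", new_value="45",
    changed_reported_value=True,
)
print(exc)
```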

Governance decks show:

• Committee calendar
• Attendance
• Number of issues “discussed”

They rarely show:

• Decisions that materially changed behaviour
• Issues that were escalated because they could not be resolved at lower levels
• Places where governance chose not to compromise, even when it hurt in the short term

From inside, the picture looks under control.

From outside, RBI sees a set of symptoms that suggest something different:

• Complaints that are technically “closed” but keep reappearing in a different form
• Data corrections that patch the symptom, not the cause
• Committees that hear everything and decide nothing

Early warnings are handled too low in the hierarchy

The first people to see RBI-grade concerns are almost never in the Board Room:

• Call centres and branches see customers who insist their report is wrong
• Operations teams see manual workarounds grow in certain portfolios
• IT teams see repeated failures in the same data pathways
• Mid-level risk or analytics staff notice that certain corrections subtly change portfolio statistics

But their reporting line is:

• To a service manager whose KPI is “complaints within SLA”
• To an IT lead whose KPI is “incidents closed”
• To an operations manager whose KPI is “files sent on time”

None of those KPIs are about “what would this look like to RBI six months from now”.

So the signals are handled locally, even when they are systemic.

By the time they show up in an inspection report as “concerns on complaints, data quality, and governance”, senior leadership is genuinely surprised.

They shouldn’t be. But given how the information is organised, it is almost inevitable.

 

What more experienced teams quietly do with these three themes

Banks and NBFCs that cope better with RBI’s concerns in this area don’t have dramatically better technology or zero issues.

They just refuse to treat complaints, data quality and governance as three separate boxes.

A few patterns repeat.

They let complaints feed into risk, not just service

In one lender, the CRO asked for a simple thing:

• Once a quarter, the Risk Management Committee would see a one-page view of credit-information complaints.

Not all complaints. Only these:

• Number of complaints where internal data was actually wrong
• How many led to changes in bureau data
• How many were repeat complaints for the same customer or branch
• How long it took to fully correct the issue across all CICs and internal systems

It was a rough page, prepared jointly by customer service and risk.
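
Producing that page does not need much machinery. A minimal sketch of the four numbers, assuming a flat complaints extract with invented field names:

```python
# A minimal sketch of the four numbers on the one-page view. Field names invented.
from collections import Counter

complaints = [
    {"customer": "C1", "data_was_wrong": True, "bureau_changed": True,
     "days_to_full_correction": 74},
    {"customer": "C1", "data_was_wrong": True, "bureau_changed": False,
     "days_to_full_correction": None},          # still not corrected everywhere
    {"customer": "C2", "data_was_wrong": False, "bureau_changed": False,
     "days_to_full_correction": 3},
]

genuinely_wrong = sum(c["data_was_wrong"] for c in complaints)
led_to_bureau_change = sum(c["bureau_changed"] for c in complaints)
repeat_customers = [k for k, n in
                    Counter(c["customer"] for c in complaints).items() if n > 1]
closed = [c["days_to_full_correction"] for c in complaints
          if c["days_to_full_correction"] is not None]
avg_days_to_full_correction = sum(closed) / len(closed) if closed else None

print(genuinely_wrong, led_to_bureau_change,
      repeat_customers, avg_days_to_full_correction)
```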

But over two or three cycles, patterns emerged:

• Certain products had more genuine issues than others
• Some branches had a habit of “closing” cases internally before external corrections finished
• A co-lending portfolio generated more bureau disputes than its size justified

Nobody celebrated this as a new initiative.

It just meant that complaints were no longer treated as background noise.

They treat data quality as a statement they are making to others

In another institution, the data-quality view for bureau reporting was changed in a small but important way.

The MIS didn’t stop at “0.8% failed records”.

It highlighted:

• How many failed records were from newer products vs legacy products
• How many involved DPD, status, closure or write-off fields
• Whether those corrections changed the borrower’s visible risk profile

That shift changed the conversation.

When the CRO saw that a chunk of exceptions involved accounts in higher-risk buckets or special situations, the debate stopped being about whether 0.8% was “within tolerance”.

It became about:

“Are we comfortable mis-stating the behaviour of these specific accounts to the rest of the system?”

The answer was usually no.

Work got done accordingly.
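
The check behind that debate (“did this correction change the borrower’s visible risk profile?”) can be made mechanical. A minimal sketch, with illustrative DPD buckets and invented field names; real reporting follows the DPD and status conventions in the CIC file formats, which this does not try to reproduce:

```python
# A minimal sketch: did a correction move the account across a visible risk
# boundary? Bucket edges and field names are illustrative only.
SENSITIVE_FIELDS = {"DPD", "account_status", "write_off_flag"}

def dpd_bucket(dpd: int) -> str:
    if dpd == 0:
        return "current"
    if dpd <= 30:
        return "1-30"
    if dpd <= 90:
        return "31-90"
    return "90+"

def correction_changes_risk_profile(field: str, old: str, new: str) -> bool:
    if field not in SENSITIVE_FIELDS:
        return False
    if field == "DPD":
        return dpd_bucket(int(old)) != dpd_bucket(int(new))
    return old != new   # status or write-off changes are visible by definition

print(correction_changes_risk_profile("DPD", "15", "45"))  # True: 1-30 -> 31-90
print(correction_changes_risk_profile("DPD", "5", "20"))   # False: same bucket
```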

They make governance minutes specific enough to be traceable

In some places, governance meeting minutes read very differently.

Instead of:

“Council noted data-quality issues and advised to strengthen controls.”

You see entries like:

• “Council decided that all DPD corrections above 30 days will require sign-off from Collections Head and CRO’s delegate.”
• “Council agreed that no product will go live without a documented bureau reporting and correction flow; CIO and CRO jointly accountable.”
• “Council closed the open item on co-lending bureau status for X portfolio; any deviations to be reported in the next meeting with specific counts.”

These are not grand statements.

They are small, concrete decisions.

When RBI later looks at minutes, they can see:

• Issues raised
• Decisions made
• People named
• Follow-through recorded

It doesn’t make the problems vanish.

It shows that governance is not just listening to problems; it is changing behaviour.
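
None of this needs special tooling. Even a small, structured action register (a shared sheet would do) is enough to make minutes traceable. A minimal sketch of the shape, with invented fields and the DPD sign-off decision above as the example:

```python
# A minimal sketch of a governance action register. The structure is the point;
# field names are invented and a shared spreadsheet would serve equally well.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ActionItem:
    raised_on: date
    issue: str
    decision: str
    owner: str                      # a named role, not "IT and Operations jointly"
    due: date
    closed_on: Optional[date] = None
    evidence: str = ""              # where the follow-through is recorded

register = [
    ActionItem(
        raised_on=date(2025, 4, 10),
        issue="DPD corrections above 30 days",
        decision="Require sign-off from Collections Head and CRO's delegate",
        owner="Head of Collections",
        due=date(2025, 6, 30),
    ),
]

overdue = [a for a in register if a.closed_on is None and a.due < date.today()]
print(f"{len(overdue)} overdue action item(s)")
```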

 

A quieter way to read RBI’s concerns on complaints, data quality and governance

If you keep the assumption that:

“RBI just wants low complaint volumes, decent SLAs, and visible committees,”

then your effort will stay focused on:

• Keeping the service dashboard pretty
• Keeping exception rates low
• Keeping governance charts up to date

You will still get observations that feel sharper than you think you deserve.

If you accept that, in these three areas, RBI is really asking a different question:

• Do you listen when customers tell you your data is wrong?
• Do you understand the implications when your data doesn’t reflect reality?
• Do your governance forums change anything, or just record that they met?

then complaints, data quality and governance stop being side topics.

They become three places where your institution either proves that it takes responsibility for credit information seriously — or quietly shows that it doesn’t.

At that point, the useful internal question is no longer:

“Are our complaints, data-quality exceptions, and governance trackers green?”

It becomes:

“If RBI only looked at how we handle complaints about credit information,

the quality of what we send out,

and the decisions our governance actually lands,

would they see a bank that is merely compliant — or one that understands what it owes the system?”