  • January 19, 2026
  • Arth Data Solutions

Meet India’s Four Credit Bureaus (The Way Lenders Actually See Them)

The argument usually starts in a room that was meant for something else.

It’s a credit policy offsite, or a retail strategy workshop.

Someone has presented a slide called “Multi-Bureau Strategy”. Four logos on one page: CIBIL, Experian, Equifax, CRIF High Mark.

The presenter walks through it:

·         “We’re members of all four credit bureaus in India.”

·         “We currently use CIBIL as primary for retail, CRIF for MFI and small-ticket, Experian/Equifax as secondary pulls.”

·         “Hit-rates are comparable; scores are broadly aligned in risk rank.”

Then the discussion drifts into cost:

“Can we reduce spends by limiting multiple pulls?”

“Do we really need two bureaus on every retail file?”

“For small-ticket journeys, can we just pick one and standardise?”

Eventually, someone senior closes it with a line that feels practical and harmless:

“Look, all four bureaus are broadly similar. We just need a sensible mix for pricing and redundancy. Let’s not overcomplicate this.”

The room nods. The slide moves on.

Six months later, the same institution is:

·         Confused by differences in roll-forward behaviour across portfolios that look similar

·         Arguing with a partner bank about which bureau to use for a co-lending pool

·         Answering RBI’s questions on why certain segments look riskier in external data than in internal MIS

All of this inside an institution that had itself said “the bureaus are broadly similar”.

 

The belief: “All four bureaus are more or less interchangeable”

Inside many banks and NBFCs, the working belief sounds like this when you strip away the politeness:

“CIBIL, Experian, Equifax and CRIF High Mark are basically doing the same thing.

As long as we’re on all four, pull a couple of them, and prices are reasonable, we’re fine.”

You can see why this belief survives:

·         RBI licenses Credit Information Companies under the same law

·         Every bureau talks about scores, reports, alerts, analytics

·         Internal decks show similar-looking scorecards and ranges

·         Multi-bureau discussions quickly become debates about cost per pull and hit-rate

On most MIS, the bureaus show up as:

·         A row in the vendor cost sheet

·         A slide on hit-rate across CICs

·         A column in the credit policy comparing score cut-offs

The underlying assumption is simple:

same law, same regulators, same market, similar products.

From that vantage point, it’s logical to treat the four bureaus as:

·         Largely interchangeable sources of credit information

·         With minor variations you can smooth over in policy and pricing

The trouble is that this is not how risk behaves in the real portfolios that sit behind those logos.

The differences are not theoretical. They show up in:

·         Which customers each bureau “knows” better

·         How data gets stitched when identifiers are messy

·         What gets reported, and how quickly

·         How your own product mix interacts with each bureau’s coverage history

Early on, these differences are invisible because dashboards compress everything into averages.

Later, they show up as:

·         Odd performance in specific vintages

·         Partners preferring one bureau over another for reasons they can’t fully articulate

·         Inspection questions that assume you understand your own multi-bureau footprint better than you actually do

 

What actually happens when you live with four bureaus

On paper, “India’s four credit bureaus” is a simple sentence.

In day-to-day life inside a lender, it’s messier.

Different histories, different strengths, different blind spots

If you sit with someone who has been building scorecards and strategy using bureau data for a decade, they rarely speak in averages.

In one large bank’s analytics team, an internal note on bureau usage didn’t look like a marketing comparison. It looked like this (paraphrased):

·         “For salaried urban retail, CIBIL hit-rate high, vintage deep; Experian and Equifax usable as second opinions.”

·         “For MFI and JLG segments, CRIF coverage noticeably better; others lag in certain geographies.”

·         “Certain commercial segments show odd behaviour: thin files in one CIC, fuller tradelines in another.”

None of this is about one bureau being “good” and another “bad”.

It is about where each one has historically accumulated strength:

·         CIBIL’s early start and long presence in formal banking

·         CRIF High Mark’s history with microfinance and small-ticket credit

·         Experian and Equifax building depth in specific lender cohorts and product mixes

If you treat the four as identical, these differences don’t disappear.

You just stop noticing them.

The same customer can look slightly different depending on who is watching

In one NBFC’s risk lab exercise, the team pulled bureau data on a sample of 50,000 existing customers across all four CICs and laid the views side by side.

They found:

·         A non-trivial percentage where hit / no-hit status differed across CICs

·         Cases where number of open tradelines differed

·         DPD histories that matched directionally, but not always in exact timing

When they overlaid this with performance:

·         Some combinations of “customer + bureau” were early-warning friendly

·         Others were surprisingly late in reflecting stress that internal systems already knew
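For teams that want to run a similar exercise, a minimal sketch in pandas might look like the block below. It assumes the per-CIC pull results have already been parsed into one flat table per bureau, keyed on an internal customer_id; the column names (hit, open_tradelines, max_dpd_12m) and the cic_a to cic_d labels are hypothetical placeholders, not any bureau’s actual schema.

```python
import pandas as pd

def compare_bureaus(pulls: dict) -> pd.DataFrame:
    """Stack per-CIC views of the same customers side by side and flag
    customers where the bureaus disagree on hit status or tradeline count.
    `pulls` maps a CIC label to a DataFrame with columns:
    customer_id, hit (bool), open_tradelines (int), max_dpd_12m (int)."""
    frames = []
    for cic, df in pulls.items():
        frames.append(
            df.set_index("customer_id")[["hit", "open_tradelines", "max_dpd_12m"]]
              .add_suffix(f"_{cic}")
        )
    merged = pd.concat(frames, axis=1, join="outer")

    hit_cols = [c for c in merged.columns if c.startswith("hit_")]
    tl_cols = [c for c in merged.columns if c.startswith("open_tradelines_")]

    # "Split" hit status: at least one CIC knows the customer and at least
    # one does not (a missing row from a bureau is treated as no-hit here).
    hits = merged[hit_cols].fillna(False).astype(bool)
    merged["hit_mismatch"] = hits.any(axis=1) & ~hits.all(axis=1)

    # Open-tradeline counts that differ across the CICs that do have a file.
    merged["tradeline_mismatch"] = merged[tl_cols].nunique(axis=1, dropna=True) > 1
    return merged

# e.g. result = compare_bureaus({"cic_a": df_a, "cic_b": df_b,
#                                "cic_c": df_c, "cic_d": df_d})
#      result[["hit_mismatch", "tradeline_mismatch"]].mean()  # share of sample
```

The output is not a verdict on any CIC; it is simply a list of customers whose picture changes depending on who you ask.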

Again, not because any bureau was doing something wrong.

Because:

·         Each CIC’s coverage, matching logic, and file update timing is different in practice

·         Lenders themselves report in slightly different ways to each CIC over time

When RBI or another lender pulls from one CIC and you internally lean on another, you are not always looking at the same photograph.

Your own reporting behaviour creates different realities in each bureau

Internally, reporting is often treated as a single process:

“We report to all four bureaus as per RBI guidelines.”

In practice, the wiring looks like:

·         Legacy products that started reporting to one CIC first, others later

·         Co-lending arrangements where a partner reports differently to different CICs

·         Corrections that get pushed quickly to one CIC and lag on another due to file schedules

In one institution, a reconciliation exercise across CICs for a single portfolio showed:

·         Over 95% of accounts aligned across all four

·         But the remaining few percent had non-trivial differences in closures, DPD buckets, and restructure flags

These are exactly the kind of edge cases inspection teams like to explore.

If your internal stance is “all four are the same”, you are often not prepared for a conversation where RBI has sampled behaviour from one bureau that you don’t watch as closely.

 

Why the “all similar” belief survives for so long

If these differences are real, why doesn’t the assumption break earlier?

Partly because of how information is presented to senior management.

Most senior forums see only aggregate multi-bureau metrics

In a typical Board or Risk Committee deck, the “bureau” slide will have:

·         Overall hit-rate by bureau

·         Score distribution across one or two primary CICs

·         Maybe a comparison of average scores across bureaus for a sample segment

This is what fits into one or two slides.

What doesn’t make it in:

·         Segment-wise coverage differences (by geography, product, ticket size)

·         How many accounts have materially different views across CICs

·         Which bureau is used for which critical decision (onboarding, pricing, line increase, EWS, collections)

So at the top table, the four logos look almost interchangeable.

The nuances live in analysts’ laptops and occasional internal workbooks.

Procurement and cost discussions flatten nuance even further

In commercial reviews, bureaus show up as line items:

·         Cost per pull

·         Minimum billing

·         Volume commitments

·         Potential discounts for higher usage

Those conversations naturally push towards simplification:

·         “Can we standardise on CIBIL + one more?”

·         “Do we really need to pull across all CICs for this segment?”

·         “Can we treat others as backup if CIBIL is down?”

The moment the discussion is framed primarily as a cost question, any nuance about portfolio behaviour across bureaus becomes a distraction.

No one is explicitly accountable for “multi-bureau strategy”

There is rarely a named owner whose job description says:

“Responsible for how our institution uses and understands all four credit bureaus.”

Pieces sit with:

·         Risk – for policy, cut-offs, strategy

·         Analytics – for scorecards, segment understanding

·         IT and Operations – for reporting and integration

·         Procurement – for commercials

Everyone has some part of the picture.

Nobody is accountable for the whole.

Under that structure, the simplest narrative survives:

“They are broadly similar. We are on all four. Let’s not overthink this.”

Until reality forces you to.

 

How more experienced teams actually think about the four bureaus

The institutions that seem more comfortable in inspection rooms and partner discussions don’t romanticise or demonise any bureau.

They just refuse to say “all four are similar” without qualification.

A few behaviours stand out.

They build a simple internal map of “who sees what better”

In one large lender, a slide kept recurring in internal risk discussions but never made it into external decks.

It wasn’t pretty. It had:

·         Rows for key segments: salaried urban retail, self-employed, MFI/JLG, small business, credit cards, BNPL-type products

·         Columns for the four CICs

·         For each cell: a short, blunt internal comment like:

o        “Deep coverage; behaves as expected.”

o        “Coverage improving; some gaps in certain states.”

o        “Thin; treat as secondary only.”
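Outside slide decks, that kind of map often survives as nothing more elaborate than a small, version-controlled lookup that analysts can query in their own work. The sketch below is purely illustrative; the segment keys, the cic_a to cic_d labels and the comments are placeholders rather than a view on any real bureau.

```python
# Illustrative internal "who sees what better" map: segments as rows,
# CICs as columns, blunt internal comments as cell values. The real
# comments come from hit-rates, back-testing and partner feedback,
# and get refreshed periodically.
BUREAU_COVERAGE_MAP = {
    "salaried_urban_retail": {
        "cic_a": "Deep coverage; behaves as expected.",
        "cic_b": "Usable as second opinion.",
        "cic_c": "Usable as second opinion.",
        "cic_d": "Coverage improving; some gaps in certain states.",
    },
    "mfi_jlg": {
        "cic_a": "Lags in certain geographies.",
        "cic_d": "Noticeably better coverage.",
    },
    "small_business": {
        "cic_a": "Thin files in places; treat as secondary only.",
        "cic_b": "Fuller tradelines for some cohorts.",
    },
}

def coverage_note(segment: str, cic: str) -> str:
    """Look up the internal view for a segment-bureau pair, defaulting to
    an explicit 'unknown' rather than silently assuming parity."""
    return BUREAU_COVERAGE_MAP.get(segment, {}).get(cic, "No internal view yet.")
```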

Nobody pretended this map was precise.

It was updated once a year based on:

·         Hit-rates

·         Internal back-testing

·         What partners and bureau teams themselves were seeing

The point wasn’t to rank bureaus.

It was to avoid the lazy sentence “they are all similar” when making decisions about:

·         Which bureau to anchor a scorecard on

·         Where to insist on two CICs instead of one

·         How to explain differences when RBI or a partner cites a different CIC in a discussion

They choose primary and secondary roles consciously, not by habit

Instead of letting history decide (“we’ve always used CIBIL first”), a few teams have made conscious choices like:

·         For certain mass segments: one CIC designated as primary for underwriting, another for periodic cross-check and portfolio analytics

·         For riskier or more opaque segments: mandatory dual-bureau pulls with clear rules on which signal dominates in case of conflict

·         For co-lending pools: a documented, shared view with the partner bank on which CIC is the reference for onboarding and monitoring
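Made explicit, those choices can be as plain as a small routing table that downstream systems and documents reference. The sketch below is one hypothetical shape for it; the segment names, CIC labels and conflict rules are assumptions for illustration, not a recommended allocation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class BureauPolicy:
    primary: str                         # CIC that anchors underwriting
    secondary: Optional[str] = None      # CIC for cross-checks / portfolio analytics
    dual_pull_mandatory: bool = False    # pull both CICs on every application
    conflict_rule: str = "primary_wins"  # which signal dominates on conflict

# Hypothetical segment-level routing. The substance is not the dict itself but
# that the choice is written down and owned, rather than inherited from
# "how the switch is wired".
BUREAU_POLICY = {
    "mass_salaried": BureauPolicy(primary="cic_a", secondary="cic_b"),
    "thin_file_self_employed": BureauPolicy(
        primary="cic_a", secondary="cic_d",
        dual_pull_mandatory=True, conflict_rule="worse_of_two",
    ),
    "co_lending_pool_x": BureauPolicy(
        primary="cic_d", conflict_rule="partner_reference_cic",
    ),
}

def policy_for(segment: str) -> BureauPolicy:
    """Fail loudly for segments nobody has made an explicit decision about."""
    if segment not in BUREAU_POLICY:
        raise KeyError(f"No documented bureau policy for segment: {segment}")
    return BUREAU_POLICY[segment]
```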

These are not grand strategies.

They are just explicit decisions that show up in:

·         Credit policy

·         Product notes

·         Co-lending agreements

So when questions arise, the answer is not “this is how the switch is wired”.

It is “this is what we decided and why”.

They use bureau differences as a diagnostic, not a nuisance

In one NBFC, the analytics team ran a simple monthly report that never went to the Board:

·         For a sample of accounts, compare key attributes across two CICs:

o        Number of open tradelines

o        Max DPD in last 12 months

o        Presence of recent write-off or settlement
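A stripped-down version of that monthly check might look like the sketch below, assuming two already-parsed extracts keyed on the same account_id; the attribute and key column names are illustrative, not either CIC’s actual file layout.

```python
import pandas as pd

ATTRS = ["open_tradelines", "max_dpd_12m", "recent_writeoff_or_settlement"]

def monthly_cic_diff(cic_x: pd.DataFrame, cic_y: pd.DataFrame) -> pd.DataFrame:
    """Join two CICs' views of the same accounts and flag rows where any of
    the tracked attributes diverge. Column names are placeholders."""
    merged = cic_x.set_index("account_id")[ATTRS].join(
        cic_y.set_index("account_id")[ATTRS],
        lsuffix="_x", rsuffix="_y", how="inner",
    )
    for col in ATTRS:
        merged[f"{col}_diff"] = merged[f"{col}_x"] != merged[f"{col}_y"]
    merged["any_diff"] = merged[[f"{c}_diff" for c in ATTRS]].any(axis=1)
    return merged

# The monthly pack then carries the mismatch rate and the worst product or
# partner combinations, e.g.:
# report = monthly_cic_diff(extract_x, extract_y)
# report["any_diff"].mean()
```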

They weren’t trying to prove one bureau wrong.

They were looking for:

·         Patterns where their own reporting might be inconsistent

·         Product/partner combinations where certain CICs were clearly behind

·         Early warning signs that some segments were becoming more active elsewhere than their internal views suggested

This took some effort.

But when RBI asked, “Have you observed any differences in behaviour across CICs?”, they had something better than a shrug.

 

A quieter way to think about “meeting” India’s four bureaus

It’s tempting to treat CIBIL, Experian, Equifax and CRIF High Mark as:

·         Four vendors under the same regulation

·         Selling similar products

·         With differences you can mostly ignore at the portfolio level

If you do that, “meeting India’s four credit bureaus” will remain a slide with four logos and a cost table.

If you accept that:

·         Each bureau has its own history, coverage shape and data quirks

·         Your own reporting behaviour makes each one see you differently

·         Partners, competitors and RBI may not be looking at the same CIC you lean on internally

then the question of “meeting” them becomes less about names and more about understanding.

At that point, the useful internal question is no longer:

“Are we members of all four credit bureaus in India, and are our costs under control?”

It becomes:

“If we sat down with each of the four bureaus and asked them to describe our portfolio back to us,

would we recognise ourselves in what they say —

and would we understand why their stories are not exactly the same?”