  • January 26, 2026
  • Arth Data Solutions

Which Score Do Lenders Actually Rely On?

The question never shows up at the start of a project.

It arrives late.

Usually when something has already gone wrong.

A retail NPA review.

A co-lending dispute.

An inspection sample that doesn’t behave the way the deck said it would.

Someone puts a case on the screen:

Customer took an unsecured loan 14 months ago, looked fine at the time, rolled quickly, is now written off.

The senior person in the room asks a simple question:

“Which score did we rely on when we booked this customer?”

The credit file comes up.

It shows:

• CIBIL 762
• Experian 735
• Internal application score: “A–”
• Behaviour score at time of line increase: “B+”
• A field called “Final Risk Grade: RG–3”

The LOS screen has one view.

The core system shows another.

The co-lending partner’s note, for the same customer, references only their preferred bureau.

After a few minutes of confusion, someone says the line that closes the discussion without really answering it:

“We are essentially a CIBIL-first shop. Internal score and other bureau scores are supporting views. Overall the profile was acceptable.”

Everyone nods.

The meeting moves back to flows and provisions.

The question “Which score do we rely on?” is parked as if it were philosophical, not practical.

It isn’t.

 

The belief: “We have a primary score; everything else is just support”

Inside many lenders, the working assumption sounds like this when you strip away the polished language:

“We are a CIBIL-750 shop.”

or

“We rely on our internal risk score; bureau is hygiene.”

You hear versions of it everywhere:

• In product approval memos:
  “Primary decision driver is internal application score; bureau to be used for hygiene checks and exclusions.”
• In credit policy decks:
  “Minimum CIBIL score 730 (or equivalent in other CICs).”
• In steering meetings:
  “We use one main score; we don’t want to confuse the field with too many numbers.”

The notion is comforting:

• There is one main score.
• It lives in policy.
• Everything else is “support”.

That belief feels tidy for three reasons:

1. It gives the impression of consistency: “We treat similar risk the same way.”
2. It makes cross-bureau conversations easier: “We have equivalent cut-offs.”
3. It keeps governance short: “Our model inventory is large, but our decision anchor is clear.”

The problem is that when you follow actual journeys through systems and screens, decisions are rarely that clean.

Most institutions are not “CIBIL-first” or “internal-score-first”.

They’re “whatever score the wiring happened to favour in that journey” — plus manual judgement layered on top.

The gap between those two realities is what comes back in reviews, partner calls and inspection rooms.

 

What actually happens to scores between journey and decision

If you sit with the people who run originations and risk systems, and walk one product end-to-end, the clean “one primary score” story starts to fray quite quickly.

A typical unsecured retail journey today touches at least four different scoring points:

• Pre-screen: often a bureau score, sometimes from a single CIC.
• Application score: internal model, blending bureau and input data.
• Policy blocks: rule engine reading bureau attributes directly.
• Final decision grade: sometimes a composite field built inside LOS or CBS.

Layer on co-lending or partnerships, and you add:

• A partner’s preferred bureau score.
• A partner’s internal score (if they share it).

On paper, the institution can still say:

“Primary score: internal application score. Bureau score: hygiene.”

In practice, different parts of the stack are relying on different numbers at different times.
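
To make that concrete, here is a minimal sketch in Python of how a single application can pass through all of those scoring points, with a different score effectively “in charge” at each step. The field names, cut-offs and grade mapping are illustrative assumptions, not any real LOS or CBS interface.

```python
# Minimal sketch of a multi-score journey. Field names, cut-offs and the
# grade mapping are illustrative assumptions, not a real LOS/CBS schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ScoreTrail:
    pre_screen_bureau: Optional[int] = None   # bureau score used at pre-eligibility
    application_score: Optional[int] = None   # internal model driving sanction and pricing
    policy_block_hits: list = field(default_factory=list)  # rules reading bureau data directly
    final_grade: Optional[str] = None          # often the only value the core system retains
    partner_bureau: Optional[int] = None       # partner's preferred bureau, if co-lent

def run_journey(cibil: int, internal_score: int, bureau_attributes: dict,
                partner_bureau: Optional[int] = None) -> ScoreTrail:
    trail = ScoreTrail(partner_bureau=partner_bureau)

    # Scoring point 1: pre-screen gate. The first "yes" comes from a bureau cut-off.
    trail.pre_screen_bureau = cibil
    if cibil < 730:
        trail.policy_block_hits.append("PRE_SCREEN_BUREAU_CUTOFF")
        return trail

    # Scoring point 2: the internal application score drives sanction and pricing.
    trail.application_score = internal_score

    # Scoring point 3: policy blocks read bureau attributes directly, bypassing both scores.
    if bureau_attributes.get("dpd_30_plus_last_12m", 0) > 0:
        trail.policy_block_hits.append("RECENT_DELINQUENCY_BLOCK")

    # Scoring point 4: only a coarse composite grade survives into the core system.
    trail.final_grade = "RG-2" if internal_score >= 680 else "RG-3"
    return trail

# Three different "scores relied on", depending on which step you ask about.
print(run_journey(cibil=741, internal_score=655, bureau_attributes={}))
```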

A few concrete patterns show up repeatedly.

1. The score that triggers eligibility is not always the score that gets remembered

In one bank’s digital flow:

• The front-end journey used a simple bureau cut-off for pre-eligibility: “CIBIL ≥ 730”.
• The LOS calculated an internal application score, which actually drove sanction and pricing.
• The core banking system stored a final grade (“RG–2”, “RG–3”) derived from the application score, but did not retain the underlying numeric value.

In internal reviews and dashboards:

• Eligibility discussions referenced CIBIL cut-offs.
• Risk performance decks referenced internal score bands.
• Account-level views in the core showed only risk grades, with no trace of which bureau had opened the door.

So when an NPA case was examined a year later, the sanction note showed:

• “CIBIL: 741”
• “Final Grade: RG–3”

The internal application score that actually drove the decision wasn’t visible anywhere outside the LOS logs.

On slides, the institution was “internal-score-first”.

In practice, the first yes often came from a bureau gate that nobody tracked beyond “pass/fail”.

2. Channel-specific journeys quietly prioritise different scores

In another NBFC, two channels existed for the same product:

• Branch channel:
  – Credit officer saw bureau report, internal score, and a summary recommendation.
  – Policy said: “Internal score is primary; bureau for hygiene and overrides.”
• Aggregator / fintech channel:
  – Partner’s platform screened on their preferred bureau score.
  – The NBFC’s system ingested an already filtered pool, then ran its own internal score “for compliance”.

In branch-originated cases, decision notes reflected internal score bands.

In partner-originated cases, many sanction notes simply said:

“Partner-screened; internal checks OK; bureau acceptable.”

When losses rose in one region, the initial instinct was to blame the internal score.

Later, a deeper look showed:

• Partner flows were effectively CIC-first, with internal score functioning as a soft gate.
• Branch flows were closer to internal-first, with bureaus used to block known risk types.

On paper, the product had one primary score.

In reality, it depended on where the customer came from.

3. Co-lending and buyouts create two “truths” for the same customer

Co-lending makes the “which score?” question even more tangled.

In one case:

• The NBFC used CIBIL as primary for its own origination.
• The partner bank’s policy referenced Experian as the main bureau for that segment.
• The co-lending agreement was silent on which score would be the reference.

For shared customers:

• The NBFC’s system stored internal score + CIBIL.
• The bank’s system stored internal score + Experian.

During a later performance review on that pool:

• The bank pointed to Experian-based distributions and raised concerns.
• The NBFC defended using CIBIL-based histories and cut-offs.

Both parties had evidence.

Neither had agreed upfront which mirror they would treat as the main one.

In the background, RBI’s questions on co-lending began to include lines like:

“Please indicate which credit information source and score was relied upon at origination and for subsequent monitoring.”

That wasn’t a technical query.

It was a question about ownership of the decision.

 

Why the “primary score” myth survives inside institutions

If reality is this messy, why do so many organisations still talk confidently about “our main score”?

Part of the answer lies in how information is presented to senior forums.

Credit policy and Board decks can’t show the wiring

A typical credit policy summary slide has room for:

• “Primary score: Internal risk score vX.X”
• “Minimum bureau score: CIBIL 730 (or equivalent)”
• “Override thresholds and authority matrix.”

There is no room to describe:

• Different treatment by channel.
• Situations where partner scores influence decisions.
• How policy blocks bypass scores altogether for some cases.

A typical Board Risk Committee pack will show:

• GNPA / flow rates by internal score bands.
• Sometimes, loss rates by bureau band.

There is rarely a slide that bluntly says:

“Here is where we relied mostly on bureau X.

Here is where internal score truly drove decisions.

Here is where partner scores had more influence than we admit.”

So at the top table, it is easy to keep repeating:

“We are a [primary score] institution.”

Model governance focuses on individual models, not combinations

Model governance committees do important work:

• Approving scorecards.
• Reviewing validation statistics.
• Ensuring documentation and monitoring.

But they usually review models one at a time:

• Application score v3.1 – approved.
• Behaviour score v2.0 – approved.
• New bureau-based PD model – approved.

What they rarely see clearly is:

• How these models interact in a real decision flow.
• Which one actually dominates when scores disagree.
• Where a supposedly “supporting” score ends up driving a hard policy block.

So every model is “governed”.

The combination is often not.

Systems and screens hide which score was in charge

Front-line tools matter a lot.

If a credit officer’s screen shows:

• Bureau score in large text.
• Internal score as a small label.
• Final recommendation colour-coded based on one of them.

You can predict which number will stick in their mind.

If the sanction note template has a single field called “Score” and officers can choose which one to type in, you’ve already lost the traceability battle.

Later, when you ask “Which score did we rely on?”, you’ll be reading informal habits, not system logs.

 

How more experienced teams answer “Which score do we rely on?”

The institutions that quietly handle this better don’t have fewer scores.

They just accept that “Which score?” is not a philosophical question; it’s a design choice.

Three practical things stand out.

1. They decide, per decision, which score is actually in charge

For a given segment or product, they write down — in plain language — answers to questions like:

• For onboarding, if bureau and internal score disagree, who wins?
• For line increase, does behaviour score override original application score?
• For pricing, is the main input bureau, internal, or a composite grade?
• For co-lending, whose bureau and score is the reference: ours, the partner’s, or both?

This doesn’t become a slogan.

It becomes a small internal note or an annex in policy:

• “Primary decision driver: Internal AppScore v3.1.
  Bureau scores used for exclusions and fraud rules; no approvals allowed above internal cut-off solely on bureau strength.”
• “For co-lending pool X: reference bureau – Experian.
  Both parties agree to treat Experian score as primary for origination and monitoring.”

When someone asks “Which score did we rely on?” for a particular decision type, there is at least a documented starting answer, not a memory game.
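
One hypothetical way to hold that annex is as data that systems and reviewers read alike, so the answer does not drift between documents. A sketch only, with invented product names, score labels and rules:

```python
# Hypothetical "which score is in charge" annex, captured as data so that
# decision systems and reviewers read the same answer. All names are invented.
PRIMARY_SCORE_POLICY = {
    ("personal_loan", "onboarding"): {
        "primary": "internal_app_score_v3_1",
        "supporting": ["cibil", "experian"],
        "rule": "No approval above internal cut-off on bureau strength alone.",
    },
    ("personal_loan", "line_increase"): {
        "primary": "behaviour_score_v2_0",
        "supporting": ["internal_app_score_v3_1", "cibil"],
        "rule": "Behaviour score must be present and in range, even if bureau is clean.",
    },
    ("co_lending_pool_x", "origination"): {
        "primary": "experian",
        "supporting": ["internal_app_score_v3_1"],
        "rule": "Both parties treat Experian as the reference for origination and monitoring.",
    },
}

def primary_score_for(product: str, decision_type: str) -> str:
    """Return the documented primary score for a decision type, or fail loudly."""
    entry = PRIMARY_SCORE_POLICY.get((product, decision_type))
    if entry is None:
        raise KeyError(f"No documented primary score for {product}/{decision_type}")
    return entry["primary"]

print(primary_score_for("personal_loan", "onboarding"))  # internal_app_score_v3_1
```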

2. They ensure the system logs what the slide claims

One lender made a blunt change after a painful review.

For each approved application, the LOS now stores a “decision snapshot”:

• CIC name(s) and score(s) at decision time.
• Internal application score.
• Behaviour score (if applicable, for repeat customers).
• The name of the score that triggered the final accept/reject.
• Any override reason.

This snapshot is:

• Written once, at the point of decision.
• Not recalculated later.
• Accessible in reviews and, if needed, in regulatory queries.
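
A minimal sketch of what such a snapshot could look like, assuming a simple append-only log; the field names and sample values are illustrative, not any vendor's LOS schema:

```python
# Illustrative decision snapshot: written once at decision time, never recalculated.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)  # frozen: the record cannot be edited after creation
class DecisionSnapshot:
    application_id: str
    decided_at: str                  # timestamp captured at the point of decision
    bureau_scores: dict              # e.g. {"CIBIL": 738, "Experian": 712}
    internal_app_score: int
    behaviour_score: Optional[int]   # only populated for repeat customers
    decision: str                    # "APPROVE" or "DECLINE"
    decision_driver: str             # the score that triggered the final outcome
    override_reason: Optional[str]

def write_snapshot(snapshot: DecisionSnapshot, log_path: str) -> None:
    # Append-only: later reviews read what was known then, not a re-run of the models.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(snapshot)) + "\n")

write_snapshot(
    DecisionSnapshot(
        application_id="APP-104233",
        decided_at=datetime.now(timezone.utc).isoformat(),
        bureau_scores={"CIBIL": 738, "Experian": 712},
        internal_app_score=642,
        behaviour_score=None,
        decision="APPROVE",
        decision_driver="internal_app_score",
        override_reason=None,
    ),
    "decision_snapshots.jsonl",
)
```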

It isn’t perfect.

But when a case is pulled in an NPA review or an inspection:

• The institution doesn’t have to reconstruct logic from multiple logs.
• They can say, factually:
  “Decision was taken with InternalScore=642, CIBIL=738, Experian=712.
  System’s decision driver was InternalScore.
  Bureau scores were above minimum thresholds; no bureau-based override was used.”

It doesn’t make the decision right.

It makes it explainable.

3. They treat “score confusion” as a risk signal, not an embarrassing detail

One NBFC started running a simple internal check on a sample of declined and approved applications every quarter (a code sketch of the check follows the list):

• For declined cases:
  – Did any other available score (another bureau, internal score) indicate “accept” territory?
  – If yes, was there a deliberate reason to rely on the score that said “decline”?
• For approved cases:
  – Did any other available score indicate “clear risk”?
  – If yes, why did the final decision still land as “approve”?
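
The check itself does not need heavy tooling. A small sketch, with assumed cut-offs and field names rather than actual policy values:

```python
# Sketch of a quarterly cross-score disagreement check. Cut-offs and field
# names are assumptions for illustration, not actual policy values.
ACCEPT_CUTOFFS = {"cibil": 730, "experian": 725, "internal_score": 620}

def disagreement_flags(case: dict) -> list:
    """List the available scores that point the other way from the final decision."""
    flags = []
    for name, cutoff in ACCEPT_CUTOFFS.items():
        score = case.get(name)
        if score is None:
            continue
        in_accept_territory = score >= cutoff
        if case["decision"] == "DECLINE" and in_accept_territory:
            flags.append(f"{name} was in accept territory ({score} >= {cutoff})")
        elif case["decision"] == "APPROVE" and not in_accept_territory:
            flags.append(f"{name} signalled clear risk ({score} < {cutoff})")
    return flags

quarterly_sample = [
    {"app_id": "A1", "decision": "DECLINE", "cibil": 751, "internal_score": 590},
    {"app_id": "A2", "decision": "APPROVE", "cibil": 742, "experian": 648, "internal_score": 655},
]
for case in quarterly_sample:
    for flag in disagreement_flags(case):
        print(case["app_id"], "->", flag)
```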

They weren’t trying to second-guess every decision.

They were watching for patterns like:

• A channel that effectively uses only bureau, ignoring internal scores.
• A partner-originated flow where internal scores rarely influence outcomes.
• A segment where reliance on one CIC leads to systematically worse vintages.

When such patterns appeared, the outcome wasn’t a blame session.

It was often a small, clear change:

• “From next quarter, this partner journey will also capture and log InternalScore, and any approval where InternalScore < X will require explicit justification.”
• “For this product, behaviour score must be present and in range before line increase, even if bureau is clean.”

Slow, unglamorous corrections.

But over time, the answer to “Which score do we rely on here?” became less fuzzy.

 

A quieter way to hold the “which score” question

It is tempting to keep repeating:

“We are a CIBIL-750 shop.”

“We rely on our internal score; bureaus are supporting signals.”

Those lines make sense in slides and press quotes.

But when:

• Portfolios misbehave,
• Partners question your judgement,
• Regulators ask how conscious your multi-bureau usage really is,

the question “Which score do you rely on?” stops being rhetorical.

Most institutions have more scores than they admit and less clarity than they think.

The point is not to pick a single winner and pretend everything runs through it.

The point is to know, in unglamorous detail:

• For each important decision type,
• In each important segment,
• Through each important channel,

which score was actually in charge —

and whether you are comfortable with that, once you see it in the cold light of hindsight.