CASE 004  /  BANKING · RETAIL CREDIT

AI scoring for SME loans + a support chatbot grounded in policy docs.

A tier-2 European bank wanted to cut SME loan decision time without compromising audit-readiness, and reduce support load on a 60-agent retail team. We delivered two systems on the same data foundation: an interpretable credit-scoring model and a RAG-powered support chatbot grounded in the bank's own policy library.

−60% DECISION TIME · −35% SUPPORT TICKETS · 100% AUDIT TRAIL · 6-WEEK PILOT
[Image: banking interior with glass facade and atrium]
TIER-2 BANK · FEB 2026 · COMPLIANCE & PII REDACTED
THE PROBLEM

An SME loan took 9 days to decide. The support queue grew 12% per quarter.

Two adjacent pains. Underwriters were spending 60–70% of their time on data-gathering and rule-checking before they even got to judgement work. SME loan decisions averaged 9 business days; the bank was losing applications to faster fintech competitors.

Meanwhile the retail support team was answering the same 80 questions on repeat — fees, eligibility, documents needed for a refi. Ticket volume was outpacing headcount.

THE INVESTIGATION

Two problems, one data foundation: the bank's own policies and historical decisions.

We mapped both workflows in week one. The credit-scoring side needed an interpretable model — black-box wasn't an option for the regulator — and a clean training set from 8 years of decisioned applications. The chatbot side needed grounding in 340+ policy and product PDFs that were scattered across SharePoint folders.

Both shared the same need: a clean, versioned representation of the bank's policy and decisioning rules, with citations on every output. We built that foundation first.
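A minimal sketch of what such a foundation can look like: each policy rule stored as a versioned, citable chunk that both the scoring model and the chatbot can reference. The class and field names here are illustrative assumptions, not the bank's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyChunk:
    """One versioned, citable unit of the bank's policy library (hypothetical schema)."""
    doc_id: str    # source document, e.g. a policy PDF
    section: str   # clause reference used in citations
    version: str   # policy version the chunk was extracted from
    text: str      # the rule text itself

    def citation(self) -> str:
        # Every output in both systems points back to a string like this.
        return f"{self.doc_id} §{self.section} (v{self.version})"

chunk = PolicyChunk(
    doc_id="sme-lending-policy.pdf",
    section="4.2.1",
    version="2025-09",
    text="Refinancing requires two years of filed accounts.",
)
print(chunk.citation())  # → sme-lending-policy.pdf §4.2.1 (v2025-09)
```

Freezing the dataclass keeps chunks immutable, so a citation always refers to the exact text that was retrieved, even after the policy library is re-versioned.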

THE BUILD

Interpretable scoring + a chatbot that cites the policy line it's reading from.

The credit model was deliberately conservative: gradient-boosted trees with SHAP value explanations on every prediction. Underwriters saw the model's score plus the top-5 features driving it, ranked by impact, with the policy clause each feature mapped to. Override was always one click away. Every decision wrote a structured audit log.
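The underwriter-facing logic above can be sketched in a few lines. This assumes per-prediction explanation values (e.g. SHAP) have already been computed by the model; the feature names, clause mapping, and log format are illustrative stand-ins, not the production system.

```python
import json
import datetime

# Hypothetical mapping from model features to the policy clauses they implement.
POLICY_MAP = {
    "debt_service_ratio": "SME-CR §3.1",
    "late_payments_12m": "SME-CR §3.3",
    "years_trading": "SME-CR §2.4",
    "sector_risk": "SME-CR §5.2",
    "collateral_ltv": "SME-CR §4.1",
    "owner_equity": "SME-CR §2.7",
}

def explain(score, shap_values, top_n=5):
    """Rank features by absolute impact; attach the policy clause each maps to."""
    ranked = sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return {
        "score": score,
        "top_features": [
            {"feature": f, "impact": v, "clause": POLICY_MAP.get(f, "unmapped")}
            for f, v in ranked[:top_n]
        ],
    }

def audit_record(application_id, explanation, decision, overridden=False):
    """Structured audit entry written for every decision, override included."""
    return json.dumps({
        "application_id": application_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "decision": decision,
        "overridden": overridden,
        **explanation,
    })

shap_vals = {"debt_service_ratio": -0.31, "late_payments_12m": -0.22,
             "years_trading": 0.12, "sector_risk": -0.08,
             "collateral_ltv": 0.05, "owner_equity": 0.02}
record = audit_record("APP-2026-0147", explain(0.64, shap_vals), "approve")
```

Because the record is self-contained JSON, "show me a wrong decision and explain why" reduces to pulling one log line: the score, the top-5 drivers, and the clause behind each one.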

The chatbot ran a small RAG pipeline: query → retrieve top-5 policy passages → answer with citations. If the retrieval confidence was low, it routed to a human agent instead of guessing. Customers got either a sourced answer or a fast handoff — never a hallucinated rule.
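The routing logic is simple enough to sketch end to end. The retriever, the generator, and the 0.45 confidence threshold below are illustrative assumptions; the point is the shape of the decision: cite or hand off, never guess.

```python
CONFIDENCE_THRESHOLD = 0.45  # assumed value; in practice tuned during the pilot

def answer_or_handoff(query, retriever, generate):
    """Answer with citations when retrieval is confident, else route to a human."""
    passages = retriever(query, k=5)  # top-5 policy passages with scores
    if not passages or passages[0]["score"] < CONFIDENCE_THRESHOLD:
        return {"route": "human", "reason": "low retrieval confidence"}
    return {
        "route": "bot",
        "answer": generate(query, passages),  # grounded in retrieved text only
        "citations": [p["citation"] for p in passages],
    }

# Toy usage with stub components standing in for the real retriever and LLM:
def stub_retriever(query, k):
    return [{"score": 0.9, "citation": "fees-policy §2.1",
             "text": "The refinancing fee is 0.5% of the outstanding balance."}]

def stub_generate(query, passages):
    return f"{passages[0]['text']} (see {passages[0]['citation']})"

result = answer_or_handoff("What is the refi fee?", stub_retriever, stub_generate)
```

The key design choice is that the threshold check happens before generation: a low-confidence query never reaches the language model, so there is no answer to hallucinate.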

"The first thing the regulator asked was 'show me a wrong decision and explain why.' We could, in 30 seconds. That's the only reason this got greenlit."
THE PILOT

6 weeks across two branches, 480 SME applications, 12,000 chatbot conversations.

The credit-scoring pilot ran on 480 SME applications across two branches. Decision time dropped from 9 days to 3.6 days; underwriter override rate stabilized at 8% (well within tolerance). Approval/decline accuracy held flat against the historical baseline — the model didn't get bolder, just faster.

The chatbot deflected 35% of support tickets in pilot, with a 4.6/5 user satisfaction score and zero policy-violation flags from the compliance team. Crucially, deflection meant agents handled fewer simple questions and could spend more time on the complex ones.

THE OUTCOME

Numbers that survived the pilot.

  • SME loan decision time: 9 days → 3.6 days (−60%)
  • Support tickets deflected: 35%
  • Underwriter override rate: 8%
  • Audit-ready decisions: 100%
  • Chatbot satisfaction (1–5): 4.6
YOUR REGULATED WORKFLOW

Need AI that the regulator and your auditors will both accept?

20-min audit. We discuss your specific compliance constraints and tell you whether interpretable AI fits before you invest a euro.

Take 2-min assessment