
How to Build an AI Center of Excellence in BFSI: Executive Playbook for 2026

March 24, 2026
Ali Hafizji
Founder and CEO

Most AI initiatives in BFSI stall before they reach production. A 2024 GAO report found that while federal AI use cases doubled year-over-year, the vast majority remain stuck in pilot phase. Banks and insurers face the same pattern: scattered experiments, no shared infrastructure, and models that never make it past proof-of-concept.

The difference between organizations that scale AI and those that do not usually comes down to structure. This playbook covers the five steps to build an AI CoE in BFSI, the operating models that work, governance requirements specific to Indian regulators, and the use cases where CoEs typically deliver measurable ROI.

What is an AI Center of Excellence

An AI Center of Excellence (AI CoE) is a centralized team that owns AI strategy, talent, infrastructure, and governance for an entire organization. In BFSI, building one typically involves securing executive sponsorship, assembling cross-functional talent, establishing data governance, and selecting high-impact use cases with clear ROI.

The CoE exists to solve a coordination problem. Without it, AI projects scatter across departments. Three teams build similar fraud models, none reach production, and no shared infrastructure exists to deploy any of them.

  • Strategy alignment: The CoE connects AI projects to business goals like fraud reduction or faster underwriting
  • Talent consolidation: Data scientists, ML engineers, and domain experts work under one structure
  • Governance hub: The team sets standards for model development, ethics, and compliance
  • Knowledge sharing: Centralization prevents duplicated experiments and wasted effort

Why BFSI needs a dedicated AI CoE

Banks and insurers face pressures that retail or manufacturing companies do not. Regulatory scrutiny is constant, data sensitivity is extreme, and fintechs are moving faster.

  • Regulatory scrutiny: RBI, IRDAI, and the DPDP Act require explainability and auditability for models that touch customer decisions
  • Data sensitivity: Customer financial data demands strict access controls, and model governance cannot be an afterthought
  • Legacy infrastructure: Most banks and insurers run fragmented core systems that complicate AI integration
  • Competitive urgency: McKinsey's analysis of 600+ AI initiatives finds that neobanks and digital insurers ship AI features faster because they started with modern stacks

I have seen this pattern across multiple engagements. BFSI organizations that treat AI as disconnected pilots rarely scale beyond proof-of-concept — over 80% of AI projects fail according to RAND Corporation, double the rate of non-AI IT projects. The ones that build a dedicated CoE early move faster and encounter fewer compliance surprises.

AI CoE vs Data CoE vs Cloud CoE

These three centers of excellence serve distinct purposes, though they often get conflated. Clarifying boundaries upfront prevents turf wars and duplicated work.

CoE Type  | Primary Focus                                                | Typical Ownership
----------|--------------------------------------------------------------|---------------------------------------------
AI CoE    | Model development, MLOps, AI ethics, use case prioritization | Chief Data Officer or Chief AI Officer
Data CoE  | Data quality, cataloging, pipelines, data democratization    | Chief Data Officer or VP of Data Engineering
Cloud CoE | Cloud architecture, migration, cost optimization, security   | CIO or VP of Infrastructure

The AI CoE depends on the Data CoE for clean, accessible data. It depends on the Cloud CoE for scalable infrastructure. Collapsing all three into one team usually creates bottlenecks rather than efficiency.

Five steps to build an AI CoE in BFSI

1. Secure executive sponsorship and board mandate

AI CoE success depends on C-suite air cover. A board mandate signals organizational priority and unlocks budget.

BFSI boards increasingly expect AI literacy and risk oversight. Framing the CoE as a risk management function often resonates more than framing it as an innovation lab. Without executive sponsorship, the CoE becomes a side project, and side projects in large organizations tend to die within three months.

2. Define roles and build your core team

Team composition matters more than headcount. You can start lean, but you cannot start without the right mix of skills.

  • AI/ML engineers: Build and deploy models
  • Data scientists: Develop algorithms and run experiments
  • Domain experts: Translate banking or insurance context into model requirements
  • MLOps engineers: Manage model lifecycle, monitoring, and retraining
  • Ethics and compliance specialists: Ensure regulatory alignment from day one

Domain experts are often undervalued. A fraud model built without input from someone who understands transaction patterns in your specific business will likely miss edge cases that matter.

3. Establish infrastructure and MLOps foundations

The technical backbone includes cloud or hybrid environments, feature stores, model registries, and CI/CD pipelines for ML. BFSI organizations often face data residency requirements that push them toward hybrid or on-premise infrastructure.

Many organizations get stuck here. Legacy systems, technical debt, and fragmented data pipelines slow down infrastructure setup. If modernization efforts are blocked, addressing those constraints first often makes more sense than forcing AI infrastructure on top of unstable foundations.
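
To make the "model registry" idea concrete, here is a minimal in-memory sketch in Python. This is purely illustrative: the class and field names are invented for this example, and a real CoE would use a purpose-built registry (MLflow, SageMaker Model Registry, or similar) with persistence, access controls, and audit logging.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    """One entry in a toy model registry (illustrative only)."""
    name: str
    version: str
    metrics: dict               # e.g. {"auc": 0.91}
    approved: bool = False      # set by governance review, not by the builder
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ModelRegistry:
    """In-memory registry; real deployments need persistence and auth."""
    def __init__(self):
        self._models = {}

    def register(self, record: ModelRecord):
        self._models[(record.name, record.version)] = record

    def latest_approved(self, name: str):
        # Only governance-approved versions are eligible for deployment
        candidates = [r for (n, _), r in self._models.items()
                      if n == name and r.approved]
        return max(candidates, key=lambda r: r.version, default=None)

reg = ModelRegistry()
reg.register(ModelRecord("fraud-scorer", "1.0", {"auc": 0.88}, approved=True))
reg.register(ModelRecord("fraud-scorer", "1.1", {"auc": 0.91}, approved=False))
print(reg.latest_approved("fraud-scorer").version)  # → 1.0
```

Note the design point: version 1.1 has better metrics but is not yet approved, so deployment still resolves to 1.0. That separation between "best model" and "deployable model" is exactly what a registry enforces.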

4. Design governance and model risk frameworks

Governance is not optional in regulated industries. BFSI organizations already have model risk management (MRM) practices for credit models, and AI models fit into the same framework.

Key elements include model validation, bias testing, explainability requirements, and audit trails. The goal is making compliance a built-in part of the model lifecycle rather than a gate at the end.
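
As a simple illustration of what "bias testing" can mean in practice, the sketch below computes a demographic parity gap: the difference in approval rates between groups. This is one of several fairness metrics, the data is invented, and a real validation suite would test multiple metrics across many segments.

```python
def demographic_parity_gap(decisions, groups):
    """Absolute gap in approval rate between groups (lower is fairer).

    decisions: list of 0/1 outcomes (1 = approved)
    groups: parallel list of group labels for each decision
    """
    counts = {}
    for d, g in zip(decisions, groups):
        approved, total = counts.get(g, (0, 0))
        counts[g] = (approved + d, total + 1)
    approval = {g: a / t for g, (a, t) in counts.items()}
    return max(approval.values()) - min(approval.values())

# Invented example: group A approved 3/4, group B approved 1/4
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # → 0.5
```

A gap of 0.5 here would trigger review; what threshold is acceptable is a policy decision the governance framework must set explicitly.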

5. Select pilot use cases and demonstrate business value

Start with use cases that have clear ROI and manageable risk. Internal operations or back-office processes are often safer starting points than customer-facing AI.

Measuring outcomes is critical. The first pilot builds credibility for scaling. If you cannot demonstrate business value from the pilot, securing budget for the next phase becomes difficult.

Operating models for an AI center of excellence

The structure you choose affects speed, consistency, and scalability. There is no universally correct answer.

Centralized model

All AI talent and projects sit in one team. This works well for early-stage CoEs or smaller organizations. The risk is that the central team becomes a bottleneck as demand grows.

Federated model

AI talent is embedded in business units with loose coordination. This works well for large, diversified BFSI groups where each business unit has distinct requirements. The risk is inconsistent standards and duplicated effort.

Hub and spoke model

A central hub sets standards, governance, and shared infrastructure. Spokes execute within business units. This model balances speed with control and works well for most mid-to-large BFSI organizations.

AI governance and compliance in BFSI

Regulatory requirements in Indian BFSI are specific and evolving. Understanding them upfront prevents costly rework.

RBI guidelines on AI and ML in lending

RBI expects algorithmic transparency, fair lending practices, and thorough model documentation for credit decisioning. Models that cannot explain their decisions face regulatory pushback.

IRDAI sandbox and insurance AI regulations

IRDAI's sandbox approach allows insurtech pilots with regulatory oversight. AI in underwriting and claims faces expectations around fairness and transparency.

Data protection under the DPDP Act

The Digital Personal Data Protection Act introduces consent requirements, data minimization principles, and cross-border transfer restrictions. These directly affect how you collect and use data for model training.

Model risk management integration

AI models fit into existing MRM frameworks, including validation cycles, challenger models, and ongoing monitoring. Treating AI models as a separate category from traditional models creates compliance gaps.

High-impact AI use cases in banking and insurance

Concrete use cases help justify the CoE investment. These are the areas where AI CoEs typically deliver measurable value.

Credit underwriting and risk scoring

AI models improve credit decisioning speed and accuracy while reducing manual review. The ROI is often clear and measurable.

Fraud detection and prevention

Real-time anomaly detection across transactions reduces fraud losses and false positives. This is one of the most mature AI applications in BFSI — Deloitte projects $80B–$160B in savings from AI-powered insurance fraud detection alone by 2032.
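
The simplest form of anomaly detection is a statistical outlier check. The sketch below flags transactions whose amount deviates sharply from the historical mean; production fraud systems use far richer features and models (isolation forests, graph analysis, sequence models), and the transaction data here is invented.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of amounts more than `threshold` sigma from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > threshold]

# Invented transaction history with one obvious outlier at index 7
history = [120, 95, 110, 105, 130, 98, 115, 5000]
print(flag_anomalies(history, threshold=2.0))  # → [7]
```

Even this crude check illustrates the core trade-off: a lower threshold catches more fraud but raises false positives, which is why mature systems tune thresholds against measured fraud losses and review-queue capacity.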

Claims processing automation

AI accelerates claims triage, document extraction, and settlement in insurance. The efficiency gains are significant for high-volume claims operations.

Customer service and conversational AI

Chatbots and voice assistants handle routine queries, reducing call center load. Setting realistic expectations about what conversational AI can handle today is important.

Anti-money laundering and compliance

AI improves suspicious activity detection and reduces compliance team workload. Given the regulatory stakes, this use case often gets executive attention.

How to measure AI CoE success

Most successful CoEs track metrics across four categories.

Business impact KPIs

Revenue influenced, cost savings, and customer experience improvements attributable to AI initiatives. These are the metrics that matter to the board.

Operational efficiency metrics

Time to deploy models, model reuse rate, and infrastructure utilization. These indicate whether the CoE is operating efficiently.

Model performance indicators

Accuracy, precision, recall, drift detection, and explainability scores. These are the technical metrics that indicate model health.
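
As a rough illustration of the first few indicators, the sketch below computes precision and recall from binary labels and includes a deliberately crude drift signal (a mean shift in model scores measured in standard deviations). All data is invented; production monitoring would use established drift statistics such as PSI or KS tests.

```python
from statistics import mean, stdev

def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def mean_shift_drift(reference, current, threshold=3.0):
    """Crude drift flag: has the score mean shifted > threshold sigma?"""
    return abs(mean(current) - mean(reference)) / stdev(reference) > threshold

# Invented labels: 1 = fraud
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(precision_recall(y_true, y_pred))  # → (0.75, 0.75)

reference = [0.20, 0.30, 0.25, 0.35, 0.30]  # training-time scores
current   = [0.50, 0.60, 0.55, 0.65, 0.60]  # production scores
print(mean_shift_drift(reference, current))  # → True
```

The point of tracking both: precision and recall tell you whether the model is right, while drift tells you whether the world it was trained on still exists.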

Adoption and scale metrics

Number of models in production, business units served, and employee AI literacy. These indicate whether the CoE is scaling beyond initial pilots.

Common pitfalls when building an AI CoE

Most failures follow predictable patterns.

  • Starting with technology, not business problems: AI CoEs fail when they chase tools instead of outcomes
  • Underinvesting in data quality: Models are only as good as the data they train on
  • Ignoring change management: Business units resist AI if they are not engaged early
  • Treating governance as an afterthought: Retrofitting compliance is expensive and risky
  • Scaling too fast before proving value: Premature scaling drains budget and credibility

The most common pattern I have seen is organizations that launch pilot projects without executive sponsorship. Those pilots rarely survive budget cycles.

Where to go from here

The most important next step is defining executive sponsorship. Without it, everything else stalls.

Start by auditing current AI initiatives. Identify one high-impact use case with clear ROI. Then secure the board mandate before building the CoE structure.

Organizations often find that legacy modernization or technical debt blocks their ability to stand up AI infrastructure before the CoE ever gets started. If that describes your situation, addressing those constraints first typically makes more sense than forcing AI on top of unstable foundations. Building an AI CoE on unstable foundations is the most common reason it fails.

The Control model at Wednesday is built to unblock exactly those engineering constraints, stabilizing delivery so your AI program has the foundation it actually needs to scale.

See how Control works

FAQs

How long does it typically take to build an AI CoE in a mid-sized bank or insurer?

Most BFSI organizations take between twelve and eighteen months to move from initial planning to a functioning AI CoE with pilot use cases in production. Timelines vary based on organizational complexity, executive alignment, and the state of existing data infrastructure.

Should BFSI organizations build an AI CoE with internal teams or partner with external specialists?

A hybrid approach often works best. Internal teams own strategy and domain expertise while external partners accelerate infrastructure setup, MLOps maturity, and initial model development. The key is ensuring knowledge transfer so internal teams can operate independently over time.

What budget considerations should executives plan for when launching an AI CoE?

Budget covers talent acquisition, cloud or hybrid infrastructure, tooling licenses, governance tooling, and change management. Most organizations underestimate ongoing operational costs relative to initial setup. Planning for sustained investment rather than a one-time project budget is important.
