Friday, August 22, 2025

FCA AI Sprint 2025: Why complaint handling is the test for trustworthy AI


In January 2025, the FCA brought together 115 firms, regulators, academics, and technology providers for its AI Sprint. The themes were clear: trust, explainability, governance, and safe testing. Complaint handling is where those principles meet reality, which is why it’s the proving ground for responsible AI.

What are hybrid models in complaint handling?

A hybrid model uses AI to support tasks such as triage, retrieval, and pattern detection, while humans retain final decision-making and accountability.

Practical ways this helps a complaints team:

  • Auto-triage complaints by type, urgency, or DISP deadlines

  • Highlight Consumer Duty and DISP risks so handlers know where extra care is required

  • Surface recurring patterns and systemic themes from historic data

  • Suggest similar case outcomes to support consistency and fairness

  • Create a clear rationale trail to support SMCR accountability and FOS review

If a complaint escalates to the Financial Ombudsman Service (FOS), your people must be able to explain how a decision was reached, and why it is fair. FCA rules expect robust governance and clear accountability.
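
To make that tangible, here is a minimal sketch of a hybrid triage step, assuming a generic complaints workflow rather than any specific platform: the model suggests a category, a named handler makes the final call, and both the suggestion and the human rationale go into an append-only log. The function and field names (suggest_category, decide, CaseDecision, decision_log.jsonl) are illustrative assumptions, not an existing API.

    # Illustrative sketch: AI suggests, a named handler decides, and the
    # rationale is logged so the outcome can be explained later (e.g. to FOS).
    # All names and the log format are assumptions for this example.

    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class CaseDecision:
        case_id: str
        ai_suggested_category: str
        ai_confidence: float
        handler: str               # the accountable individual
        final_category: str        # the human's decision, not the model's
        rationale: str             # written justification for the outcome
        decided_at: str

    def triage(case_id, complaint_text, handler, suggest_category, decide):
        """AI proposes a category; the handler confirms or overrides it."""
        category, confidence = suggest_category(complaint_text)   # AI assistance
        final_category, rationale = decide(category, confidence)  # human decision
        decision = CaseDecision(
            case_id=case_id,
            ai_suggested_category=category,
            ai_confidence=confidence,
            handler=handler,
            final_category=final_category,
            rationale=rationale,
            decided_at=datetime.now(timezone.utc).isoformat(),
        )
        # Append-only audit trail: every case keeps its rationale on record.
        with open("decision_log.jsonl", "a") as log:
            log.write(json.dumps(asdict(decision)) + "\n")
        return decision

    # Example wiring, with placeholder callables standing in for the model and the handler:
    triage(
        case_id="C-1042",
        complaint_text="Unauthorised card payment not refunded",
        handler="j.smith",
        suggest_category=lambda text: ("disputed_transaction", 0.87),
        decide=lambda cat, conf: (cat, "Handler reviewed the evidence and agrees with the suggestion."),
    )

The design point is the division of labour: the model only populates ai_suggested_category, while final_category and rationale always come from a person, which is what keeps accountability with the handler.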

How can complaint data reveal bias and improve fairness?

Complaint data should act as a live feedback loop, spotting bias early, preventing harm, and improving both AI models and human judgement.

How to operationalise this:

  • Define measurable success criteria before any model is used

  • Monitor outcomes continuously for drift and unequal impact (see the sketch after this list)

  • Add human checkpoints wherever outcomes affect vulnerable customers

  • Log decisions and rationale so they can be audited and explained

  • Retrain models where evidence shows unfair outcomes
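
As a minimal sketch of that monitoring point, assuming complaint outcomes are recorded with a customer-group label, a simple check can compare uphold rates across groups and flag gaps for human review. The 10% tolerance, group labels, and field names below are example assumptions, not regulatory figures.

    # Illustrative sketch: compare uphold rates across customer groups and
    # flag any gap above an agreed tolerance for a human checkpoint.
    # The threshold and field names are assumptions for this example.

    from collections import defaultdict

    MAX_UPHOLD_RATE_GAP = 0.10  # example tolerance, set by your own fairness criteria

    def uphold_rates_by_group(cases):
        """cases: iterable of dicts with 'group' and 'upheld' (bool) keys."""
        totals, upheld = defaultdict(int), defaultdict(int)
        for case in cases:
            totals[case["group"]] += 1
            upheld[case["group"]] += int(case["upheld"])
        return {group: upheld[group] / totals[group] for group in totals}

    def flag_unequal_impact(cases):
        rates = uphold_rates_by_group(cases)
        gap = max(rates.values()) - min(rates.values())
        # Route to human review rather than retraining automatically.
        return {"flag": gap > MAX_UPHOLD_RATE_GAP, "gap": round(gap, 3), "rates": rates}

    # Example: a visible gap between groups gets flagged for review.
    sample = [
        {"group": "standard", "upheld": True}, {"group": "standard", "upheld": False},
        {"group": "vulnerable", "upheld": False}, {"group": "vulnerable", "upheld": False},
    ]
    print(flag_unequal_impact(sample))  # {'flag': True, 'gap': 0.5, ...}

A flag here is a prompt for investigation and a human checkpoint, not proof of bias on its own; the evidence for retraining comes from logging what reviewers then find.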

What is the Supercharged Sandbox?

It's an FCA programme that gives firms compute, datasets, tooling, mentorship, and regulatory engagement, so they can test AI safely.

Key dates at a glance:

  • Applications closed on 11 August 2025

  • Kick-off and bootcamp on 30 September 2025

  • Testing runs from October 2025 to early January 2026

  • Demo Day in January 2026

The FCA has also announced a collaboration that provides access to NVIDIA accelerated computing and NVIDIA AI Enterprise, which helps firms experiment with advanced workloads in a controlled environment.

How is the FCA shaping global standards?

The FCA is engaging with international bodies such as IOSCO and the Financial Stability Board, so that the United Kingdom’s approach to safe AI adoption remains interoperable with global regimes. Firms should build assurance and governance now, rather than wait for a new rulebook.

How can complaint teams influence AI design?

Complaint teams do more than resolve cases. They shape how responsibly their organisations adopt AI by applying three governance verbs to every programme: define, document, and review (sketched after the list below).

  • Define fairness standards and thresholds that automation must follow

  • Document rationale and human checkpoints for each decision path

  • Review live outcomes and retrain when evidence shows bias or harm
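
One way to turn those three verbs into something reviewable, shown here purely as an illustration with assumed names and values rather than prescribed settings, is to hold them as a versioned configuration that sits alongside the complaints workflow:

    # Illustrative sketch: fairness standards are defined up front, each
    # decision path documents its human checkpoint, and a review cadence is
    # scheduled against live outcomes. Values and names are examples only.

    FAIRNESS_STANDARDS = {                       # define
        "max_uphold_rate_gap": 0.10,
        "max_resolution_days_gap": 5,
        "vulnerable_customer_checkpoint": True,
    }

    DECISION_PATHS = {                           # document
        "auto_triage": {
            "human_checkpoint": "handler confirms the suggested category",
            "rationale_required": True,
        },
        "final_response": {
            "human_checkpoint": "senior reviewer signs off before issue",
            "rationale_required": True,
        },
    }

    REVIEW_SCHEDULE = {                          # review
        "cadence_days": 30,
        "retrain_trigger": "evidence of unfair outcomes against FAIRNESS_STANDARDS",
    }

Keeping a record like this under version control gives a dated account of what "fair" meant at the time each decision was made, which is the kind of evidence SMCR accountability and FOS review tend to ask for.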

Why explainability must start with process, not AI

Explainability isn’t something you bolt on once AI arrives. It begins with a process backed by clear rules, traceable workflows, and human-led decision points. Together, they create outcomes you can evidence under SMCR and Consumer Duty.

AI can certainly help, for example, by summarising decisions, flagging checkpoints, and mapping how cases move through a workflow. But if the underlying process lacks structure, the summary is meaningless.

That’s why firms must first show that today’s decisions are fair and explainable. Our Consumer Duty two-year review highlights what the FCA expects in digital journeys: the baseline every AI system will be judged against.

Three questions every complaint leader should ask about AI

  1. Where can AI assist without removing oversight?

  2. Are we using complaint data to monitor bias and improve fairness?

  3. Can we explain every decision and prove it to regulators?

Why trust matters more than ever

Participants in the FCA’s AI Sprint agreed on one thing: trust is vital if financial services are to make real use of AI. Complaint handling is the perfect place to prove it. Even with the right foundation and a streamlined process, AI can only earn trust by consistently demonstrating fairness, explainability, and accountability, all backed by data and evidence.

The Complyr platform gives teams this foundation. With clear, user-friendly workflows and fully traceable case histories, it makes collaboration simple and transparent. That builds trust in the process today and creates the ideal starting point for AI tomorrow.


FAQs

What did the FCA AI Sprint cover?

Trust and risk awareness, clarity on how existing rules apply, collaboration across industry, and safe innovation through sandboxes.

Why use AI in complaints if humans decide?

AI can triage, retrieve evidence, summarise notes, and spot patterns, leaving humans to make the final decisions and stay accountable for them.

How do we evidence fairness?

Define fairness criteria and make sure everyone understands what they mean. Log decision rationales, monitor outcomes for bias, and record improvements. Document everything so it can be explained to regulators: explainability depends on it, because if it’s not documented, it didn’t happen.

What is the FCA Supercharged Sandbox?

A safe environment where firms can test AI with access to compute, datasets, tools, and regulatory guidance.

How does the FCA expect firms to use AI responsibly?

By embedding fairness, explainability, and accountability into governance, and keeping humans in control of final decisions.

Why is complaint handling the proving ground for AI?

Because it requires fairness, traceability, and accountability for every case: the same principles regulators expect from AI systems.
