FCA AI and complaint handling: Why complaints are the real test of trustworthy AI

First published: 14 January 2025
Last updated: 16 February 2026
2026 update: The FCA’s AI Sprint insights are now being operationalised through its AI Lab programmes, including live testing and supervised experimentation environments for firms developing AI responsibly.
In January 2025, the FCA brought together 115 firms, regulators, academics, and technology providers for its AI Sprint. The themes were clear: trust, explainability, governance, and safe testing. Complaint handling is where those principles meet reality, which is why it’s the proving ground for responsible AI.
The Sprint marked a clear signal of intent from the Financial Conduct Authority, but it wasn’t the end of the conversation. Since then, the FCA has expanded its work through its AI Lab, a structured programme that gives firms practical routes to test, validate and govern AI in real-world conditions. In other words, the focus has already shifted from theory to supervised implementation.
What are hybrid models in complaint handling?
A hybrid model uses AI to support tasks such as triage, retrieval, and pattern detection, while humans retain final decision-making and accountability.
Practical ways this helps a complaints team:
Auto triage by type, urgency, or DISP timelines
Highlight Consumer Duty and DISP risks to indicate extra care is required
Surface recurring patterns and systemic themes from historic data
Suggest similar case outcomes to support consistency and fairness
Create a clear rationale trail to support SMCR accountability and FOS review
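To make the triage and rationale-trail ideas concrete, here is a minimal, hypothetical sketch in Python. Everything in it is illustrative: the field names, the priority rule, and the 8-week final-response window (based on DISP time limits, which you should verify against the current Handbook) are assumptions, not an FCA-prescribed design.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative only: check the current DISP rules for the actual deadline.
FINAL_RESPONSE_WEEKS = 8

@dataclass
class Complaint:
    case_id: str
    category: str                      # e.g. "fees", "service", "mis-selling"
    received: date
    vulnerable_customer: bool = False
    rationale_log: list = field(default_factory=list)

def triage(complaint: Complaint, today: date) -> dict:
    """Suggest a priority and record the rationale for later audit or FOS review."""
    deadline = complaint.received + timedelta(weeks=FINAL_RESPONSE_WEEKS)
    days_left = (deadline - today).days
    # Hypothetical rule: escalate when the deadline is close or the customer is vulnerable.
    priority = "urgent" if days_left <= 14 or complaint.vulnerable_customer else "standard"
    complaint.rationale_log.append({
        "step": "triage",
        "priority": priority,
        "reason": f"{days_left} days to final-response deadline; "
                  f"vulnerable={complaint.vulnerable_customer}",
    })
    return {"priority": priority, "days_left": days_left}
```

The point of the `rationale_log` is the audit trail: every automated suggestion is recorded alongside its reason, so a human reviewer, or the FOS, can see how a case was prioritised.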
If a complaint escalates to the Financial Ombudsman Service (FOS), your people must be able to explain how a decision was reached, and why it is fair. FCA rules expect robust governance and clear accountability.
How can complaint data reveal bias and improve fairness?
Complaint data should act as a live feedback loop, spotting bias early, preventing harm, and improving both AI models and human judgement.
How to operationalise this:
Define measurable success criteria before any model is used
Monitor outcomes continuously for drift and unequal impact
Add human checkpoints wherever outcomes affect vulnerable customers
Log decisions and rationale so they can be audited and explained
Retrain models where evidence shows unfair outcomes
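As a sketch of what "monitor outcomes continuously for unequal impact" could look like in practice, the fragment below compares uphold rates across customer segments and flags when the spread exceeds a threshold. The segment labels, data shape, and 10-point threshold are assumptions for illustration; real monitoring would also need proper statistical testing and sample-size checks.

```python
from collections import defaultdict

# Hypothetical threshold: flag for review if uphold rates differ by >10 points.
DISPARITY_THRESHOLD = 0.10

def uphold_rates(cases):
    """cases: iterable of (segment, upheld: bool). Returns uphold rate per segment."""
    totals, upheld = defaultdict(int), defaultdict(int)
    for segment, was_upheld in cases:
        totals[segment] += 1
        upheld[segment] += int(was_upheld)
    return {s: upheld[s] / totals[s] for s in totals}

def flag_disparity(cases):
    """Flag when the gap between the best- and worst-treated segments is too wide."""
    rates = uphold_rates(cases)
    spread = max(rates.values()) - min(rates.values())
    return {"rates": rates, "spread": spread,
            "review_needed": spread > DISPARITY_THRESHOLD}
```

A flagged result is a trigger for the human checkpoints above, not an automated verdict: it tells the team where to look, and the logged rates become part of the evidence trail.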
What’s the Supercharged Sandbox?
It’s an FCA programme that gives firms compute, datasets, tooling, mentorship, and regulatory engagement, so they can test AI safely.
Key dates at a glance:
Applications closed on 11 August 2025
Kick-off and bootcamp on 30 September 2025
Testing runs from October 2025 to early January 2026
Demo Day in January 2026
The FCA has also announced a collaboration that provides access to NVIDIA accelerated computing and NVIDIA AI Enterprise, which helps firms experiment with advanced workloads in a controlled environment.
What’s changed since the AI Sprint?
The regulatory conversation has moved on quickly. What began as collaborative exploration is now a set of practical engagement channels for firms developing AI responsibly.
Today, the FCA’s AI Lab includes live testing programmes, collaborative sandboxes, industry spotlight initiatives and structured feedback routes. These are designed to help firms move from experimentation to controlled deployment, with regulatory visibility built in from the start.
For complaint teams and operational leaders, this matters. It means trustworthy AI is no longer a future concept. It is something firms can actively test, evidence and refine today.
How is the FCA shaping global standards?
The FCA is engaging with international bodies such as IOSCO and the Financial Stability Board, so that the United Kingdom’s approach to safe AI adoption remains interoperable with global regimes. Firms should build assurance and governance now, rather than wait for a new rulebook.
How can complaint teams influence AI design?
Complaint teams do more than resolve cases. They provide the evidence regulators, boards and customers rely on to judge whether automation is fair in practice. Because complaints capture moments where expectations and outcomes diverge, they are one of the most reliable indicators of whether AI supported decisions are genuinely working.
Why explainability must start with process, not AI
AI does not prove itself in innovation labs. It proves itself in complaint cases.
Explainability isn’t something you bolt onto AI after the fact. It begins with a process backed by clear rules, traceable workflows, and human-led decision points. Together, they create outcomes you can evidence under SMCR and Consumer Duty.
AI can certainly help, for example, by summarising decisions, flagging checkpoints, and mapping how cases move through a workflow. But if the underlying process lacks structure, the summary is meaningless.
That’s why firms must first show that today’s decisions are fair and explainable. Our Consumer Duty two-year review highlights what the FCA expects in digital journeys: the baseline every AI system will be judged against.
Three questions every complaint leader should ask about AI
Where can AI assist without removing oversight?
Are we using complaint data to monitor bias and improve fairness?
Can we explain every decision and prove it to regulators?
Regulators are signalling something important. They are not waiting for a single rulebook before expecting firms to act. The direction of travel is clear: organisations should be building governance, explainability and auditability into systems now so that when scrutiny increases, evidence already exists.
Why trust matters more than ever
Trust is the deciding factor in whether AI will succeed in financial services. That trust will not be earned through capability alone, but through consistent evidence that decisions are fair, explainable and accountable.
This evidence comes from process first, technology second. Firms that can demonstrate structured workflows, traceable decisions and documented rationale will always be better placed to adopt AI safely and confidently.
The Complyr platform is designed with that foundation in mind. Clear workflows, full case histories and transparent collaboration create confidence in decisions today and provide the structured environment AI needs to operate responsibly tomorrow.
The firms that treat complaint handling as their testing ground for trustworthy AI now will be the ones regulators trust first later.
In regulated industries, trust is not claimed. It is evidenced.
If you’re looking to strengthen your processes, these blogs are a good place to start:
[The Fishbone Diagram And 5 Whys To Improve Complaint Handling] – how to get to the root cause of recurring complaints
[The 5 Cs Of Complaint Handling And How To Use The Framework] – a simple framework for fair and consistent outcomes
[What Is Complaint Case Management Software] – an explainer of the tools that make workflows easier
[Build A Business Case For Complaint Management Software] – how to win support and investment from leadership
Frequently Asked Questions
What did the FCA AI Sprint cover?
The FCA AI Sprint covered trust and risk awareness, clarity on how existing rules apply, collaboration across industry, and safe innovation through sandboxes.
Why use AI in complaint handling if humans make the final decisions?
Even with humans making the final decisions, AI adds value by triaging cases, retrieving evidence, summarising notes, and spotting patterns, while people remain accountable for the outcome.
How do we evidence fairness in regulated complaint handling?
To evidence fairness in regulated complaint handling, define fairness criteria and ensure everyone understands what they mean. Log decision rationales, monitor outcomes for bias, and record improvements. Document everything so it can be explained to regulators; explainability matters because if it isn’t documented, it didn’t happen.
What’s the FCA Supercharged Sandbox?
The FCA’s Supercharged Sandbox is a safe environment where firms can test AI with access to compute, datasets, tools, and regulatory guidance.
How does the FCA expect firms to use AI responsibly?
The FCA expects firms to embed fairness, explainability, and accountability into their governance and oversight, and to keep humans in control of final decisions.
Why is complaint handling the proving ground for AI?
Complaint handling is the proving ground for AI because it requires fairness, traceability, and accountability for every case; the same principles regulators expect from AI systems.
