The Big Shift on AI and Due Diligence
Posted on 14/10/25 in Insights
Artificial intelligence is moving quickly from pilots to everyday operations in UK financial services. According to the latest FCA and Bank of England survey of 118 firms, 75% are already deploying AI, often through third-party providers, and nearly half report only a partial understanding of the technologies involved. Adoption at this scale raises new questions about oversight, accountability, and the level of due diligence required when selecting and managing AI vendors and platforms. Regulators have confirmed they will not introduce new rules at this stage, but they are closely watching how AI is governed in practice.
This research informed the discussion at the Big Shift webinar on AI and due diligence. Even without new regulation, careful evaluation of AI vendors and platforms is essential to manage risk, protect data and ensure reliable AI outputs. Existing financial services regulation already governs what firms do at a macro level, and AI technology still falls within its scope.
The webinar, chaired by Poppy Achilles with guest speakers Mark Whitcroft and Stephen Mitchell, explored:
- How firms can strengthen oversight as AI becomes more embedded
- What effective due diligence looks like in an AI context
- The key questions to ask vendors
- How traditional due diligence methods may need to adapt
Unlike previous waves of technology, AI is not just a faster, more efficient version of existing tools. It introduces new capabilities that require a different level of scrutiny when evaluating vendors and platforms.
A live poll of attendees revealed integration with existing systems as the top concern, followed by the data used to train AI models and security considerations.
Integration with existing systems
Integration challenges are not new, but AI adds complexity. It is rarely just a matter of connecting two systems. Firms should be clear about which parts of their technology stack the AI will interact with, and how deeply. Some use cases only need basic data transfer, while others rely on workflow triggers or full end-to-end automation.
Effective vendor due diligence includes hands-on testing and verification of integration in a firm’s own environment, rather than relying on generic demonstrations. Assessment should cover how the tool performs across workflows, how many clients are using it in practice, and whether it can handle updates to core systems without disruption. Clarifying who is responsible for updates and how the vendor ensures continuity as systems evolve is equally important.
Integration should also be seen more broadly than CRMs and core platforms. Everyday tools such as email, calendars, and document storage also play a role. Evaluating how well an AI platform interacts with all relevant business tools should be part of the vendor selection from the outset. How well AI reads, writes, and updates information across workflows, and how well it adapts to system change, will determine its long-term operational value.
Data used to train AI models
Understanding the data behind an AI model helps judge how reliable it is. Vendors may use large language models (LLMs), small language models (SLMs), often proprietary, or a mix of the two. Some draw on public datasets, while others fine-tune models with sector-specific or firm-specific information.
Due diligence must cover how models are customised, updated, and versioned over time, and whether new data sources or sub-processors are added. It is also important to understand how vendors test for bias, accuracy, and fairness, and how the system performs across different client scenarios. These checks ensure that AI outputs remain relevant, reliable, and aligned with operational needs.
Security considerations
Security and data privacy remain fundamental. Financial services handle highly sensitive information, and many AI vendors are still young and fast-moving. It’s important to confirm how providers protect data, manage evolving models, and comply with regulatory frameworks such as ISO 27001, SOC 2, and the EU’s Digital Operational Resilience Act (DORA).
Due diligence should go beyond standard questionnaires to cover AI-specific risks that traditional frameworks may miss, including prompt injection, model inversion, and data extraction. Organisations are responsible for checking how client data is stored, masked, or encrypted, and how residual data or learned patterns are removed when a client relationship ends.
Because vendors frequently update or replace underlying models, it is also essential to confirm that changes are tested, monitored and fully documented. Understanding the vendor’s security and compliance expertise, and how often standards and processes are reviewed, helps ensure ongoing protection. These continuous security checks are a critical part of any AI vendor selection and oversight process.
AI accuracy, diversity and fairness
AI outputs are influenced by more than just the underlying models; business logic shapes whether results are practical and relevant. This is particularly important for sector-specific tasks, where general models may miss nuances. Providers often combine domain knowledge with AI to create outputs that are more useful for clients.
Due diligence needs to examine how vendors ensure accuracy, diversity, and fairness in practice. Accuracy means outputs are factually and contextually correct, customised for specific roles and consistent across client records. Diversity and fairness help ensure performance across different client scenarios and reduce bias in training data or industry patterns.
Because data such as transcripts reflects a single point in time, it’s important to understand how providers maintain accuracy over the longer term. Human review, automated testing, and regular monitoring of model performance all play a part. These practices support sustained reliability and ensure outputs remain relevant as data and use cases evolve.
A matter of expertise…
All these considerations come back to one core need: firms must have a working understanding of AI. Live polling highlighted limited in-house expertise as the main challenge (46%), followed by evaluation methods (25%), regulatory uncertainty (17%), and vendor vetting (13%).
This aligns with the Bank of England and FCA’s findings last year, which showed that while adoption is accelerating, many firms are still building their knowledge base. Developing in-house skills in AI due diligence (knowing what to ask, how to assess responses, and when to challenge vendors) is becoming as important as the technology itself. As AI models and capabilities continue to evolve rapidly, investing in upskilling, or drawing on external expertise, is increasingly essential.
Looking ahead
AI differs from previous waves of technology because it introduces capabilities that go beyond speed and efficiency. Existing due diligence processes remain relevant, but they need to be adapted to account for AI’s complexity and fast-evolving nature. This includes testing AI in-house, understanding the data behind models, assessing outputs for accuracy, and confirming security and resilience.
This is not a complete overhaul of due diligence. It reflects a higher level of scrutiny and ongoing attention required when working with AI vendors and platforms. Implementing this approach requires a combination of internal expertise, clear processes, and, where necessary, external support. While the principles of careful evaluation remain the same, the methods must adapt to the complexity and ongoing evolution of AI. Due diligence for AI is not a one-off checklist; it is an ongoing, active process that matches the pace and nature of the technology itself.
The next and final Big Shift webinar is taking place on 18 November. If you’d like to attend, you can register your interest via this form.
FAQs
Q: What is AI due diligence?
A: It’s the process of evaluating AI platforms for compliance, security, and operational fit.
Q: What are the biggest risks in AI adoption?
A: Poor integration, lack of transparency, and insufficient security controls.
Q: How can firms assess AI vendor reliability?
A: By testing platforms in-house, reviewing data practices, and confirming regulatory alignment.
Q: How does Alirity help?
A: Through readiness assessments, governance frameworks, and expert-led transformation support.