FCA publishes Research Note exploring the value of explaining AI outputs to consumers

The FCA has published a Research Note, ‘Credit where credit is due: How can we explain AI’s role in credit decisions for consumers?’, as part of its AI research. It is accompanied by an Annex, a detailed data table and a ‘data dictionary’ explaining the data items in the published data table.
The Research Note focuses on the relative efficacy of different models for explaining AI outputs to consumers in the context of using AI to determine consumer creditworthiness. It forms part of the FCA’s AI Research Series, a programme of publications designed to drive forward conversations and work towards safe and responsible AI use in the UK financial services sector.
Specifically, this research adds to the FCA’s exploration of AI transparency and the role of explainability models as a potential solution. AI and ML systems are often viewed as ‘black boxes’; their decision-making processes can be difficult to interpret, making it challenging to explain to customers how they operate and to achieve transparency.
In addition, despite their growing sophistication, AI and ML systems can make mistakes; where decisions made by these systems have financial implications, such mistakes could result in high costs and negative impacts for customers.
Explainability seeks to increase transparency by providing customers with reasons or justifications for AI decisions, allowing them to understand the models and how they may make mistakes.
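To make the idea concrete, the sketch below is a purely illustrative Python example rather than anything drawn from the Research Note: a toy credit-scoring model with hand-picked weights, where each feature’s contribution to the score is turned into a plain-language reason that could accompany the decision.

```python
# Illustrative only: a toy credit-scoring model with hand-picked weights,
# used to show how feature contributions can be turned into plain-language
# reasons for a decision. None of the fields, weights or thresholds come
# from the FCA's research.

applicant = {
    "income_gbp": 28_000,
    "existing_debt_gbp": 9_000,
    "missed_payments_last_year": 3,
    "years_at_current_address": 4,
}

# Hypothetical per-unit weights and intercept for a simple linear score.
weights = {
    "income_gbp": 0.00004,
    "existing_debt_gbp": -0.00006,
    "missed_payments_last_year": -0.35,
    "years_at_current_address": 0.05,
}
intercept = 0.2
approval_threshold = 0.0

# Each feature's contribution to the overall score.
contributions = {name: weights[name] * value for name, value in applicant.items()}
score = intercept + sum(contributions.values())
decision = "approved" if score >= approval_threshold else "declined"

print(f"Decision: {decision} (score {score:.2f})")
print("Main factors, largest effect first:")
for feature, effect in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    direction = "raised" if effect > 0 else "lowered"
    print(f"  - {feature} {direction} the score by {abs(effect):.2f}")
```

A production explainability method (for example, SHAP-style attributions over a more complex model) would compute the contributions differently, but the consumer-facing output takes the same shape: a decision plus the factors that drove it.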
Key Findings of the Research Note
To test the relative efficacy of different models for explaining AI outputs to consumers, the research tested whether participants were able to identify errors caused either by incorrect data used by a credit scoring algorithm or by flaws in the algorithm’s decision logic itself.
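As a purely illustrative aid (the scoring function, figures and field names below are hypothetical, not taken from the experiment), the sketch contrasts the two error types participants were asked to spot: sound decision logic fed incorrect input data, and correct input data processed by flawed decision logic that ignores a relevant field.

```python
# Illustrative only: the two kinds of error tested in the experiment, shown
# against a hypothetical scoring function. Neither the function nor the data
# comes from the Research Note.

def credit_score(income_gbp, missed_payments, years_at_address):
    # Hypothetical, but sound, decision logic.
    return 0.00004 * income_gbp - 0.35 * missed_payments + 0.05 * years_at_address

true_profile = {"income_gbp": 42_000, "missed_payments": 0, "years_at_address": 6}

# Error type 1: incorrect input data -- the logic is fine, but the algorithm
# was fed the wrong income figure (a digit lost somewhere upstream).
bad_input = dict(true_profile, income_gbp=4_200)
print(f"Data-input error:     {credit_score(**bad_input):.2f} "
      f"vs correct {credit_score(**true_profile):.2f}")

# Error type 2: flawed decision logic -- the data is correct, but the
# algorithm fails to use a relevant piece of information about the consumer.
def flawed_credit_score(income_gbp, missed_payments, years_at_address):
    return 0.00004 * income_gbp - 0.35 * missed_payments  # years_at_address ignored

print(f"Decision-logic error: {flawed_credit_score(**true_profile):.2f} "
      f"vs correct {credit_score(**true_profile):.2f}")
```

This distinction matters for the findings below: whether an explanation helps depends on which of these two failure modes the consumer is trying to catch.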
The note’s experiment found that the method of explaining algorithm-assisted decisions significantly impacted the participants’ ability to judge the decisions. However, it also found the impact of the explanations tested varied, in ways that were not anticipated, depending on the type of error.
For instance, whilst simply providing an overview of the data available to the algorithm impaired participants’ ability to identify errors in the data input, it aided them in challenging errors in the algorithm’s decision logic, such as the algorithm failing to use a relevant piece of information about the consumer. Surprisingly, it did so more effectively than explanations focused on the decision logic itself.
The note proposes two hypotheses to explain the inconsistent effects of the different explanation genres. Firstly, presenting people with additional information may make it harder to spot errors simply because there is more information to review. Secondly, providing additional information about the algorithm’s decision logic may cause people to focus on whether that logic was followed, rather than on whether the logic itself was sound.
The note also found that providing additional information about the inner workings of the algorithm was well received by consumers and increased their confidence in their ability to disagree with the algorithm’s decisions. However, more information was not always helpful for decision-making, and could lead to worse outcomes by impairing consumers’ ability to challenge errors.
Overall, the findings emphasised the value of testing any accompanying materials provided to consumers to explain AI, ML and/or algorithmic decision-making. They also underscored the importance of testing consumers’ decision-making within the relevant context, rather than relying solely on self-reported attitudes.
Next Steps
The FCA hopes the Note and accompanying data are of interest to those who build models, and to financial firms and consumer groups seeking to understand the complexities and risks of building and implementing AI systems.
As noted in the FCA’s 2024 AI Update, the existing regulatory framework does not directly mention the explainability or transparency of AI systems, but its high-level requirements still apply to the information firms provide to consumers.
For instance, the consumer understanding outcome under the Consumer Duty requires firms to ensure consumers are equipped with information that is clear, accessible, tailored to their needs, and that allows them to make informed decisions. Firms are also required to test and monitor the impact of their communications to ensure they support good consumer outcomes.
The findings of this FCA note therefore offer valuable insights for firms considering AI and ML explainability methods as a way to communicate with customers and increase trust and transparency, and could help such firms align their implementation of AI with regulatory requirements and secure good consumer outcomes.