Both Sides of the Coin: AI and its Role in Financial Services

The FCA’s Chief Data, Information and Intelligence Officer, Jessica Rusu, recently delivered a thought-provoking speech on the benefits, risks, and governance of artificial intelligence in financial services. 

Speaking at the City and Financial Global AI Regulation Summit 2023, Rusu focused on the impact of technology and how it has shaped the financial habits of consumers. With the advancement of artificial intelligence, the financial services industry is “at a pivotal junction” and will soon have to face the question: should AI be embraced, or is it something to avoid?

The FCA is already using AI to identify fraud and people posing as financial professionals, and has recently been developing tools that identify and review potential scam websites.

When regulated properly, the use of AI can enable firms to offer better products and services. This in turn could lead to better outcomes for consumers, firms, financial markets, and the wider economy. Implementation could also improve operational efficiency, boost revenue, and drive innovation within the insurance sector, particularly in how risk is calculated from data.

One application of AI is processing customer data and responding to customer queries in real time through AI-enabled ‘chatbots’, improving the customer experience by resolving queries faster than conventional customer support journeys. This may include more effective matching to products and services, enhanced abilities to identify and support consumers with characteristics of vulnerability, and increased financial access.

Like two sides of a coin, AI offers great potential for success, but it also carries the risk of significant harm. AI relies heavily on large quantities of data, which increases the risk of error in processing and storage; if left unmanaged, that data can become incomplete or unrepresentative.

AI can create significantly more problems, or amplify existing ones, if it is not implemented correctly. For example, poor data management may leave firms exposed to a greater risk of cyber threats if their software is unable to detect malicious content within its systems.

In the wrong hands, AI can be used in scams such as vishing (fraudulent phone calls) and audio deepfakes (AI-generated voices that imitate a real person’s). This can lead to the harmful targeting of consumers’ behavioural biases based on their characteristics of vulnerability, which could result in individual consumer harm and reduced trust.

If the insurance sector wishes to expand its use of AI over the next few years, it will need to move beyond central operations and call centres so that it can catch up with the levels of service that other parts of the financial services industry already provide to their customers.

The FCA, Bank of England, and Prudential Regulation Authority have previously stated that they have “a close interest in the safe and responsible adoption of AI in financial services” in line with their statutory objectives. This may involve intervening further to mitigate any potential risks and harms related to AI applications, including considerations for how policies and regulations can best be supported. 

About the author

Regine worked at RWA between 2021 and 2023, having graduated from Loughborough University with a 2:1 in Graphic Communication and Illustration. As a Digital Content Assistant, Regine used their graphic design and illustration experience to create engaging e-learning modules.
