By Huw Jones
LONDON (Reuters) – Applying artificial intelligence (AI) to financial services must go hand-in-hand with better fraud prevention and resilience to hacking and outages, Britain’s Financial Conduct Authority (FCA) was expected to say on Wednesday.
Nikhil Rathi, chief executive of the FCA, said in remarks made available to the media ahead of a speech that he was already seeing AI-based business models requesting authorisation.
The use of AI can benefit markets, for example by cutting prices for consumers, but it can also cause imbalances if “unleashed unfettered”, Rathi will say.
“This means that as AI is further adopted, the investment in fraud prevention and operational and cyber resilience will have to accelerate simultaneously,” Rathi will say.
“We will take a robust line on this – full support for beneficial innovation alongside proportionate protections. We will remain super vigilant on how firms mitigate cyber-risks and fraud given the likelihood that these will rise.”
The watchdog has already observed that volatility during the trading day has doubled and become amplified compared with levels seen during the 2008 global financial crisis.
“This surge in intraday short-term trading across markets and asset classes suggests investors are increasingly turning to highly automated strategies,” he will say.
The watchdog will test how its existing rules on senior managers’ accountability at firms it regulates and forthcoming tougher “consumer duty” on firms towards their customers can manage risks and develop opportunities from AI, he will say.
The FCA on Wednesday is also expected to set out its thinking on regulating the intersection of Big Tech and financial services, including questions such as whether these firms’ troves of data could disrupt competition in the market.
(Reporting by Huw Jones; Editing by Mark Potter)