AI, the Regulatory Perimeter, and the Next Retail Investment Trap
The Financial Conduct Authority’s review of artificial intelligence in financial services is timely and necessary. But it also risks repeating a familiar regulatory mistake: tightening rules inside the regulatory perimeter while leaving incentives outside it untouched.
AI fundamentally lowers the cognitive barriers to investing. It cuts through complexity, personalises narratives, and gives people confidence to act. That can be a force for good: widening participation, improving understanding, and supporting better decisions.
But it also changes where risk sits.
The key problem is that AI does not have to respect the regulatory perimeter. It can describe an unregulated proposition just as fluently as a regulated investment product. The history of internet search and social media shows how unregulated investments, unshackled by regulatory responsibilities, gain profile simply because their access and messaging are not bound by the same rules. The Advertising Standards Authority (ASA) is supposed to police this space, but it is hard to see how it could police a technology that may not even sit within its jurisdiction.
If AI within the perimeter becomes cautious, disclaimer-heavy and tightly constrained, while AI outside it remains fluent, confident and unconstrained, then a perverse incentive is created. Capital, attention and risk will migrate to the least regulated interfaces, not the safest investments.
This is not a hypothetical risk. The collapse of London Capital & Finance sat in just such a “lacuna” of regulation, to quote Dame Elizabeth Gloster’s scathing report on the regulatory failures that allowed it to operate for so long, and AI potentially widens that blind spot further.
The danger is not that regulated firms use AI badly. The danger is that they use it responsibly and lose traction with investors. Meanwhile, unregulated propositions benefit from speed, confidence and behavioural appeal, precisely because they sit outside the FCA’s perimeter.
For retail investors, particularly new and less affluent participants, the choice is rarely framed as “regulated versus unregulated”. It is framed as “accessible versus inaccessible”. If regulation makes the safe route harder to use and the risky route easier to navigate, harm increases rather than falls.
The Policy Challenge
The policy challenge, therefore, is not simply how AI is governed within regulated firms. It is how regulators avoid creating asymmetric constraints that reward the most behaviourally aggressive actors.
A regulatory framework that focuses on outcomes, behavioural impact and interface design, rather than legal wrappers alone, would be a step forward. Without that, well-intentioned AI regulation risks pushing the next generation of retail investors precisely where regulators least want them to go.