The writer is a former global head of research at Morgan Stanley and former group head of research, data and analytics at UBS
It is difficult to find a wealth management operation at a bank or a broker that is not trying to figure out how to incorporate artificial intelligence into its offering. AI is both an opportunity and a competitive threat.
Active wealth management tries to understand how to fit a vast array of products to changing life needs and circumstances. Providing tailored advice is expensive, though. One of the biggest opportunities for AI in this area is to extend offerings to those previously excluded on cost grounds because their wealth was simply not enough to justify the service.
So-called robo-advisers have not been popular in doing this where human alternatives exist. Even if it is accepted that an AI-powered robo-adviser can design the best fit for an individual from, say, thousands of funds, stocks and bonds, a static proposition is not good enough. Active communication is needed between the client and the engine that powers the recommendation. That is the key obstacle that AI-driven advice has to overcome.
If a client, or the adviser, mostly wants to minimise the costs of wealth advice, it is safe to assume that simple, rule-based engines will do the job. Automated advice will gradually improve in sophistication, friendliness of interface and cost. But the deeper problem for someone trying to build an army of robo-advisers to capture the most value-added clients lies elsewhere. In his recent book, The Atomic Human, Neil Lawrence makes a compelling case for the difficulties that we have communicating with a computer. Machines absorb a lot of statistical information about what we own, buy or click through. They can compute the properties and past returns of each financial instrument. But they cannot access the narratives, the changes in expectations, that make us who we are. As the saying goes, we know more about ourselves than we can tell, especially to a computer.
Our capacity to invest requires many skills acting in unison. We must plan savings, postpone consumption and execute investment plans. These are highly personal traits that we struggle to explain to a financial adviser, let alone through the prompts of a typical wealth planning website. The default choice, then, falls back on prescribing what used to work best, or the investment strategy that an adviser knows by heart, peppered with some insights from the chief investment officer. Typically, clients may end up with the 60/40 equity-bond portfolio with some tweaks. That hardly requires much AI insight.
Progress can be made by adapting AI to the ways in which the financial adviser works, not the other way around. AI should move beyond recommendation engines that simply keep pushing the same products that similar customers tend to buy. The programs should be flexible enough to take in more information from interactions with a client, making proposals intelligible to both the adviser and the investor. If a suggested portfolio cannot be explained in layman's terms, it will not be sold. If it fails to deliver the expected returns, advisers and clients need to understand why.
Wealth management firms must be mindful that all this means a different role for central planning. A CIO and coders could build a program flexible enough to capture most of these considerations. But inevitably, as the key prompts are decentralised to the client or the adviser, the engines will generate recommendations that deviate from the party line. This could complicate a push to sell high-margin products. There will also be new challenges in compliance and risk.
Looking through the telescope, if we can have a conversation with a program, in human terms, about how our life circumstances change, we would be entering a different domain. This is one of the promises of large language models, or more specifically AI agents. These will have access to our experiences through a combination of dialogue and the digital breadcrumbs we leave on the web. They would have enough context about us to interpret and execute what we want as life moves on. But it is hard to know how we will use these platforms, and how confident we will be in giving them access to our innermost privacy.
Until then, if that ever happens, many clients will keep trusting people to deal with their critical retirement and wealth management issues, even if some aspects of the old model may have to change. Human advisers may be increasingly assisted by AI but remain in control of the inquiry. But if Silicon Valley is right and AI agents progress to the point where our conversations with them are fluid enough to give us comfort, we may witness a new wave of industry disruption.