Applied to financial services, Isaac Asimov’s first law of robotics would perhaps go something like: A robot may not injure a human being[’s investment portfolio] or, through inaction, allow a human being[’s retirement savings] to come to harm.
This is attractively simple on the surface, but you probably don't need to have read any of Asimov's books to realise that in practice, things are never quite so easy…
A recent report by the European Supervisory Authorities (covering banking, insurance and investment management) notes the slow growth in robo-advice across the EU, particularly among new market entrants, and suggests that it may be explained by another (less sci-fi) foundational rule: technological neutrality.
The European Commission’s stance on fintech is based on three core principles: (1) technological neutrality; (2) proportionality; and (3) market integrity. The first of these requires that regulation neither mandates the use of a particular type of technology nor discriminates in favour of one. As stated in the European Supervisory Authorities’ report, no national competent authority has reported any new domestic legislation covering robo-advice.
This means that any offering of robo-advice needs to meet the standards applicable to human advisors, the key requirements for investment firms being a proper assessment of suitability and appropriateness. “Suitability” looks at the current financial situation of a client as well as their investment objectives. “Appropriateness” requires firms to look backwards and assess the pre-existing knowledge and experience of the client.
Suitability and appropriateness are not always simple to assess – it can be difficult enough for a human to assess them while sitting face-to-face with a client and while armed with data from the firm’s previous interactions with that client. Firms are expected to take into account client-specific factors which are far from concrete, including the client’s investment knowledge and experience, wishes, goals and risk tolerance.
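To make the distinction between the two tests concrete, the logic might be sketched as two separate checks. This is a purely hypothetical model for illustration; the thresholds, field names and pass/fail rules are invented and do not reflect any firm's actual methodology or the regulatory criteria in full:

```python
from dataclasses import dataclass

@dataclass
class ClientProfile:
    # Suitability inputs: the client's current situation and objectives
    net_assets: float
    risk_tolerance: int          # hypothetical scale: 1 (low) to 5 (high)
    # Appropriateness inputs: pre-existing knowledge and experience
    years_investing: int
    traded_products: set         # e.g. {"equities", "funds"}

def is_suitable(client: ClientProfile, product_risk: int,
                min_investable: float) -> bool:
    """Forward-looking test: does the product fit the client's
    current finances, objectives and risk tolerance?"""
    has_capacity = client.net_assets >= min_investable
    within_risk = product_risk <= client.risk_tolerance
    return has_capacity and within_risk

def is_appropriate(client: ClientProfile, product_type: str) -> bool:
    """Backward-looking test: does the client have relevant
    prior knowledge and experience of this kind of product?"""
    experienced = client.years_investing >= 2   # invented threshold
    knows_product = product_type in client.traded_products
    return experienced and knows_product

client = ClientProfile(net_assets=50_000, risk_tolerance=3,
                       years_investing=5, traded_products={"equities"})
print(is_suitable(client, product_risk=2, min_investable=1_000))  # True
print(is_appropriate(client, "derivatives"))                      # False
```

The point of splitting the two functions is the one the regulation itself makes: a client can pass one test and fail the other, so a robo-adviser cannot collapse them into a single score.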
Even for established firms, making this assessment often requires frequent interaction with the client as well as drawing upon the firm’s records of previous interactions and assessments to be able to form a more complete picture. So it is perhaps unsurprising that for new robo-advisory firms building up a fully substantiated picture from scratch, progress is slow.
These difficulties were reported by the FCA earlier in 2018, as a review of firms offering online discretionary investment management services as well as robo-advice found that many “did not properly evaluate a client’s knowledge and experience, investment objectives and capacity for loss”.
More problematically, “some firms did not ask clients about their knowledge and experience at all, as they felt their service was suitable for all individuals regardless of their investment knowledge and experience”, showing that even where the regulatory requirements applying to robots and humans are one-size-fits-all, not everyone will play by the rules.
The report concludes that automated advice is still at an early stage, and that the phenomenon is not equally present across the insurance, banking and investment sectors, currently having greater prominence in the investment sector. The ESAs also note that financial advice in general is already addressed in various ways through a number of EU Directives.