The use of artificial intelligence by professional services firms is increasing, and with it the legal issues and the rise in professional indemnity claims that will inevitably follow

An examination of the root causes of professional liability claims often reveals that the same familiar errors are to blame. For example, a failure to identify the client and a lack of supervision have been in the top 10 causes of solicitors’ claims for more than 40 years. However, these issues, and the risk management tools to address them, are being stretched and challenged by the shifting sands of modern business practice and are now being judged against a backdrop of increasingly complex client and internal structures.

The main enabler or disrupter for professional firms is technology, the latest development being the advancement of artificial intelligence (AI), one of the hottest trends in professional services. Predictions as to what computers will achieve are quickly becoming today’s reality.

Traditional software operates and makes decisions in a way that can be directly traced back to the programming, coding and design of the humans that created it.

True AI, on the other hand, is cognitive technology that is capable of processing and analysing data to teach itself optimum performance and mimic human thought processes.

The benefits of AI to professional services firms are clear. AI can be used to automate document- or data-heavy work, and is regarded as having the potential to operate as a “risk reducer”. AI may also be the future of cyber security: predictive technologies that are able to automatically detect and respond to malicious activity may assist where existing protections cannot keep pace with the changes in the cyber threats firms face.

The use of AI is growing at an exponential rate, as are the legal issues that inevitably follow. Some of these will be familiar, such as the need to ensure software providers preserve client confidentiality and observe data security requirements, and the risk of an error being repeated multiple times in volume work. Others are more novel.

Standard of care

Firms are well versed in the arguments that may arise as to the standard of care owed by them when they agree to do work quickly or for a fixed price, with the courts generally reluctant to accept that such factors lower the standard of care. However, if it is agreed that a task will be carried out quickly and cheaply using AI, what (if any) level of human supervision is the client entitled to expect?

The scoping, in the retainer letter, of what the firm will and will not do and avoiding mission creep thereafter will be more important than ever. Likewise, issues will arise about how professionals should satisfy themselves as to the quality of the work product and who is responsible for errors that arise from failings in the software or failures in understanding how best to use it.

It is difficult to envisage a court absolving a firm of all responsibility for the failure of a piece of software, any more than it would in the case of an error made by a junior employee or outsourced provider. That is so even where the firm is not responsible for the development of the software or where the problem has been caused by a failure to use the software properly due to inadequate training from or supervision by the software provider.

In addition to scrutinising the terms of engagement with their IT suppliers, firms will need to review their own terms to ensure that the work to be undertaken is properly scoped and that the terms include liability-limiting provisions such as proportionate liability clauses. Firms may also consider liability caps to limit risk while AI is still developing.

Incorrect outcome

Firms will also have to grapple with the risk implications of the systems they have chosen, whether bespoke or “out of the box”. Supervised machine learning systems will be limited by the quality of the supervision, and while a rules-based system can be hugely effective, the more complex the system, the more likely it is that one variable will result in an incorrect outcome, potentially many times over.

AI will change the way firms operate in the next five to 10 years. It seems inevitable there will be some streamlining and redeployment of professionals. While the automating of more “mundane” aspects of practice will free up professionals to work on more complex and intellectually stimulating tasks, risk managers and those responsible for training will have to consider how to ensure professionals still gain the necessary grounding in the basic skills of their profession and that quality and standards are maintained. We may also see firms operating as multi-disciplinary businesses, which brings its own risk management challenges.

Professional indemnity insurance will continue to adapt to the changing nature of professional firms, with premium charging based on headcount coming under review and proposal forms seeking information on how firms manage AI risks.

Overall, there is a sense the professions are at the start of a dramatic shift in how they operate. Those involved in risk and liability may face some challenges along the way if they are to stay ahead of the curve.