Chatbots are growing more prevalent in retail and other consumer-facing industries as they improve customer service, speed up response times and automate repetitive tasks. This editorial explores the legal issues surrounding chatbots and suggests some practical measures companies may take to mitigate such issues.

What are chatbots?

A portmanteau of 'chat' and 'robot', chatbots are software programmes capable of engaging human users in simple conversation. Simple chatbots respond to keywords with answers that are pre-programmed into their systems, while smart chatbots use artificial intelligence to process the communication between the bot and the human user.
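
By way of illustration, a 'simple' chatbot of the kind described above can be little more than a lookup of pre-programmed answers keyed to words in the user's message. The Python sketch below is purely illustrative: the keywords, canned responses and function names are assumptions, not any particular vendor's implementation.

```python
# Minimal sketch of a keyword-based 'simple' chatbot.
# All keywords and canned answers here are hypothetical examples.
KEYWORD_RESPONSES = {
    "opening hours": "We are open 9am to 6pm, Monday to Saturday.",
    "refund": "Refunds are processed within 14 days of receiving the returned item.",
    "delivery": "Standard delivery takes 3 to 5 working days.",
}

FALLBACK = "Sorry, I didn't understand that. A member of our team will contact you."


def reply(user_message: str) -> str:
    """Return the first pre-programmed answer whose keyword appears in the message."""
    text = user_message.lower()
    for keyword, answer in KEYWORD_RESPONSES.items():
        if keyword in text:
            return answer
    return FALLBACK


print(reply("When will my delivery arrive?"))  # -> "Standard delivery takes 3 to 5 working days."
```

A 'smart' chatbot replaces this fixed lookup with a trained language model, which is what gives rise to the less predictable behaviour discussed later in this editorial.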

Due to their interactive nature and capability for machine learning, chatbots are mainly deployed in the customer service sector. For example, chatbots allow customers to place orders for coffee or book cab rides via voice command or text messaging. Other, more innovative uses include companions for dementia patients and chatbots designed to streamline medical diagnosis.

Legal issues to consider

Regulated industries/activities

Business owners should consider whether their business activity, or the industry in which they operate, is regulated. Providers of healthcare and financial services need to consider issues posed by the regulatory framework within which they operate, as more stringent requirements may apply. For example, financial services providers may need to consider the implications of banking secrecy laws for customer information collected by chatbots.

Consumer laws

Further, as chatbots tend to provide services directly to consumers, businesses need to understand the consumer protection and advertising laws and regulations applicable to them, and be mindful that their chatbots do not inadvertently breach those rules.

Data protection

Chatbots process and analyse large volumes of data, which may well include personal data. As such, data protection issues are relevant. It is worth noting that the EU's GDPR gives a data subject the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them.

Infringement of third party rights

There is the possibility that smart chatbots with system-integrated AI may infringe third party rights such as copyright and trademarks. Certain chatbots are able to search the internet for suitable content in order to provide answers to their human users. However, such content may be subject to copyright or trademark protection and, as such, any use without the express permission of the rights holder may constitute infringement.

Rogue chatbots

Further, there is the possibility that an AI-equipped chatbot goes 'rogue' and produces incorrect or harmful responses.

What you can do

Programming

To mitigate the risks identified above, chatbots should be programmed to comply with the relevant regulations, industry standards and codes of practice, and updated regularly to reflect the current position of the law. A breach of these regulations may have serious legal implications for the business owner, including criminal liability.
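
To make this concrete, one common programming measure is a rule-based guardrail that screens a chatbot's draft response before it is sent, blocking or escalating anything that strays into regulated territory. The sketch below is a simplified illustration only: the prohibited-topic list, referral messages and function name are assumptions and would need to reflect the regulations actually applicable to the business.

```python
# Illustrative guardrail: screen a draft chatbot reply before it reaches the user.
# The topic list and escalation messages are hypothetical examples, not legal advice.
PROHIBITED_TOPICS = {
    "investment advice": "Regulated financial advice must come from a licensed adviser.",
    "diagnosis": "Medical diagnosis must be referred to a qualified practitioner.",
    "account number": "Banking secrecy rules restrict discussing account details here.",
}


def screen_reply(draft_reply: str) -> tuple[bool, str]:
    """Return (approved, message); blocked drafts are replaced with a referral notice."""
    text = draft_reply.lower()
    for topic, reason in PROHIBITED_TOPICS.items():
        if topic in text:
            # Escalate to a human agent instead of sending the draft.
            return False, f"This query needs human review: {reason}"
    return True, draft_reply


approved, message = screen_reply("Based on your profile, here is some investment advice...")
print(approved, message)  # False, plus a referral message instead of the regulated content
```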

Policies on chatbots

Further, businesses that deploy chatbots should have internal policies in place that govern, among other things:

1. the permitted activities of the chatbot

2. the extent of information that is fed to the chatbot and the methods by which information is provided

3. data collection and processing by the chatbot

4. the maintenance of the chatbot, e.g., how frequently the chatbot is updated and how often the information available to or collected by it is reviewed

5. the monitoring of chatbot activities and the possible triggers for human intervention

6. the implementation of a mechanism to manage any complaints or public concern in relation to the chatbot.

Website terms and conditions and disclaimers

It may be appropriate to include a disclaimer in the website's terms and conditions stating that the services provided by the chatbot are computer-generated and not moderated or checked, and reminding users to verify all relevant information, whether provided by the user or by the chatbot.

Conclusion

As chatbots and other AI-driven technologies continue to gain traction, there is an underlying concern about the potential risks and harm arising from AI. Regulators continue either to fine-tune existing laws or to implement new laws to govern AI-driven technologies. Chatbot owners should keep abreast of the ever-changing legislative landscape to manage their risks.