As artificial intelligence (AI) becomes increasingly integrated into business operations, travel companies are beginning to question whether they need to consider the EU AI Act. The short answer is: it depends on how your business uses AI.
Most travel companies aren’t developing AI themselves; instead, they are purchasing or commissioning AI systems for various business functions. This means the Act’s relevance hinges on your use case, particularly on whether your AI systems fall within the scope of the legislation.
Where does AI fit in travel businesses?
AI can be used in many ways across a travel company’s operations:
- Internal use: for example, AI-powered HR systems that assist with recruitment or employee management.
- Booking and inventory management: AI might be used to curate travel options, optimize inventory selection, or provide booking recommendations.
- Customer-facing tools: chatbots or virtual assistants that interact directly with customers on websites.
Each of these use cases carries different levels of regulatory attention under the EU AI Act.
What does the EU AI Act do?
The Act sets out rules for AI usage based on risk categories. It prohibits certain AI practices outright, places strict obligations on high-risk systems, and applies lighter requirements to those posing limited or minimal risk.
Travel companies are likely to be considered “deployers” of AI, meaning they implement AI systems rather than develop them. It’s important to note that the Act is an EU regulation, but it has extraterritorial reach. UK companies, for example, fall under its remit when dealing with EU customers or agents.
Categorizing AI systems by risk
The risk categories under the Act are:
- Prohibited systems: these are not permitted at all. Examples include systems that identify individuals in real time (e.g. using facial recognition) in publicly accessible spaces, and systems that use subliminal techniques to distort a person’s behaviour or decision-making without their awareness. This category is very unlikely to affect travel companies directly.
- High-risk systems: these relate, for example, to employment or personnel management. If your HR system uses AI to evaluate candidates, it could be classified as high risk.
- Limited or minimal risk: these include systems such as customer-facing chatbots.
High-risk AI systems come with stringent obligations for deployers, including record-keeping, appointing responsible personnel, and maintaining compliance documentation. For limited-risk systems such as chatbots, the focus is on transparency: customers must be informed that they’re interacting with AI rather than a human.
Practical steps for travel companies
- Understand if your AI use is in scope: are you deploying AI systems that fall under the Act’s definitions? The provider of the AI solution should ideally conduct this analysis, but it’s crucial your company knows its responsibilities.
- Determine your role: are you a deployer or a provider? Deployers still have compliance obligations they need to be aware of.
- Review contracts carefully: when procuring AI systems, scrutinize contracts to ensure compliance with the Act. Some providers might be unaware of these regulations, especially those outside the EU, like many US tech suppliers.
- Prepare for ongoing compliance: the Act is being implemented gradually, with staggered deadlines. Non-compliance can lead to penalties similar in scale to GDPR fines.
