The European Parliament's leading Committees have adopted a provisional deal on the AI Act that diverges from the Commission's proposal. The definition of AI systems is broadened to align with the OECD's, while the scope of high-risk systems is narrowed. Amendments include new high-risk categories, additional banned practices, and general ethical principles. Obligations are imposed on providers of foundation models, and governance of the AI value chain receives greater emphasis. Responses have been mixed, with businesses raising concerns about innovation. The Act's progress allows companies to anticipate the final text and prepare for implementation, and makes clear that stakeholders in high-risk systems may face additional work.
On Thursday 11 May, the leading parliamentary Committees of the European Parliament for the AI Act adopted their text (a provisional political deal reached by Parliament two weeks prior) by a large (84-7) majority. The plenary vote is now scheduled for mid-June. If Parliament votes in favour of the text, this will be the basis to enter the subsequent trilogue negotiations under the Spanish presidency with the Council and the Commission. A final compromise text is expected at the beginning of 2024.
The text adopted on Thursday includes some major divergences from, and additions to, the Commission’s proposal and the Council’s position.
The definition of ‘AI system’ is broader than in, especially, the Council’s position. The intention is to align more closely with the OECD definition, widening the potential scope of systems to be regulated. At the same time, that scope is also narrowed: the text provides that systems falling within one or more of the areas or use cases in Annex III should only be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. This addition is reminiscent of the Council’s position and is intended to ensure that AI systems unlikely to cause serious fundamental rights violations or other significant risks are not captured. If providers deem there is, in fact, no significant risk, they must notify the relevant national supervisory authority or the AI Office, which will have three months to object. Providers that misclassify their systems and place them on the market before this deadline will be subject to penalties.
The Annex III list of stand-alone high-risk AI systems is also amended, including, e.g., recommender systems of very large (social media) online platforms as a high-risk category. There is also a significant amendment to the list of prohibited practices, including, for example, a total ban on real-time remote biometric identification in public spaces. Moreover, the text includes a best-effort obligation for all operators to develop and use any AI system following six general principles for trustworthy and ethical AI (e.g., transparency, human agency and oversight, privacy and data governance, diversity and social and environmental wellbeing), inspired by the High-Level Expert Group on AI’s Ethics Guidelines.
Additionally, the adopted text imposes extensive obligations on providers of foundation models such as GPT-4, not included in either the Commission’s or Council’s positions. These rules will complement the rules imposed on high-risk systems and include an obligation for providers to make available a sufficiently detailed summary of the use of training data protected under copyright law.
The adopted text also places more emphasis on AI value-chain governance. For example, the use of unfair contractual terms in the AI value chain imposed on SMEs or startups is prohibited, and the Commission will propose model clauses to guide contracting. A closer look at the AI value chain has also resulted in the imposition of new obligations on deployers, such as the obligation to implement appropriate technical and organisational measures and to carry out a fundamental rights impact assessment.
Finally, Parliament is proposing to increase the maximum penalties for infringement of the Artificial Intelligence Act to €40 million or 7% of total worldwide turnover for the preceding financial year.
Impact for business
The responses have been mixed, with civil society groups seeming more optimistic than businesses, which have expressed concerns that innovation will be hampered.
In any case, the adopted text indicates that the AI Act is coming together, making it easier for companies to anticipate the final text and start preparing for its implementation. It also makes clear that all parties in the value chain of (potentially) high-risk systems may have work to do, perhaps more than expected.