Cross-border acquisitions of AI-focused targets in Germany — and across the EU — have surged over the past few years. These deals are complex in ways that standard M&A playbooks do not yet fully capture. Below are five areas that consistently come to the fore and deserve careful attention.
1. Warranties: AI Is a Category of Its Own - Also for W&I Insurance
Warranty negotiations in AI transactions look meaningfully different from standard technology deals. The areas that attract the most scrutiny — and the most robust protections for buyers — typically include:
- compliance with applicable (EU) AI laws such as the EU AI Act and internal AI commitments;
- ownership of and rights to relevant AI assets such as training datasets or AI agent orchestrations, including copyright clearance;
- the adequacy of AI controls addressing bias, hallucinations and confidentiality risks;
- the extent to which generative AI has been used in product development and what that means for IP ownership and the risk management of third-party IP infringements; and
- whether LLM provider agreements ensure compliance with EU data protection regulations and potentially professional standards, e.g., that customer data is not used for model training.
What makes these warranties significant is not just their subject matter, but the weight they carry in the transaction. Buyers frequently insist on stronger enforcement mechanics and longer limitation periods for AI-related warranty breaches — reflecting the difficulty of identifying these risks at the due diligence stage and the potentially significant impact they can have post-closing. Getting the scope and calibration of these provisions right requires both technical understanding and transactional experience. Even for targets that are not primarily AI businesses but use AI tools in their operations, these warranty topics are increasingly finding their way into deal documentation.
A related consideration is W&I insurance. The W&I market is still navigating an exploratory phase when it comes to AI-specific warranties. Insurers are beginning to factor in a target's AI governance frameworks and deployment practices when assessing coverage scope and premiums, though the breadth and complexity of AI makes standardised underwriting approaches difficult to develop.
AI-related representations — particularly around training data rights, regulatory compliance and generative AI use — are frequently subject to enhanced underwriting scrutiny or specific exclusions, with some insurers still applying broad AI exclusions that carve out any claim connected to the use, deployment or development of AI. Which coverage is ultimately achievable will depend on the individual transaction, the quality of the due diligence process and the specific insurer. Early engagement with experienced W&I brokers and counsel is therefore an important part of the process.
2. The EU AI Act: Know Where the Target Sits on the Risk Spectrum
The EU AI Act introduces risk-based classifications, and where a target's product or operations sit on that spectrum can shape the deal structure — from representations and warranties to price and post-closing obligations. High-risk AI systems face, inter alia, strict conformity assessments, technical documentation obligations and human oversight requirements. Phased compliance deadlines extend into 2026 and 2027, meaning acquirers need to assess not only where the target stands today, but what it will need to do — and at what cost — going forward.
3. IP: The Core Asset — and the Core Risk
For most AI targets, the product is the IP — which makes IP due diligence one of the most important parts of the process. Who owns the models, the training data, the outputs? Have employee inventions been properly transferred (and compensated) under German law? Has open-source code created copyleft exposure? Has generative AI been used in development in a way that creates uncertainty about IP ownership? These questions are technical, fact-intensive and jurisdiction-specific — and the last two are increasingly relevant for any technology target, not just those whose core business is AI.
4. Valuation and Earn-Outs: Growth Expectations Come at a Price
AI targets are typically valued on growth potential rather than current profitability. The gap between that potential and demonstrable current performance frequently leads to substantial earn-out components — deferred purchase price tranches tied to future revenue or other performance metrics. These structures are commercially sensible, but they require careful drafting: metrics, measurement periods, conduct-of-business protections during the earn-out period and dispute resolution mechanisms all need to be clearly and unambiguously defined in the SPA. Poorly structured earn-outs are a persistent source of post-closing disputes.
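The arithmetic behind an earn-out is where ambiguity tends to creep in — and where clear drafting pays off. A minimal sketch of a common structure (a linear earn-out between a revenue floor and a target, capped at a maximum tranche) is below; the metric, thresholds and cap are purely hypothetical illustrations, not a model clause:

```python
def earn_out_payment(revenue: float, floor: float, target: float, cap: float) -> float:
    """Illustrative linear earn-out (hypothetical parameters, not a model clause):
    no payment at or below the revenue floor, the full capped tranche at or
    above the target, and straight-line interpolation in between."""
    if revenue <= floor:
        return 0.0
    if revenue >= target:
        return cap
    return cap * (revenue - floor) / (target - floor)

# Example: EUR 10m floor, EUR 20m target, EUR 5m maximum earn-out tranche.
# Measured revenue of EUR 15m sits halfway between floor and target,
# so half the capped tranche (EUR 2.5m) becomes payable.
print(earn_out_payment(15_000_000, 10_000_000, 20_000_000, 5_000_000))  # 2500000.0
```

Even this trivial formula leaves open the questions the SPA must answer: how "revenue" is defined and measured, over what period, and who decides in case of disagreement.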
5. Non-Solicitation: Critical — but Legally Complex in Germany
In AI companies, the people are often as valuable as the technology itself. Non-solicitation clauses are essential tools for protecting that value post-closing — but under German law, their enforceability is significantly limited. A common pitfall in cross-border transactions is the use of broadly drafted non-solicitation provisions in the SPA: such blanket restrictions will frequently be impermissible, and the permitted duration of these clauses is a parameter that is consistently underestimated. Separately, post-contractual non-compete obligations for employees require the payment of a statutory compensation allowance of at least 50% of the employee's last contractual remuneration for the entire restriction period. This is a mandatory requirement with direct cost implications that must be factored in from the outset.
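The cost implication of the statutory compensation is easy to underestimate because it accrues for the entire restriction period. A deliberately simplified sketch of the minimum figure follows — the salary and duration are hypothetical, and in practice "remuneration" also captures variable components, so the real number is typically higher:

```python
def min_noncompete_compensation(monthly_remuneration: float, restriction_months: int) -> float:
    """Simplified illustration of the statutory minimum compensation
    (Karenzentschaedigung) for a post-contractual employee non-compete
    under German law: at least 50% of the last contractual remuneration,
    payable for the entire restriction period. Variable pay components
    and further nuances are deliberately ignored here."""
    return 0.5 * monthly_remuneration * restriction_months

# Hypothetical example: EUR 8,000/month fixed salary, 24-month restriction
# -> at least EUR 96,000 in compensation, per employee.
print(min_noncompete_compensation(8_000, 24))  # 96000.0
```

Multiplied across a handful of key engineers, this quickly becomes a line item that belongs in the acquisition model, not a footnote.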
Clauses that exceed permissible scope risk being declared void entirely. What works in a US or UK context will often not translate directly to a German deal. This challenge applies equally across technology M&A more broadly: any acquirer buying a talent-driven business in Germany needs to approach these provisions with care, early in the process.
