Venture Capital (VC) investments in AI
The VC industry has experienced an unusual degree of volatility in recent years. An unprecedented spike in VC investment activity in the years 2020 and 2021 was followed by a global downturn. Such volatility was mainly due to external factors (COVID-19, rising interest rates and inflation, geopolitical uncertainties, etc.).

There are currently signs of a general recovery in VC funding. Investors have been focusing on AI-driven technologies as the “next big thing”. AI start-ups, mostly in the field of generative AI, have been making headlines for securing significant amounts of funding.
European AI companies exemplifying this trend are (i) the French generative AI startup Mistral, which achieved a valuation of almost €6 billion in its latest funding round less than a year after its inception; (ii) the German AI champion Aleph Alpha, which reports having raised some US$500 million in its 2023 funding round; and (iii) UK-based Synthesia, which achieved unicorn status in its 2023 capital raise, backed by, among others, Nvidia.

What should investors look out for in the due diligence process?

VC investors evaluating AI companies must refine their due diligence process to assess AI-specific risks.
A rough distinction can be made between risks arising from the use of AI tools, which can potentially occur at any company, and risks arising from the development of AI tools, which are specific to (the increasing number of) companies developing their own AI tools.
The use-related risks include issues such as the following:
- The rights to the output generated by AI tools;
- Liability risks associated with using AI systems;
- Compliance with user obligations under the AI Act;
- The existence of any legal or technical restrictions on the intended use cases of the AI tools;
- The resilience of the target company in terms of cybersecurity and data protection compliance; and
- Any additional sector or country specific requirements.
The subject area becomes even broader in the context of due diligence for target companies developing AI tools. From a legal perspective, this raises questions in several areas such as:
- The ownership of the AI system, particularly when using open source components;
- The rights to the training data used by the AI;
- Compliance with provider obligations under the AI Act;
- Liability management within the company, including product safety compliance; and
- The existence and structure of infrastructure contracts related to the use of graphics processing units (GPUs) required for training and deploying AI models (e.g., GPU-as-a-service (GPUaaS) contracts).
The “black box” nature of many AI tools significantly complicates the process of conducting truly insightful due diligence. So how can we address this? My advice to investors is as follows:
- First, ensure close coordination between legal, commercial, regulatory, and technological due diligence teams. Whenever feasible, the different experts should share their findings with one another.
- Second, due diligence for AI companies requires a broader perspective than the traditional “backward-looking” approach that focuses on uncovering potential risks and liabilities hidden in the target company’s past, i.e., the proverbial “skeleton in the closet”. Due diligence for AI companies should extend to imminent and longer-term regulatory developments and also assess the business’s ability to adapt to changes in societal perceptions and moral standards.
- Third, in most cases, it is unrealistic to expect due diligence to be fully “completed”—meaning that all potential risks are identified and addressed—before the investment closes. Due to the dynamic nature of both AI-based technologies and their regulatory environment, investors, as engaged shareholders, must continuously monitor the venture’s risks and opportunities.
Contractual protections
Legal remedies such as representations and warranties (to address undetected risks) or indemnities (to cover known risks) may be insufficient and should be complemented by “forward-looking” instruments. These include a robust and comprehensive reporting system that covers not only financial KPIs but also regulatory, technological and compliance matters. Additionally, meaningful control and veto rights for investors or their representatives on the board should be established, particularly regarding material decisions related to the company’s AI strategy.
Valuations - How can investors avoid overpaying?
Startups are, by definition, young companies, whose valuations are not derived from past financial performance or incumbent market positions.
Their valuations are driven by the innovation potential of their business models, their ability to monetise the disruption of existing market structures and the entrepreneurial acumen of the founding team.
VC investors, therefore, use valuation methods that do not primarily rely on historical financial data but instead focus on other factors, such as the size of the potential target market, peer model comparisons, and the viability of the venture’s business plan.
This approach carries the potential for significant valuation misjudgements.
For AI startups, which face high development costs and relatively long go-to-market cycles, the risk of valuation misjudgements is even higher than for other early-stage investments.
The VC toolbox offers several instruments to mitigate valuation risks.
- First, rather than providing the entire financing amount in one go, certain portions of the funding can be tied to the company reaching specific milestones. These milestones should be outlined in the investment documentation and can relate to achieving certain financial KPIs (such as turnover, EBITDA or growth rates), the completion of specific product development projects or, more fundamentally, placing a viable product on the market.
- Second, startups are traditionally wary of having newly secured funding “earmarked” for specific spending items; they prefer a generic commitment in the investment documentation to use the funds for “general corporate purposes”. However, in the AI space, I regularly see financing tied to specific research projects or strategic partnerships with potential or current customers, which, in turn, help enhance the AI company’s products.
- Third, investors can safeguard against overvaluation by employing traditional down-round protection mechanisms, which grant them additional shares in the company if a subsequent funding round takes place at a lower valuation.
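To illustrate the arithmetic behind down-round protection, the sketch below implements one common variant, a broad-based weighted-average adjustment of the conversion price. This is a simplified, generic formula for illustration only; the function name and figures are hypothetical, and actual SHA clauses differ in how they define the fully diluted share count and treat option pools.

```python
def broad_based_wa_price(old_price: float, pre_round_fd_shares: float,
                         new_money: float, new_price: float) -> float:
    """Adjusted conversion price under a (simplified) broad-based
    weighted-average anti-dilution clause:
        CP2 = CP1 * (A + B) / (A + C)
    where A = fully diluted shares before the down round,
          B = shares the new money would have bought at the old price,
          C = shares actually issued at the new, lower price."""
    b = new_money / old_price   # shares new money would buy at the old price
    c = new_money / new_price   # shares actually issued in the down round
    return old_price * (pre_round_fd_shares + b) / (pre_round_fd_shares + c)

# Hypothetical example: preferred shares were bought at EUR 10; a later
# EUR 2m round is priced at EUR 5 with 1m fully diluted shares outstanding.
adj = broad_based_wa_price(old_price=10.0, pre_round_fd_shares=1_000_000,
                           new_money=2_000_000, new_price=5.0)
# Each preferred share now converts into old_price / adj common shares,
# i.e. the protected investor receives additional shares on conversion.
extra_ratio = 10.0 / adj
```

In this example the conversion price drops from EUR 10 to roughly EUR 8.57, so each preferred share converts into about 1.17 common shares instead of one; a full-ratchet clause, by contrast, would reset the price all the way to EUR 5.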
Governance considerations
AI will continue to be a highly dynamic field. The corporate governance of any VC-backed AI company should be set up to enable the company, its management and its stakeholders to respond flexibly to upcoming challenges and changing circumstances. Incoming investors may ask for certain pre-closing or post-closing covenants and undertakings to be included in the shareholders’ agreement (SHA) or other documents relevant to the company’s corporate governance.
- As a quasi-continuation of pre-deal due diligence, a set of covenants and undertakings could be focused on improvements in the company’s compliance policies and procedures. If found to be absent or wholly insufficient, the incoming investor may request the implementation of comprehensive policies and procedures related to the further development of the company’s AI technology. If those policies and procedures are already in place, the investor may ask for specific enhancements or updates, such as new provisions implementing new compliance obligations.
- Another set of undertakings could focus on the establishment of AI-specific corporate bodies. AI Governance Boards and AI Ethics Committees can be tasked with the ongoing review of rules and regulations and the continuous critical assessment of whether the company’s products and tools comply with applicable legal requirements, industry regulations, and moral standards. Such bodies can be a meaningful component of compliance “from within”.
- Given the vast array of applications for AI tools in the area of security, including defence and military technology, elements of the AI value chain may be sensitive from a national security point of view. This can lead to increased scrutiny by national or supranational regulators, the need to comply with investment screening procedures and limitations on the influence of incoming investors on the company’s business. These considerations can also be reflected in the company’s governance framework, e.g., by mandating comprehensive screening of potential incoming investors and providing for ongoing investment screening and supply chain compliance.
Talent retention is key
Like most emerging technology companies, AI startups heavily rely on attracting and retaining well-qualified and motivated leadership personnel and employees.
This is not only because technological excellence and entrepreneurial vision are key to a company’s future success, but also because developing and training AI tools requires significant human input.
Additionally, investors may decide to push for an attractive employee or management incentive scheme, which benefits a larger group of employees beyond the inner leadership circle. Giving “key employees” a share in the business (by way of “real” equity or virtual structures) can be a key tool to retain them. Most European jurisdictions now offer tax-efficient and more accommodating regulatory structures for such schemes.
While I do not want to overstate the practical implications of this effect, it might lead founders and senior management of successful and highly valued AI companies to emphasise non-monetary factors in their recruitment and retention efforts – the opportunity to be part of a truly innovative technological journey, a compelling mission statement or a sense of purpose when it comes to balancing what is technologically possible with moral and ethical standards and advancing the greater good of our European and global societies.
Felix Blobel is a partner at Noerr and co-head of the firm’s Private Equity and VC practice group.