What Early AI Investigations Teach Us
Not everything is new when it comes to regulatory investigations into AI systems. Federal and state authorities have conducted investigations in the past, leveraging existing powers. What can we learn from these early cases?
Cynthia: First, anticipate actions from multiple agencies. AI is not governed by a single regulatory body. In the U.S., the Department of Justice (DOJ), Federal Trade Commission (FTC), Consumer Financial Protection Bureau (CFPB), and Securities and Exchange Commission (SEC) have all taken action. In Europe, enforcement has largely been driven by data protection authorities, such as Italy’s Garante and the UK’s ICO.
Second, regulators do not focus solely on companies; they also target individuals in leadership roles. For instance, when the SEC conducts investigations, its scrutiny often extends to the CEO or CFO, raising the risk of personal liability. Similarly, California’s AI impact assessments, which are expected to require individual sign-offs, could further heighten this focus on individual accountability.[1]
Third, regulatory scrutiny is not limited to edgy or disruptive companies. Even well-established businesses with good intentions can come under investigation.

AI Regulation Under Trump: Deregulation or a New Playbook?
Do you expect federal enforcement activity on AI to change under a Trump administration?
Cynthia: I believe the new administration’s approach to AI regulation will be nuanced.
Yes, President Trump has repealed Biden’s executive order on AI. He has also appointed free-market advocates to lead agencies such as the FTC and SEC. This points toward lighter regulation of AI specifically.
However, President Trump has a history of criticizing Big Tech. He may push for stricter antitrust enforcement, a repeal of Section 230 of the Communications Decency Act (CDA),[2] or trade policies targeting technology companies. He has also proposed banning the use of AI to censor speech.
Regardless of the administration, bias, consumer fraud, and deception remain central concerns for agencies like the Equal Employment Opportunity Commission (EEOC), FTC, and SEC. These priorities are unlikely to change, and the agencies can use their existing authorities to enforce them.
I doubt we will see broad-spectrum federal AI regulation similar to the EU AI Act.
Are U.S. states going to step in to regulate AI?
Cynthia: We will have to see. Most efforts to date have focused on specific industries (e.g., health care) or specific concerns (e.g., AI profiling).
This may change, however. We could end up with a patchwork of AI regulations across states, similar to the situation in data privacy. Companies would not find that helpful.
EU Regulation: Compliance or Confrontation for U.S. Companies?
The EU has introduced the EU AI Act and will seek to enforce it against U.S. companies, which currently command the largest market shares in foundation models, AI infrastructure, and applications. Will U.S. companies comply, or will they resist the EU’s enforcement efforts with the backing of the Trump administration?
Cynthia: It is true that the Trump administration may adopt a less conciliatory approach toward European regulators. Trump and his team could regard some investigations as protectionist. It is also true that trade policies could be leveraged to counter perceived disadvantages faced by U.S. businesses.
Yet, U.S. companies are unlikely to ignore EU regulations outright. They will weigh enforcement risks, potential fines, and reputational harm when deciding whether to comply. There are strategic and tactical business decisions to be made.
I would note that companies are increasingly fatigued by the growing burden of compliance demands, ranging from the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) to the steady stream of new digital regulations emerging from the EU.
Shirley Fodor, Editorial Committee Member: “There needs to be greater transparency and collaboration between those developing AI systems and the regulators. We are increasingly seeing 'knee-jerk' regulation, where regulators do not fully understand the nature of the risk (if there is one) or the broader impact of the regulation. A regulation never affects just one area—it creates ripple effects across multiple domains. These impacts must be properly assessed to ensure the regulation is fit for purpose, effectively addresses the underlying issue, and does not introduce new problems, such as overlaps or contradictions with existing regulations.”
Managing Global Enforcement Risks
Are there areas where regulators are particularly strict, fines are high, or management can be held personally liable? What should boards focus on?
Cynthia: Experienced boards focus on actual risks rather than hypothetical problems.
So, what are regulators likely to focus on?
First, AI systems leveraging health, financial, or employment data seem particularly sensitive. This is because governments have always been concerned about the misuse of such data. Where AI exacerbates the risk of misuse, you can expect regulators to intervene.
Second, biometric data—especially facial recognition and deepfake tools—will also come under scrutiny. The potential for manipulation and severe breaches of individual privacy seems significant here.
Third, AI in recruitment and employee management is a high-risk area too. Use of AI here can affect career prospects and the financial well-being of individuals.
So, the highest-priority risks arise when the use of AI systems threatens consumers’ health, personal integrity, or financial well-being. These are the areas regulators focus on.
Are there company-specific factors that signal a high risk of scrutiny?
Cynthia: Companies often fail to consider how regulators or the public perceive them. In my experience, this can lead to serious misjudgments of risks.
Boards should consider perception and ask:
- How are we perceived in the market?
- Why might regulators see us as a test case or enforcement target?
Global vs Local Solutions to AI Governance
Should companies centralise AI governance globally or adapt it regionally?
Cynthia: A global approach is appealing for multinationals. However, it is often impractical. Distinct regulatory environments make it challenging to implement a unified strategy.
A hybrid model is more effective: establish global baseline policies while supplementing them with regional frameworks tailored to local legal risks.
Does global coordination on AI governance increase the risk of liability spreading across jurisdictions?
Cynthia: I would not recommend prioritising legal considerations alone; the focus should be on managing risks effectively.
Violations in one jurisdiction often reveal broader organisational risks. Addressing these risks comprehensively should be the priority.
First, privacy breaches, bias, and societal impacts are universal concerns, even if local compliance requirements vary. Tackling an AI-related issue in one jurisdiction and learning from it reduces risks for the entire organisation. This should be the primary objective.
Second, global perception is critical. Information spreads quickly, and consistent values across jurisdictions are just as important as meeting compliance requirements. Companies need to consider how they are perceived in different markets—being a market leader in one region while seen as a disruptive maverick in another can heighten regulatory sensitivities and investigation risks.
Cross-jurisdictional and cross-functional teams can help ensure that local insights shape global strategies. Open, two-way communication is key to bridging any gaps effectively.
How to Respond to Investigations
What should companies do when facing an investigation?
Cynthia: When an investigation is imminent, the company’s technology and legal teams should immediately implement a litigation hold[3] to prevent routine data deletion and preserve relevant evidence.
Proactively enforcing a litigation hold can reduce the risk of spoliation claims and enhance credibility with regulators.
For AI systems, preservation efforts should extend to historical model versions and include the secure storage of AI training data and the disabling of automated log purges.
The technology team should also provide legal counsel with access to system logs, training materials, and documentation to address regulator inquiries effectively.
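To make the preservation step concrete, the sketch below is a hypothetical illustration (not part of Cynthia’s remarks) of how a technology team might place a legal hold on cloud-stored training data and pause automated log purges. It assumes the data sits in AWS S3 buckets with Object Lock enabled; the bucket names and prefix are illustrative only, and any real hold would be scoped and documented with counsel.

```python
# Hypothetical sketch: flag AI training data with an S3 legal hold and
# remove lifecycle rules that would otherwise purge logs automatically.
# Assumes the buckets were created with Object Lock enabled; names are
# illustrative, not a real environment.
import boto3

s3 = boto3.client("s3")

TRAINING_DATA_BUCKET = "example-ai-training-data"  # illustrative name
LOG_BUCKET = "example-ai-system-logs"              # illustrative name


def place_legal_hold(bucket: str, prefix: str = "") -> None:
    """Mark every object under the prefix with an S3 legal hold so it
    cannot be deleted until the hold is explicitly lifted."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            s3.put_object_legal_hold(
                Bucket=bucket,
                Key=obj["Key"],
                LegalHold={"Status": "ON"},
            )


def suspend_automated_purges(bucket: str) -> None:
    """Delete the bucket's lifecycle configuration so expiration rules
    (routine log purges) stop running for the duration of the hold."""
    s3.delete_bucket_lifecycle(Bucket=bucket)


if __name__ == "__main__":
    place_legal_hold(TRAINING_DATA_BUCKET)
    suspend_automated_purges(LOG_BUCKET)
```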

What is the board’s role in responding to investigations?
Cynthia: The board should take a proactive and hands-on approach to governance during an investigation.
It should ensure that management has implemented a litigation hold to prevent the destruction of relevant records.
If there are potential conflicts of interest, the board should establish a special committee of independent directors to oversee the investigation. Engaging external counsel is critical to maintaining objectivity and independence in addressing the matter.
Directors should receive regular, transparent updates on the investigation’s progress and ensure that all mandatory disclosures and reporting obligations are met.
The board must also ensure clear communication with stakeholders while protecting legal privilege where necessary.
Ultimately, the board’s role is to maintain independent oversight, uphold its fiduciary responsibilities, and safeguard the organisation’s credibility during regulatory scrutiny.
Cynthia J. Cole is a partner and Global Chair of Commercial, Technology & Transactions at Baker McKenzie. Previously, Cynthia served as CEO and General Counsel of Spectra7 Microsystems (TSE: SEV), a leading company in VR/AR and data centre applications.
Sources
1. The California Privacy Protection Agency (CPPA) has proposed Draft Regulations requiring businesses to conduct risk assessments with clear accountability. Businesses must identify and document the names, titles, and roles of all internal and external stakeholders involved, as well as the names, positions, and signatures of those responsible (including the highest-ranking executive) for reviewing and approving the assessment. Other states have similar requirements. Internal risk assessments may continue to grow in popularity, not least because they create a paper trail of accountability for regulators to review in the event of an inquiry.
2. Section 230 of the Communications Decency Act (CDA) is a key U.S. law that shields online platforms from liability for content posted by their users. It allows platforms to moderate content in good faith without being treated as publishers or held legally responsible for user-generated content. Often referred to as the "backbone of the internet," it enables platforms to host diverse opinions while avoiding excessive legal risk.
3. A litigation hold is an internal company directive to preserve all relevant documents, data, and information for a legal investigation or lawsuit. It suspends routine deletion or modification of potentially relevant materials, ensuring evidence remains intact.