First Practical Steps
If I am put in charge of establishing an AI governance system in my company, what should be the first steps to take?
It is crucial to avoid making a mountain out of a molehill.
- AI regulation does not aim to set boundaries for all of AI. While the EU AI Act is a monumental piece of legislation, it is designed to regulate only a specific subset of AI systems.
- Intellectual property (IP) disputes have primarily centered on training large language models using allegedly copyrighted materials. However, many AI applications do not involve reproducing copyrighted content.
- Data protection and privacy are key legal and regulatory concerns, but not all AI systems process large volumes of personal data.
It is essential to quickly assess how much of the AI your company is using or planning to use falls within the scope of regulation or poses liability risks, and how important those higher-risk AI systems are to your business. If the issue is exaggerated, you risk losing trust within your organization, which would make your job significantly harder.
I recommend the following approach:
- Map out the AI activities your company is currently engaged in and those it plans to pursue.
- Identify risks, including activities classified as high-risk or prohibited categories under various regulations.
- Assess potential societal harm if these AI systems fail. For risk mapping, consider the nature of the data, especially consumer information related to children, health, or financial status. Misuse in these areas can cause severe harm and is more likely to attract regulatory scrutiny.
- Review past regulatory actions and litigation to gauge actual enforcement and liability risks.
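To make the mapping and triage concrete, below is a minimal sketch of what such an AI inventory could look like in Python. The risk tiers, field names, and example systems are illustrative assumptions, not classifications taken from the EU AI Act or any other regulation.

```python
from dataclasses import dataclass, field

# Illustrative tiers only, loosely echoing the EU AI Act's risk-based
# structure; these are not legal categories.
RISK_TIERS = ["minimal", "limited", "high", "prohibited"]

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_tier: str                # one of RISK_TIERS
    business_criticality: int     # 1 (peripheral) .. 5 (core to revenue)
    sensitive_data: list = field(default_factory=list)  # e.g. ["health", "children"]

def triage(inventory):
    """Surface the riskiest, most business-critical systems first,
    so governance effort goes where it matters most."""
    return sorted(
        inventory,
        key=lambda s: (RISK_TIERS.index(s.risk_tier), s.business_criticality),
        reverse=True,
    )

# Hypothetical inventory entries for illustration.
inventory = [
    AISystem("support-chatbot", "customer FAQ triage", "limited", 2),
    AISystem("loan-scoring", "credit decisions", "high", 5, ["financial"]),
]

for system in triage(inventory):
    print(f"{system.name}: {system.risk_tier} risk, sensitive data: {system.sensitive_data}")
```

Even a lightweight inventory like this forces the questions in the list above: which systems are in regulatory scope, what data they touch, and how much the business depends on them.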
Align Compliance Efforts with Business Priorities
You advise against surface-level compliance. Have you seen examples of this before?
In the lead-up to the GDPR¹, many companies approached compliance as a crisis to be managed rather than a long-term strategic shift. The pressure to meet deadlines led to reactive implementations – checklists were completed, policies were drafted, but many organizations failed to embed privacy principles into their operational culture.
A similar pattern emerged with California’s privacy laws, the CCPA² and CPRA³, where compliance often felt like an imposed legal burden rather than a business-driven initiative. The result was a weak connection to business goals and little real understanding of the laws’ purpose.
Organizations that had more success took a proactive approach, integrating data protection into product design and customer trust strategies. They leveraged regulations like the GDPR as an opportunity to differentiate their privacy stance, turning compliance into a competitive advantage rather than an operational cost.
AI governance has an opportunity to learn from these lessons. Instead of scrambling to meet regulatory deadlines, businesses should incorporate responsible AI practices before legal requirements mandate them. This means framing governance efforts around business growth, customer trust, and risk management – not just legal necessity. Weaving principles of responsible AI into the fabric of the business requires significant effort, as it will lead to new ways of operating. However, this effort is essential for lasting success. Simply ticking off checkboxes will not achieve meaningful change.
How can companies approach AI compliance differently?
Make this a business effort, grounded in your business strategy. If you are responsible for AI governance, frame compliance efforts in terms of business priorities. Why? Because if you explain how AI governance supports the company’s long-term success, business leaders are far more likely to engage.
When customers begin asking, “How are you using my data? How do you ensure that the algorithm works as expected?” it becomes clear that compliance is not just a legal issue—it is a trust issue. And anything that affects customer trust is a core business issue.
So, instead of treating AI compliance as just another regulatory requirement, position it as a strategic initiative that protects both the business and its customers. You must demonstrate business value.
Doing this can also be a competitive advantage and a differentiator. Customers, while eager to explore the value of AI, are also asking valid questions about the appropriate use of their data and about AI’s fairness, transparency, and accountability. The companies that get ahead of this, and can clearly articulate their efforts, will be the ones that retain customer trust and outpace competitors.
Implementation
How do you achieve this alignment in practice?
Start by understanding how a company’s existing standards for handling customer data compare with both regulatory requirements and customer expectations. Some businesses already embed strong data ethics and governance into their brand, while others take a more laissez-faire approach, assuming ownership over data without considering broader obligations.
- Companies with minimal data protection awareness may need to evolve their mindset from seeing data as a proprietary asset to understanding it as a shared responsibility with customers and regulators.
- Companies with strong internal standards may find that their approach exceeds legal requirements, providing a market advantage and a trust-building opportunity.
However, conflicts can arise when a company’s internal practices, regulatory mandates, and customer expectations do not align. In these cases:
- Identify areas of mismatch: Where does the company’s approach differ from legal obligations or customer expectations?
- Engage leadership: Shift the conversation from “compliance burden” to “business opportunity,” illustrating how responsible AI governance enhances customer loyalty and helps manage real legal risks.
- Compliance as competitive advantage: Positioning governance as a strategic asset rather than an operational necessity can help drive growth.
- Reduce operational complexity: Roll things out in phases rather than forcing abrupt shifts that disrupt operations.
For example, in SaaS companies, customer trust is critical. If customers perceive AI-powered decisions as opaque or unfair, they may switch providers—even if the company is technically compliant. As a result, some have adopted AI governance frameworks that emphasize transparency and accountability. These frameworks often include clear policies on data usage, algorithmic decision-making processes, and user consent mechanisms. By implementing these measures, these companies not only meet regulatory requirements but also foster transparency and fairness to maintain market credibility and build strong relationships with their customers.
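What those transparency and accountability measures might look like in practice varies by company; the sketch below is one hypothetical Python illustration of pairing a consent check with an auditable record of each AI-assisted decision. The function name, record schema, and log file are all assumptions, not a schema prescribed by any regulation or framework.

```python
import json
from datetime import datetime, timezone

# Illustrative only: the field names and consent check below are
# assumptions, not a prescribed compliance schema.
def record_decision(user_id, model_version, inputs, outcome, consent_given):
    """Append an auditable record of an AI-assisted decision, so the
    company can later explain what was decided, by which model version,
    and whether the user had consented to the data use."""
    if not consent_given:
        raise PermissionError(f"No recorded consent for user {user_id}")
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model_version": model_version,
        "inputs": inputs,          # ideally a minimized summary, not raw data
        "outcome": outcome,
    }
    with open("decision_audit.log", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

record_decision("u-123", "churn-model-v2", {"tenure_months": 14}, "retention_offer", True)
```

A record like this gives support and compliance teams a concrete answer when a customer asks why a decision was made, which is exactly the trust question raised above.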
If a business has an implicit understanding of these standards but has not documented them, how can IT, Legal, and Compliance still insert themselves into the discussion and add value?
You will usually find some reflections in marketing materials, customer Q&As, and broader brand messaging, even if they are not explicit. For instance, if a company emphasizes “security” or “reliability”, that is an entry point for further exploring what those terms mean in practice.
Start by asking strategic questions to uncover the company's priorities. For example, if healthcare data is collected or processed, work with business stakeholders to understand how customers perceive that risk. It could be as simple as asking the business: If we talked to our customers in healthcare, what would they tell us about our data standards and what is important to them?
By approaching it as a conversation, business leaders will start making connections they may not have considered before. At the same time, it helps identify cases where regulation might not be a major concern because the business would already be taking those measures anyway.
This conversation would also help determine whether the company primarily serves customers in an industry where data sensitivity is lower, allowing compliance efforts to be tailored accordingly.
I strongly advise those in charge of AI governance and compliance to deeply understand the customers and how the company wants to serve them, and then apply compliance in a way that makes sense.
Rather than imposing compliance from the top down, legal teams can guide the conversation, clarify uncertainties, and refine existing policies to align with regulatory expectations. When legal develops a deep understanding of the business and demonstrates it in conversations, the business is more likely to recognize legal's value and share information proactively.
Board Involvement & Review
What about the role of the board in seeking this alignment?
AI governance should be embedded within the board’s broader oversight of enterprise risk, ethics, and corporate strategy, alongside privacy, cybersecurity, and regulatory compliance. The board’s role is not to approve every AI-related decision but to ensure that executive leadership is accountable for AI risks and opportunities in alignment with business goals.
Boards often incorporate AI governance into their charters, requesting regular updates (quarterly or biannually). However, effective oversight goes beyond passively receiving reports; the board should be:
- Ensuring AI governance is embedded in Risk & Strategy: AI governance should be part of enterprise-wide risk management, with the board looking to see how management is anticipating potential risks and their corresponding mitigation strategies.
- Championing ethical AI use: Given the growing scrutiny on AI’s societal impact, boards can shape corporate culture by promoting responsible AI adoption—ensuring governance is not just a legal necessity but a business and ethical priority.
- Establishing clear accountability: Understanding who is responsible for AI risk mitigation, compliance, and decision-making across legal, security, compliance, and product teams.
- Integrating AI oversight into existing board committees: Boards can embed AI-related discussions within audit, risk, ethics, and corporate responsibility committees to ensure alignment with broader corporate oversight.
How can AI governance remain agile?
It is unrealistic to expect AI governance to remain unchanged for more than a year; AI technology, politics, and markets all move too fast.
I recommend that global businesses establish regular review cycles to ensure their policies and processes remain fit for the evolving regulatory and policy landscape and stay aligned with the company’s AI strategy at the time.
However, this does not mean you will face the same level of effort each time governance is updated. The biggest challenge is demonstrating how these evolving rules impact the business. But once leaders see that alignment, decision-making becomes much smoother—especially at the board or C-suite level.
Alyssa Harvey Dawson is a seasoned legal and business executive with over 25 years of experience advising companies at the intersection of technology, law, and innovation. She has served as Chief Legal Officer and General Counsel for multiple technology companies and is a board member of a technology company. Alyssa is also a member of the Editorial Committee at 20Minds.
Sources
1. The General Data Protection Regulation (GDPR) is a comprehensive EU law that governs the processing of personal data, aiming to enhance individuals' control over their information and harmonize data protection standards across member states.
2. The California Consumer Privacy Act (CCPA) is a state law that grants California residents rights over their personal data, including the ability to access, delete, and opt out of the sale of their information.
3. The California Privacy Rights Act (CPRA) of 2020 is a state law that amended and expanded the CCPA, introducing stricter data privacy requirements, additional consumer rights, and the establishment of the California Privacy Protection Agency (CPPA) for enforcement.