AI Governance—Key Takeaways From the 2026 IAPP Global Summit
AI dominated the conversation at the 2026 IAPP Global Summit, as it has for the past several years.
This post distills themes about AI governance from across the summit into practical guidance for businesses navigating high-stakes AI compliance questions.
Setting the Stage: The Current Regulatory Landscape
As many panels at the summit noted, AI regulation is evolving rapidly. In 2025 alone, over 1,000 AI bills were proposed across U.S. state legislatures. As a result, a single AI system can trigger overlapping legal and governance obligations in multiple jurisdictions, which makes understanding the applicable requirements essential to managing both legal and operational risk.
At the state level, the picture is layered. In California, for example, multiple new AI-specific laws are now on the books following legislative activity in late 2024 and 2025. The Colorado AI Act is currently scheduled to take effect June 30, 2026, and will require deployer impact assessments, notices, correction rights, and human-review appeals (although proposals to amend the law continue, as evidenced by the Colorado AI Policy Workgroup’s support for a revised policy framework to rework it). Many other states, including New York and Texas, have passed AI laws. And the landscape is not limited to new AI-specific laws: most state comprehensive privacy laws require assessments when profiling could cause foreseeable financial, reputational, or physical harm, and AI and other automated systems may trigger these obligations. In California, new CCPA rules took effect in January 2026, requiring risk assessments for certain processing activities and imposing automated decision-making requirements to be phased in by 2027.
At the federal level, the Federal Trade Commission continues to scrutinize advertising claims about the capabilities and efficacy of AI, as reflected in recent enforcement actions. Likewise, in February the Department of Justice brought an enforcement action alleging that AI-generated job ads excluded U.S. citizens from consideration, demonstrating that liability can attach when AI produces ad content. Internationally, the EU AI Act is already in effect, with implementation guidance and further waves of obligations continuing to roll out (e.g., high-risk AI system obligations become applicable in August 2026). Across jurisdictions, common enforcement expectations are emerging around documentation, substantiation, human review, vendor oversight, and change management.
The Dust Hasn't Settled, but Legal Risks Still Demand Action Now
The regulatory landscape is multi-layered and moving fast. Enforcement and litigation increasingly focus not just on policies but on how AI systems operate in practice. In this environment, however, waiting for the dust to settle can itself be a risk. One panel emphasized that key governance practices, such as tracking system behavior, validating outputs, managing vendor relationships, and updating controls as systems evolve, are steps organizations should take now. Static compliance programs may struggle to keep pace with continuously evolving AI systems, but organizations can still be on the hook in the event of AI-related harms.
Adaptive Governance and Practical Guidance
A panel noted that AI systems evolve so quickly that the system counsel reviewed may no longer match the live system only a short time later. It may have the same name and the same interface, but it may now exhibit different behavior, data use, and safeguards. New signals, data sources, prompts, or thresholds can shift outputs, and triage tools can quickly shape substantive outcomes. Documentation can become outdated within months, and incremental changes may go unnoticed. This panel recommended a monitoring-based governance approach: tracking model versions, benchmarking behavior, monitoring vendor terms, and defining concrete reassessment triggers as the system evolves.
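To make that recommendation concrete, the minimal sketch below (in Python, and purely illustrative: the class names, fields, and drift tolerance are our assumptions, not anything drawn from the panel's materials) shows what "concrete reassessment triggers" might look like in practice, by comparing the system as last reviewed against the system as it runs today.

```python
from dataclasses import dataclass

# Purely illustrative: the class names, fields, and drift tolerance below are
# hypothetical assumptions, not drawn from the panel's materials.

@dataclass(frozen=True)
class SystemSnapshot:
    """A point-in-time picture of an AI system, at review or in production."""
    model_version: str
    data_sources: frozenset
    vendor_terms_hash: str       # hash of the vendor terms in effect
    benchmark_error_rate: float  # result of a behavior benchmark

def reassessment_triggers(reviewed: SystemSnapshot,
                          live: SystemSnapshot,
                          drift_tolerance: float = 0.02) -> list[str]:
    """Return concrete reasons to route the system back for review."""
    reasons = []
    if live.model_version != reviewed.model_version:
        reasons.append("model version changed since the last review")
    if live.data_sources - reviewed.data_sources:
        reasons.append("new data sources added")
    if live.vendor_terms_hash != reviewed.vendor_terms_hash:
        reasons.append("vendor terms changed")
    if abs(live.benchmark_error_rate - reviewed.benchmark_error_rate) > drift_tolerance:
        reasons.append("benchmarked behavior drifted beyond tolerance")
    return reasons
```

Under this framing, any nonempty result is a signal that the documentation and the prior assessment may no longer describe the system that is actually running.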
Another IAPP Summit panel underscored these challenges with particular force in the context of agentic AI. Unlike traditional AI tools that respond to discrete queries, agentic systems plan, adapt, and execute, moving from "tell me something" to "DO something," as the panelists put it. Walking through a real-world use case, the panelists highlighted critical governance touchpoints that arise at every stage, from proof of concept through launch, including consent capture considerations, human-in-the-loop review for high-impact automated actions, and continuous monitoring for performance drift once the system is live.
Governance Toolkit: Map, Measure, Manage
One panel offered a practical governance toolkit organized around three functions.
- Map – Inventory all AI systems and vendors, including scoring, ranking, recommending, or generative tools. Watch for blind spots like unnamed owners or stale reviews.
- Measure – Assess risk by real-world impact. What can change for users, who bears the downside, and how easily can outcomes be challenged? Keep assessments living.
- Manage – Address vendor risk up front. Contracts should cover training restrictions, material-change notifications, prohibited uses, and audit rights. Every high-impact system needs a clear owner who can approve, escalate, or stop use.
These governance functions take on added complexity with agentic AI systems, which can autonomously execute multi-step actions across integrated systems. As some IAPP panelists highlighted, key considerations for agentic deployments include ensuring consent capture before analyzing customer interactions, maintaining human-in-the-loop oversight for high-impact actions, and monitoring for performance drift on an ongoing basis. Some speakers also pointed to established security frameworks as valuable resources for organizations building governance around agentic AI.
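One way to picture the human-in-the-loop point is the short sketch below (again purely illustrative: the impact tiers, field names, and gate logic are our assumptions, not a framework the panelists endorsed), in which an agent-proposed action is held for human approval when it falls in a high-impact category or lacks captured consent.

```python
# Purely illustrative: the impact tiers, field names, and gate logic below are
# our assumptions, not a framework endorsed by the panelists.

HIGH_IMPACT_TIERS = {"financial", "employment", "account_change"}

def execute_agent_action(action: dict, approved_by_human: bool = False) -> str:
    """Run an agent-proposed action only if it clears the oversight gate."""
    if action.get("consent_obtained") is not True:
        return "blocked: required consent was not captured"
    if action.get("impact_tier") in HIGH_IMPACT_TIERS and not approved_by_human:
        return "queued: high-impact action is awaiting human review"
    # Low-impact, consented actions may proceed automatically.
    return f"executed: {action.get('name', 'unnamed action')}"

# Example: an agent-proposed refund is held for a person to approve.
print(execute_agent_action({"name": "issue_refund",
                            "impact_tier": "financial",
                            "consent_obtained": True}))
```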
As a practical starting point, IAPP panelists suggested that organizations pick just one live AI system and pressure-test six elements: (i) who owns the system, (ii) what risk tier it falls in, (iii) when it was last assessed, (iv) what triggers would prompt a reassessment, (v) what rules govern it, and (vi) what vendor dependencies, if any, it has. Organizations should then make one tangible improvement, such as adding a reassessment trigger for new data sources or updating vendor reporting obligations. One real improvement teaches more than a perfect theoretical framework ever could.
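For teams that want to operationalize this pressure test, the sketch below (illustrative only: the field names, example values, and the assumed 12-month staleness rule are ours, not the panelists') turns the six questions into a simple gap check against a single system's record. Each flagged gap is a natural candidate for the "one tangible improvement" the panelists recommended.

```python
from datetime import date, timedelta

# Illustrative sketch of the six-element pressure test as a simple record check.
# Field names, the 12-month staleness rule, and the example values are
# hypothetical assumptions, not requirements stated by the panelists.

def pressure_test(system: dict) -> list[str]:
    """Flag gaps in a single AI system's governance record."""
    gaps = []
    if not system.get("owner"):
        gaps.append("no named owner")
    if not system.get("risk_tier"):
        gaps.append("no assigned risk tier")
    last = system.get("last_assessment")
    if last is None or date.today() - last > timedelta(days=365):
        gaps.append("assessment missing or stale")
    if not system.get("reassessment_triggers"):
        gaps.append("no defined reassessment triggers")
    if not system.get("governing_rules"):
        gaps.append("governing rules not identified")
    if "vendor_dependencies" not in system:
        gaps.append("vendor dependencies not documented")
    return gaps

# Example record for one live system.
example = {
    "owner": "AI governance lead",
    "risk_tier": "high",
    "last_assessment": date(2025, 3, 1),
    "reassessment_triggers": [],           # gap: none defined yet
    "governing_rules": ["internal AI use policy"],
    "vendor_dependencies": ["hosted LLM provider"],
}
print(pressure_test(example))
```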
Key Takeaways
The summit brought home that the most important questions in AI governance turn on real-world consequences, not product branding.
Ultimately, systems are live, risks are real, and the regulatory landscape is moving fast. Organizations may need to start small, act decisively, and scale up, because every day of inaction can increase legal, operational, and reputational exposure.
For more information on the conference, including several noteworthy panels, please see the following companion blog posts:
Perkins on Privacy
Perkins on Privacy keeps you informed about the latest developments in privacy and data security law. Our insights are provided by Perkins Coie's Privacy & Security practice, recognized by Chambers as a leading firm in the field.