Privacy Regulation Roundup

This Privacy Regulation Roundup summarizes the latest major global privacy regulatory developments, announcements, and changes. The report is updated monthly. For each relevant regulatory activity, you will find actionable Info-Tech analyst insights and links to useful Info-Tech research that can help you achieve compliance.

Author(s): Safayat Moahamad, John Donovan, Carlos Rivera

  • Privacy Regulation Roundup – May 2025

  • Privacy Regulation Roundup – June 2025

  • Privacy Regulation Roundup – July 2025

  • Privacy Regulation Roundup – August 2025

  • Privacy Regulation Roundup – September 2025

  • Privacy Regulation Roundup – October 2025

IAPP AIGG North America 2025: Agentic AI in Focus

Type: Article

Published: September 2025

Affected Region: All Regions

Summary: At IAPP AI Governance Global North America 2025, business leaders shared that they are increasingly focused on securing AI models as artificial intelligence integrates into commerce. Many discussions highlighted practitioners' experiences countering AI-generated attacks and building resilient defenses.

Sessions also showcased real-world use cases, such as the growing trend of malicious actors using AI to amplify cyberattacks through techniques like model extraction, data poisoning, and adversarial and backdoor attacks that exploit model vulnerabilities. Cybercriminals also employ generative AI, such as WormGPT, to create polymorphic malware that evades signature-based defenses, conduct reconnaissance by analyzing open-source data, and craft sophisticated phishing scams that enable social engineering.

To address these risks, organizations should start with a holistic understanding of their AI systems, including mapping and cataloging, followed by assessments of risks, bias, performance, and privacy impacts, and then implement oversight via access controls and governance practices. Leveraging AI for defense is key, with models aiding threat detection, adversarial testing, and vulnerability patching to give defenders an edge. For organizations building and deploying their own AI agents (agentic AI), identity security is crucial, and embedding cybersecurity and privacy practices as early as possible is no longer optional.
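
To make the mapping-and-assessment step concrete, below is a minimal, hypothetical Python sketch of an AI system register that flags entries for risk review. The fields, risk criteria, and system names are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass

# Hypothetical, minimal AI system register illustrating the "map and
# catalog, then assess" pattern described above. Field names and risk
# criteria are illustrative assumptions, not a prescribed standard.

@dataclass
class AISystem:
    name: str
    owner: str                     # accountable business owner
    data_categories: list[str]     # e.g., ["personal", "financial"]
    is_agentic: bool = False       # autonomous agent with its own identity?
    pia_completed: bool = False    # privacy impact assessment done?
    access_controls: bool = False  # oversight via access controls in place?

def needs_review(system: AISystem) -> bool:
    """Flag systems to escalate for risk assessment: anything touching
    personal data without a completed PIA, or any agentic system
    deployed without access controls."""
    handles_personal = "personal" in system.data_categories
    return ((handles_personal and not system.pia_completed)
            or (system.is_agentic and not system.access_controls))

# Example: one cataloged system that would be flagged for review.
inventory = [
    AISystem("support-chat-agent", "CX team",
             data_categories=["personal"], is_agentic=True),
]
for s in inventory:
    if needs_review(s):
        print(f"Escalate for review: {s.name} (owner: {s.owner})")
```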

Extending cybersecurity posture to third-party vendors is critical, requiring deeper vetting of partner ecosystems and potentially reinventing processes for agentic AI. Regulatory frameworks, from the EU AI Act to NIST's AI Risk Management Framework, are driving integrated risk management at the board level, balancing government regulation, platform actions, and societal demands for truth verification. For agentic AI, privacy implications include ensuring transparency and explainability, adhering to purpose limitation and data minimization, bolstering security against poisoning attacks, mitigating bias for fairness, establishing oversight through governance, and building trust by embedding privacy principles from the design stage onward.
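
As a concrete illustration of purpose limitation and data minimization for an agentic AI pipeline, here is a minimal, hypothetical Python sketch in which each declared purpose maps to an explicit allowlist of fields, and everything else is stripped before data reaches the agent. The purposes and field names are illustrative assumptions.

```python
# Hypothetical sketch of purpose limitation and data minimization:
# each purpose maps to an explicit field allowlist, and all other
# fields are stripped before the agent sees the record. Purposes and
# field names below are illustrative assumptions.

ALLOWED_FIELDS = {
    "shipping_support": {"order_id", "shipping_status", "carrier"},
    "billing_support": {"order_id", "invoice_total", "payment_status"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return only the fields permitted for the declared purpose,
    raising on unknown purposes rather than defaulting to full access."""
    try:
        allowed = ALLOWED_FIELDS[purpose]
    except KeyError:
        raise ValueError(f"Undeclared purpose: {purpose!r}")
    return {k: v for k, v in record.items() if k in allowed}

record = {"order_id": "A-1001", "shipping_status": "in transit",
          "carrier": "DHL", "email": "jane@example.com",
          "invoice_total": 42.50}

# The agent handling a shipping query never receives email or billing data.
print(minimize(record, "shipping_support"))
# {'order_id': 'A-1001', 'shipping_status': 'in transit', 'carrier': 'DHL'}
```

Note the design choice: unknown purposes fail closed rather than defaulting to full access, mirroring the privacy-by-design principle described above.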

Analyst Perspective: The convergence of AI and cyberthreats requires proactive, layered defenses rather than reactive fixes. I've seen too many firms underestimate vendor risks or skip thorough assessments and security by design, particularly around AI agents and securing nonhuman identities, which can lead to preventable breaches. Prioritizing regulatory alignment and AI-driven tools for defense can shift the balance, but success hinges on cultivating a culture of continuous vigilance and cross-functional collaboration to stay ahead of evolving attack vectors.

Analyst: Carlos Rivera, Principal Advisory Director – Security & Privacy

Fate of EU GDPR in the Era of AI

Type: Article

Published: September 2025

Affected Region: EU

Summary: Europe’s landmark data protection law, the General Data Protection Regulation (GDPR), now sits at the crossroads of compliance excellence and digital stagnation. Once celebrated as the global gold standard for privacy, it increasingly struggles to coexist with the EU’s expanding digital rulebook – from the AI Act to the Data Act – creating overlapping obligations and legal uncertainty. Policymakers are therefore pursuing selective, risk-based reforms to streamline compliance, reduce redundancy, and realign the GDPR with Europe’s broader innovation agenda.

The reform debate centers on restoring proportionality to ensure obligations scale to actual data risk rather than company size, while cutting administrative red tape. Proposals include harmonizing risk assessments across EU laws, institutionalizing regulatory sandboxes, establishing unified incident-reporting systems, and strengthening cooperation among regulators. It is argued that these targeted adjustments, rather than wholesale rewrites, can preserve fundamental rights while making the GDPR a more agile framework that supports innovation as much as it safeguards privacy.

Analyst Perspective: The proposed “selective reform” approach reflects pragmatic optimism but may be too modest for the scale of disruption AI represents. The GDPR’s friction isn’t just procedural; it is philosophical. Its legacy notions of consent, purpose limitation, and data minimization struggle to coexist with AI systems that continuously infer and repurpose data.

Risk-based proportionality is an important guidepost, but it may not resolve the deeper misalignment between law and learning systems. A truly innovation-ready GDPR would evolve beyond proportionality toward adaptive governance, where regulatory obligations flex with system behavior, model evolution, and data context.

Until Europe confronts this conceptual divide, it risks remaining stuck between overregulation and lack of adoption, safeguarding yesterday’s risks while slowing tomorrow’s breakthroughs. Europe’s challenge isn’t fine-tuning compliance but redefining relevance. The GDPR’s next chapter must bridge the gulf between rights protection and AI-driven innovation, or risk becoming a monument to the past rather than a mechanism for progress.

Analyst: John Donovan, Principal Research Director – Infrastructure and Operations

Diverging Paths in AI Regulation

Type: Legislation

Enacted: September 2025

Affected Region: USA

Summary: California’s Transparency in Frontier Artificial Intelligence Act (SB 53) will require AI companies to publicly disclose safety and risk-management measures for advanced (“frontier”) AI models. SB 53 follows a disclosure-based approach that garnered broader political support, arguably setting a precedent for other states and possibly shaping a federal-level AI governance framework. However, industry professionals have taken note of how it contrasts with the EU AI Act.

While both seek to bring greater safety, accountability, and transparency to the development and use of AI, they diverge significantly in scope and design. The EU AI Act is a comprehensive risk-based framework that aims to regulate the entire AI ecosystem. In contrast, California’s SB 53 is a targeted safety and transparency law that applies only to the world’s largest AI developers, those training extremely powerful frontier models and earning over US$500 million annually.

SB 53 centers on public accountability, requiring large frontier developers to publish a “frontier AI framework” outlining safety standards and catastrophic risk mitigation strategies, such as preventing AI misuse in cyberattacks or weapons development. It also mandates reporting of “critical safety incidents” to the state’s Office of Emergency Services and extends strong whistleblower protections to employees who raise safety concerns. Meanwhile, the EU AI Act imposes broader and stricter obligations, including technical audits and risk classification, and carries penalties far heavier than SB 53’s US$1 million cap for noncompliance.

Both laws reflect growing global momentum toward codified AI governance. However, the EU’s approach institutionalizes a detailed, harmonized regulatory regime across industries, while California’s focuses narrowly on preventing catastrophic harm from frontier AI systems through transparency and public oversight.

Analyst Perspective: The EU AI Act and California’s SB 53 reflect two distinct but complementary approaches to governing artificial intelligence. The EU’s framework is comprehensive and prescriptive, embedding oversight, documentation, and accountability across the entire AI lifecycle. It requires uniform governance across all sectors, echoing the GDPR’s regulatory philosophy. This approach ensures consistency and consumer protection but may also increase compliance costs and stifle innovation, particularly for smaller firms.

By contrast, California’s SB 53 is narrow, strategic, and transparency driven. Instead of attempting to regulate all AI use cases, it targets only the most powerful models developed by large organizations. Through public disclosure requirements, whistleblower protections, and incident reporting, it aims to make AI safety a matter of public accountability rather than bureaucratic enforcement.

The two frameworks illustrate a divergence in regulatory philosophy. The EU prioritizes risk governance through structure and standardization, while California prioritizes agility and accountability through disclosure. For global AI governance, the optimal path may lie between these two poles.

Analyst: Safayat Moahamad, Research Director – Security & Privacy

If you have a question or would like to receive these monthly briefings via email, submit a request here.