AI and Data Protection Top 5: Volume 1, Issue 1

A summary of the latest trends in AI governance and data protection. This issue covers APRA, CPPA, the FTC, generative AI safety, and Microsoft Copilot.


This is the inaugural post in a new series covering the top 5 recent trends at the intersection of AI governance and data protection. The purpose of the series is to keep technology and legal professionals up to date on the rapidly evolving developments in this space. Each issue will provide a summary of the latest news on this topic, striking a balance between regulatory updates, industry trends, technical innovations, and legal analysis.

In this issue, we cover how the discussion draft of the American Privacy Rights Act (APRA) treats AI, the CPPA's rulemaking efforts on AI risk assessments, the FTC's warning to companies about quietly changing their terms for AI uses, research findings about the safety of generative AI chatbots, and practical examples of AI governance in Microsoft Copilot for Security.

#1: American Privacy Rights Act (APRA) and AI

  • Without a doubt, the topic du jour is the surprise release of a discussion draft of a federal privacy bill called the American Privacy Rights Act (APRA). Here's a section-by-section summary.
  • One of the proponents of the bill, Senator Suzan DelBene, has opined that a national data privacy law is a first step toward AI governance.
  • The bill explicitly includes AI in the scope of covered algorithms and requires an impact assessment when they pose "consequential risk," i.e., when they pertain to certain sensitive categories of data or may have a disparate impact based on protected characteristics.
  • The treatment of AI in APRA is analogous to the tiered approach adopted in the EU AI Act. In fact, the requirement for impact assessments for "consequential risk" in the former seemingly corresponds to a similar requirement for "high risk" in the latter.
💡
The treatment of AI in APRA is analogous to the tiered approach adopted in the EU AI Act, with requirements for "consequential risk" in the former corresponding to those of "high risk" in the latter.

#2: CPPA Rulemaking on AI Risk Assessments

  • The California Privacy Protection Agency (CPPA) voted to formally draft the Automated Decisionmaking Technologies (ADMT) rules, which govern how developers of ADMT, including AI, and the businesses that use it may obtain and use personal data.
  • The rules establish AI governance obligations, including thresholds for risk assessments as well as notice, opt-out, and access requirements for businesses when using ADMT or AI.
  • The CPPA rules will also require risk assessments when training ADMT or AI models that will be used for "significant decisions," profiling, generating deepfakes, or establishing identity.
💡
The rules establish AI governance obligations, including thresholds for risk assessments, as well as notice, opt-out, and access requirements for businesses when using ADMT or AI.

#3: FTC's Warning on Changing Terms For AI Uses

  • AI systems require large amounts of data to be powerful. As a result, there is a growing conflict between leveraging users' personal data to enhance AI and adhering to privacy commitments.
  • The FTC has warned companies not to deceptively alter their privacy policies in order to exploit user data more freely, emphasizing that companies cannot unilaterally change privacy terms without consent.
  • The FTC has emphasized that its stance on protecting consumer rights in digital markets remains firm, warning against unfair or deceptive practices in privacy policy changes.
💡
The FTC has emphasized that companies cannot unilaterally change privacy terms without consent in order to exploit user data for AI enhancement.

#4: Safety of Gen AI Chatbots: Research Findings

  • Today's AI chatbots cannot easily be tricked into violating their codes of conduct, according to the recently published report from the Generative Red Teaming (GRT) event held at DEF CON 2023.
  • It is, however, relatively easy to get a chatbot to spout inaccuracies, especially when starting from a false premise. Attempts to elicit responses that violate AI governance principles designed to protect against algorithmic bias were less successful.
💡
Attempts to elicit responses that violate AI governance principles designed to protect against algorithmic bias were less successful.
  • Anthropic found that the large context windows of modern LLMs open them up to a new form of exploitation, called “many-shot jailbreaking,” which can be used to evade safety guardrails. The jailbreak works by including a very large number of faux dialogues before a final, potentially unsafe question, causing the LLM to answer that question and override its safety training (source: Anthropic). A minimal sketch of the prompt structure appears below.
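
To make the mechanism concrete, here is a minimal, benign sketch of the prompt structure the attack exploits. This is not Anthropic's code: the helper function, shot count, and placeholder dialogues are illustrative assumptions. The point is only that a long context window can hold enough in-context “examples” to condition the model's final answer.

```python
# Illustrative sketch of the many-shot prompt structure (not Anthropic's code).
# The dialogues below are benign placeholders; in the reported attack, a large
# number of faux Q&A pairs precede the final question, conditioning the model
# to answer it despite its safety training.

def faux_dialogue(question: str, answer: str) -> str:
    """Format one fabricated user/assistant exchange."""
    return f"User: {question}\nAssistant: {answer}\n"

# A long context window is what makes packing in hundreds of shots possible.
NUM_SHOTS = 256  # arbitrary; the research finds effectiveness grows with count

shots = faux_dialogue("Placeholder question?", "Placeholder answer.") * NUM_SHOTS

# The final, potentially unsafe question is appended last; the preceding
# in-context "examples" are what override the model's refusal behavior.
prompt = shots + "User: <final question>\nAssistant:"

print(f"{NUM_SHOTS} faux dialogues, {len(prompt):,} characters of context")
```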

#5: AI Governance in Practice: Microsoft Copilot

AI governance requirements and regulatory guidance are only as helpful as the implementations that support them in mainstream products. The new capabilities in Microsoft Copilot for Security are a good step forward in this direction by a major vendor.

Below is a summary of the capabilities (source: Microsoft Security):
💡
The new capabilities of comprehensive visibility, enhanced protection, and compliance controls in Microsoft Copilot for Security are a major step forward for AI governance.
  • Comprehensive visibility into the usage of generative AI apps
    • Organizations can monitor sensitive information included in AI prompts (see the vendor-neutral sketch after this list).
    • Organizations can see the number of users interacting with Copilot. 
  • Extensive protection for sensitive files and blocking risky AI apps
    • Copilot conversations and responses inherit the label from reference files.
    • Organizations can use customizable policies to block apps and protect sensitive data – both in AI prompts and responses.
  • Compliance controls to detect code of conduct violations
    • Organizations have the ability to capture events and detect user interactions with Copilot.
    • Organizations can flag potential risk and business or regulatory compliance violations.
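
To ground the first capability, monitoring sensitive information in AI prompts, here is a hypothetical, vendor-neutral sketch of the underlying idea. It does not use Microsoft's APIs, and the pattern names and regular expressions are illustrative assumptions; real controls such as those in Microsoft Purview are policy-driven and far more sophisticated.

```python
import re

# Hypothetical prompt-scanning gate (illustrative only; not Microsoft's API).
# Scans a prompt for simple sensitive-data patterns before it is sent to a
# generative AI app, and blocks it on a match.

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the labels of the sensitive patterns found in the prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def gate_prompt(prompt: str) -> str:
    """Raise if the prompt would leak sensitive data to an AI app."""
    findings = scan_prompt(prompt)
    if findings:
        raise PermissionError(f"Prompt blocked; matched: {', '.join(findings)}")
    return prompt

# Example: this prompt would be blocked before reaching the AI app.
# gate_prompt("Summarize the account history for SSN 123-45-6789")
```

In a real deployment, such a gate would typically sit in a proxy or endpoint agent between users and generative AI apps, which is also where policies that block risky apps outright would be enforced.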

That's it for this issue. Subscribe to receive future issues directly in your inbox.

Connect with us

If you would like to reach out for guidance or provide other input, please do not hesitate to contact us.