No One Is Writing the Rules for You
Executive Summary
The FTC just told the IAPP Global Summit it will not write AI rules for you. It will sue you after someone gets hurt, using a “reasonable” standard it will not define in advance. Meanwhile, the bills from ungoverned AI are already landing: a $3.5 million healthcare fine and a $54 million manufacturer loss, both from shadow AI that security teams did not know existed. The enforcement environment is uneven enough that OkCupid handed 3 million user photos to an AI facial recognition firm and paid zero dollars in penalties. If you are waiting for a regulatory checklist before building governance, you are the case study the next enforcement action will cite.
Lead Story
The FTC Will Not Write Your AI Rulebook. Read the Cases Instead.
FTC Commissioner Mark Meador told attendees at the IAPP Global Summit 2026 that the agency will not issue prescriptive AI regulations. “We’re approaching this as enforcers who are trying to spot harm, address it, prevent it from occurring,” Meador said. The standard is “reasonable,” and it is fact-specific. There is no checklist, no safe harbor, and no advance notice of what crosses the line.
What the FTC will do is sue. Hours before Meador’s remarks, the agency settled with Match Group and OkCupid over unauthorized data sharing with a facial recognition firm. The enforcement signal is clear: companies must infer the rules from what the FTC prosecutes. For the operational manager who needs to justify a governance investment to leadership, this is the worst possible combination -- full liability exposure with no compliance playbook.
The practical consequence for mid-market IT organizations is that governance documentation is now the primary risk control. A client alert from Morgan Lewis published April 1 makes the case explicit: “Litigation outcomes often turn less on abstract technological novelty and more on what companies said, what they documented, and how they governed the technology.” Statutory damages of $1,000 to $10,000 per violation compound across jurisdictions, and courts are increasingly allowing AI governance claims to survive motions to dismiss. The paper trail you create today is the defense you will need when the case-by-case enforcement reaches your sector.
Supporting Intelligence
Shadow AI Already Has a Price Tag. Two of Them.
At RSAC 2026, Fortinet’s EVP of Marketing cited two incidents that put concrete numbers on the shadow AI problem: a healthcare organization fined $3.5 million after employees fed patient notes into ChatGPT, and a manufacturer that lost $54 million when a coding assistant leaked proprietary data. The detection gap is the core issue. According to the same presentation, the average organization takes 168 hours to discover an AI-related breach. Shadow AI spans personal chatbot accounts, unofficial SaaS AI features, and agents that inherit access to enterprise systems. Most security teams do not have visibility into any of them.
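A first detection pass does not require new tooling. The sketch below counts which users are talking to known AI endpoints, assuming your web proxy can export logs as CSV with "user" and "dest_host" columns -- the column names, the file name, and the domain list are illustrative, not exhaustive:

```python
import csv
from collections import Counter

# Illustrative list of AI service domains -- extend with your own telemetry.
AI_DOMAINS = {
    "api.openai.com",
    "chatgpt.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def shadow_ai_report(proxy_log_csv: str) -> Counter:
    """Count requests to known AI endpoints per user from a proxy log export.

    Assumes a CSV with 'user' and 'dest_host' columns; adjust to your proxy's schema.
    """
    hits: Counter = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["dest_host"] in AI_DOMAINS:
                hits[(row["user"], row["dest_host"])] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in shadow_ai_report("proxy_log.csv").most_common(20):
        print(f"{user:<24} {host:<40} {count}")
```

A crude count like this will not catch AI features embedded in sanctioned SaaS, but it surfaces the personal chatbot traffic behind both incidents above -- and it beats a 168-hour discovery window.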
3 Million Photos, Zero Dollars: The OkCupid Settlement
OkCupid provided nearly 3 million user photos and location data to Clarifai, an AI facial recognition firm, with no contractual restrictions on use. Clarifai built a face database for identifying age, sex, and race, and its CEO planned to sell the technology to military and foreign governments, according to Ars Technica’s reporting. OkCupid denied the arrangement to the press when the New York Times exposed it in 2019. The FTC settlement imposes a permanent prohibition on misrepresenting data practices but carries no financial penalty. For enterprises assessing their own third-party data-sharing risk, the lesson is blunt: even egregious conduct may not trigger federal financial consequences. Self-governance is not optional.
The AI Toolchain Itself Is an Attack Surface
The TeamPCP hacking group compromised LiteLLM, a Python package that routes API calls to OpenAI, Anthropic, and Google. Malicious PyPI versions 1.82.7 and 1.82.8 deployed an infostealer that harvested credentials, SSH keys, and cloud authentication secrets from hundreds of thousands of devices, according to UpGuard’s analysis. LiteLLM sits in backend infrastructure with broad API key access across multiple AI providers. The stolen credentials grant persistent access to cloud environments even after the compromised package is removed. If your engineering team uses open-source AI routing tools, verify which versions are installed and rotate every credential those systems touched.
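As a starting point, a minimal check can flag the known-bad releases in each Python environment. The version strings come from the reporting above; treat the list as incomplete and confirm against current advisories:

```python
from importlib.metadata import PackageNotFoundError, version

# LiteLLM releases reported as trojaned on PyPI (per the reporting above).
COMPROMISED = {"1.82.7", "1.82.8"}

def check_litellm() -> None:
    try:
        installed = version("litellm")
    except PackageNotFoundError:
        print("litellm is not installed in this environment.")
        return
    if installed in COMPROMISED:
        print(f"ALERT: litellm {installed} matches a known-compromised release.")
        print("Remove the package, then rotate every credential this host could reach.")
    else:
        print(f"litellm {installed} is not on the compromised list; verify against advisories.")

if __name__ == "__main__":
    check_litellm()
```

Run it inside every virtualenv and container image, not just once per host -- importlib.metadata only sees the active environment.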
Krebs: AI Agents Create a “Lethal Trifecta”
Brian Krebs documented the attack surface created when any system combines three capabilities: access to private data, exposure to untrusted content, and the ability to communicate externally. Hundreds of misconfigured OpenClaw instances expose API keys and OAuth tokens publicly. Prompt injection can be embedded in content that AI agents fetch -- the LLM processes it as instruction, not data. Separately, a Russian-speaking threat actor used commercial AI services to compromise more than 600 FortiGate devices across 55 countries, demonstrating how AI lowers the barrier to sophisticated attacks. Krebs’s framing is useful for leadership conversations: “The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved.” The risk language is executive-ready. Use it.
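The trifecta also translates directly into an audit question you can put in code. A minimal sketch, with a hypothetical two-agent inventory, flags any deployment that combines all three capabilities:

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    name: str
    reads_private_data: bool         # e.g., mailbox, CRM, internal wikis
    ingests_untrusted_content: bool  # e.g., fetches web pages, opens attachments
    communicates_externally: bool    # e.g., sends email, makes HTTP requests

def lethal_trifecta(agent: AgentProfile) -> bool:
    """True when all three of Krebs's trifecta capabilities are present."""
    return (agent.reads_private_data
            and agent.ingests_untrusted_content
            and agent.communicates_externally)

# Hypothetical inventory entries for illustration.
inventory = [
    AgentProfile("support-summarizer", True, True, False),
    AgentProfile("inbox-assistant", True, True, True),  # full trifecta
]
for agent in inventory:
    status = "FULL TRIFECTA -- isolate or add controls" if lethal_trifecta(agent) else "partial"
    print(f"{agent.name}: {status}")
```

Anything that scores the full trifecta needs prompt-injection controls or an outbound block before it touches production data.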
Regulatory Radar
FTC enforcement posture (active now): No prescriptive AI rules forthcoming. The agency will enforce “case-by-case” using a fact-specific reasonableness standard. Companies must read the signal from enforcement actions, not wait for a published checklist.
AI class action exposure (compounding): Morgan Lewis warns that statutory damages of $1,000-$10,000 per violation are compounding across jurisdictions as courts allow AI governance claims to proceed past motions to dismiss.
The Bottom Line
Produce a one-page AI governance documentation memo this week. List every AI tool in use, what data each tool accesses, and what policies govern each. Per Morgan Lewis, litigation outcomes often turn on what you documented, not on the technology itself. If the memo does not exist, you have no defense.
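One way to keep that memo current is to generate it from a machine-readable inventory. A minimal sketch -- the tool entries, field names, and policy IDs are placeholders, not recommendations:

```python
# Minimal AI-tool inventory for the one-page memo; entries are illustrative.
AI_INVENTORY = [
    {
        "tool": "ChatGPT (personal accounts)",
        "data_accessed": "whatever employees paste in",
        "governing_policy": "NONE -- acceptable-use policy draft pending",
    },
    {
        "tool": "GitHub Copilot (org license)",
        "data_accessed": "source code in open editor buffers",
        "governing_policy": "AI-DEV-001, reviewed 2026-Q1",
    },
]

def render_memo(inventory: list[dict]) -> str:
    """Render the inventory as a plain-text memo for leadership and counsel."""
    lines = ["AI Governance Inventory", "=" * 23, ""]
    for item in inventory:
        lines += [
            f"Tool:   {item['tool']}",
            f"Data:   {item['data_accessed']}",
            f"Policy: {item['governing_policy']}",
            "",
        ]
    return "\n".join(lines)

print(render_memo(AI_INVENTORY))
```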
Audit your open-source AI dependencies for supply chain compromise. The LiteLLM attack stole credentials from hundreds of thousands of devices. Check whether your engineering team uses open-source LLM routing packages, verify installed versions, and rotate credentials on any system that ran compromised versions.
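A minimal repo sweep, assuming dependencies are pinned in requirements files, looks like the sketch below; lock files, Dockerfiles, and unpinned installs need their own checks:

```python
import re
from pathlib import Path

COMPROMISED = {"1.82.7", "1.82.8"}  # trojaned litellm releases cited above
PIN = re.compile(r"^litellm==([\w.]+)", re.IGNORECASE)

def scan_repos(root: str) -> list[tuple[Path, str]]:
    """Find requirements files that pin litellm to a compromised release."""
    findings = []
    for req in Path(root).rglob("requirements*.txt"):
        for line in req.read_text(errors="ignore").splitlines():
            m = PIN.match(line.strip())
            if m and m.group(1) in COMPROMISED:
                findings.append((req, m.group(1)))
    return findings

for path, ver in scan_repos("."):
    print(f"{path}: pins litellm=={ver} -- rebuild the environment and rotate credentials")
```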
Brief your leadership on the FTC’s enforcement posture using Krebs’s “lethal trifecta” framing. AI tools with private data access, untrusted content exposure, and external communication capability are the risk category. The FTC will not tell you in advance which deployment crosses the line.

