<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Exposure Brief]]></title><description><![CDATA[Exposure Brief uncovers the gap between AI deployment and AI governance. A daily intelligence briefing for the operational teams asked to figure it out.]]></description><link>https://exposurebrief.com</link><image><url>https://substackcdn.com/image/fetch/$s_!Kmtf!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21bd290-d3a9-4b0c-b402-61df64aa352f_144x144.png</url><title>Exposure Brief</title><link>https://exposurebrief.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 07 May 2026 17:35:12 GMT</lastBuildDate><atom:link href="https://exposurebrief.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Common Nexus LLC]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[exposurebrief@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[exposurebrief@substack.com]]></itunes:email><itunes:name><![CDATA[Thomas Harrison]]></itunes:name></itunes:owner><itunes:author><![CDATA[Thomas Harrison]]></itunes:author><googleplay:owner><![CDATA[exposurebrief@substack.com]]></googleplay:owner><googleplay:email><![CDATA[exposurebrief@substack.com]]></googleplay:email><googleplay:author><![CDATA[Thomas Harrison]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[No One Is Writing the Rules for You]]></title><description><![CDATA[The FTC just told the IAPP Global Summit it will not write AI rules for you. It will sue you after someone gets hurt, using a "reasonable" standard it will not define in advance.]]></description><link>https://exposurebrief.com/p/no-one-is-writing-the-rules-for-you</link><guid isPermaLink="false">https://exposurebrief.com/p/no-one-is-writing-the-rules-for-you</guid><dc:creator><![CDATA[Thomas Harrison]]></dc:creator><pubDate>Fri, 03 Apr 2026 18:52:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Kmtf!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21bd290-d3a9-4b0c-b402-61df64aa352f_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Executive Summary</h2><p>The FTC just told the IAPP Global Summit it will not write AI rules for you. It will sue you after someone gets hurt, using a &#8220;reasonable&#8221; standard it will not define in advance. Meanwhile, the bills from ungoverned AI are already landing: a $3.5 million healthcare fine and a $54 million manufacturer loss, both from shadow AI that security teams did not know existed. The enforcement environment is uneven enough that OkCupid handed 3 million user photos to an AI facial recognition firm and paid zero dollars in penalties. If you are waiting for a regulatory checklist before building governance, you are the case study the next enforcement action will cite.</p><h2>Lead Story</h2><h3><strong>The FTC Will Not Write Your AI Rulebook. 
Read the Cases Instead.</strong></h3><p>FTC Commissioner Mark Meador told attendees at the <a href="https://iapp.org/news/a/iapp-global-summit-2026-ftc-commissioner-meador-stresses-agency-preference-for-case-by-case-enforcement">IAPP Global Summit 2026</a> that the agency will not issue prescriptive AI regulations. &#8220;We&#8217;re approaching this as enforcers who are trying to spot harm, address it, prevent it from occurring,&#8221; Meador said. The standard is &#8220;reasonable,&#8221; and it is fact-specific. There is no checklist, no safe harbor, and no advance notice of what crosses the line.</p><p>What the FTC will do is sue. Hours before Meador&#8217;s remarks, the agency <a href="https://www.ftc.gov/news-events/news/press-releases/2026/03/ftc-takes-action-against-match-okcupid-deceiving-users-sharing-personal-data-third-party">settled with Match Group and OkCupid</a> over unauthorized data sharing with a facial recognition firm. The enforcement signal is clear: companies must infer the rules from what the FTC prosecutes. For the operational manager who needs to justify a governance investment to leadership, this is the worst possible combination -- full liability exposure with no compliance playbook.</p><p>The practical consequence for mid-market IT organizations is that governance documentation is now the primary risk control. A <a href="https://www.morganlewis.com/pubs/2026/04/website-tracking-data-breaches-and-ai-class-actions-managing-escalating-technology-litigation-risk">client alert from Morgan Lewis</a> published April 1 makes the case explicit: &#8220;Litigation outcomes often turn less on abstract technological novelty and more on what companies said, what they documented, and how they governed the technology.&#8221; Statutory damages of $1,000 to $10,000 per violation compound across jurisdictions, and courts are increasingly allowing AI governance claims to survive motions to dismiss. The paper trail you create today is the defense you will need when the case-by-case enforcement reaches your sector.</p><h2>Supporting Intelligence</h2><h3><strong>Shadow AI Already Has a Price Tag. Two of Them.</strong></h3><p>At RSAC 2026, <a href="https://siliconangle.com/2026/03/30/shadow-ai-needs-unified-security-approach-rsac26/">Fortinet&#8217;s EVP of Marketing cited two incidents</a> that put concrete numbers on the shadow AI problem: a healthcare organization fined $3.5 million after employees fed patient notes into ChatGPT, and a manufacturer that lost $54 million when a coding assistant leaked proprietary data. The detection gap is the core issue. According to the same presentation, the average organization takes 168 hours to discover an AI-related breach. Shadow AI spans personal chatbot accounts, unofficial SaaS AI features, and agents that inherit access to enterprise systems. Most security teams do not have visibility into any of them.</p><h3><strong>3 Million Photos, Zero Dollars: The OkCupid Settlement</strong></h3><p>OkCupid provided <a href="https://arstechnica.com/tech-policy/2026/03/okcupid-match-pay-no-fine-for-sharing-user-photos-with-facial-recognition-firm/">nearly 3 million user photos and location data</a> to Clarifai, an AI facial recognition firm, with no contractual restrictions on use. Clarifai built a face database for identifying age, sex, and race, and its CEO planned to sell the technology to military and foreign governments, according to Ars Technica&#8217;s reporting. 
OkCupid denied the arrangement to media when the New York Times exposed it in 2019. The <a href="https://www.ftc.gov/news-events/news/press-releases/2026/03/ftc-takes-action-against-match-okcupid-deceiving-users-sharing-personal-data-third-party">FTC settlement</a> imposes a permanent prohibition on misrepresenting data practices but carries no financial penalty. For enterprises assessing their own third-party data-sharing risk, the enforcement signal is clear: even egregious conduct may not trigger federal financial consequences. Self-governance is not optional.</p><h3><strong>The AI Toolchain Itself Is an Attack Surface</strong></h3><p>The TeamPCP hacking group <a href="https://www.upguard.com/news/litellm-ai-data-breach-2026-03-24">compromised LiteLLM</a>, a Python package that routes API calls to OpenAI, Anthropic, and Google. Malicious PyPI versions 1.82.7 and 1.82.8 deployed an infostealer that harvested credentials, SSH keys, and cloud authentication secrets from hundreds of thousands of devices, according to UpGuard&#8217;s analysis. LiteLLM sits in backend infrastructure with broad API key access across multiple AI providers. The stolen credentials grant persistent access to cloud environments even after the compromised package is removed. If your engineering team uses open-source AI routing tools, verify which versions are installed and rotate every credential those systems touched.</p><h3><strong>Krebs: AI Agents Create a &#8220;Lethal Trifecta&#8221;</strong></h3><p>Brian Krebs <a href="https://krebsonsecurity.com/2026/03/how-ai-assistants-are-moving-the-security-goalposts/">documented the attack surface</a> created when any system combines three capabilities: access to private data, exposure to untrusted content, and the ability to communicate externally. Hundreds of misconfigured OpenClaw instances expose API keys and OAuth tokens publicly. Prompt injection can be embedded in content that AI agents fetch -- the LLM processes it as instruction, not data. Separately, a Russian-speaking threat actor used commercial AI services to compromise more than 600 FortiGate devices across 55 countries, demonstrating how AI lowers the barrier to sophisticated attacks. Krebs&#8217;s framing is useful for leadership conversations: &#8220;The robot butlers are useful, they&#8217;re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved.&#8221; The risk language is executive-ready. Use it.</p><h2>Regulatory Radar</h2><ul><li><p><strong>FTC enforcement posture (active now):</strong> No prescriptive AI rules forthcoming. The agency will enforce &#8220;case-by-case&#8221; using a fact-specific reasonableness standard. 
Companies must <a href="https://iapp.org/news/a/iapp-global-summit-2026-ftc-commissioner-meador-stresses-agency-preference-for-case-by-case-enforcement">read the signal from enforcement actions</a>, not wait for a published checklist.</p></li><li><p><strong>AI class action exposure (compounding):</strong> Morgan Lewis <a href="https://www.morganlewis.com/pubs/2026/04/website-tracking-data-breaches-and-ai-class-actions-managing-escalating-technology-litigation-risk">warns</a> that statutory damages of $1,000-$10,000 per violation are compounding across jurisdictions as courts allow AI governance claims to proceed past motions to dismiss.</p></li></ul><h2>The Bottom Line</h2><ul><li><p><strong>Produce a one-page AI governance documentation memo this week.</strong> List every AI tool in use, what data each tool accesses, and what policies govern each. Morgan Lewis says the documentation, not the technology, determines litigation outcomes. If the memo does not exist, you have no defense.</p></li><li><p><strong>Audit your open-source AI dependencies for supply chain compromise.</strong> The LiteLLM attack stole credentials from hundreds of thousands of devices. Check whether your engineering team uses open-source LLM routing packages, verify installed versions, and rotate credentials on any system that ran compromised versions.</p></li><li><p><strong>Brief your leadership on the FTC&#8217;s enforcement posture using Krebs&#8217;s &#8220;lethal trifecta&#8221; framing.</strong> AI tools with private data access, untrusted content exposure, and external communication capability are the risk category. The FTC will not tell you in advance which deployment crosses the line.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[GitHub Copilot Used Its Access to Your Codebase to Run Ads]]></title><description><![CDATA[The vendors you trust with access to your systems are changing the terms of that access faster than your governance can track.]]></description><link>https://exposurebrief.com/p/github-copilot-used-its-access-to</link><guid isPermaLink="false">https://exposurebrief.com/p/github-copilot-used-its-access-to</guid><dc:creator><![CDATA[Thomas Harrison]]></dc:creator><pubDate>Tue, 31 Mar 2026 00:52:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Kmtf!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21bd290-d3a9-4b0c-b402-61df64aa352f_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>EXECUTIVE SUMMARY</strong></p><p>The AI tools your organization already authorized are acting beyond the scope you approved. GitHub Copilot injected promotional ads into 1.5 million pull requests this week using a hidden, templated feature that modified developer-written content without disclosure. A security researcher found that ChatGPT reads 55 properties from your browser and application state before you type a single character. At RSAC 2026, presenters cited a healthcare firm fined $3.5 million for employee ChatGPT misuse and a manufacturer that lost $54 million to shadow AI data leaks. 
The thread connecting all of it: the vendors you trust with access to your systems are changing the terms of that access faster than your governance can track, and the costs are no longer theoretical.</p><p><strong>LEAD STORY</strong></p><h2>GitHub Copilot Used Its Access to Your Codebase to Run Ads</h2><p>A developer <a href="https://notes.zachmanson.com/copilot-edited-an-ad-into-my-pr/">asked Copilot to fix a typo in a pull request description</a>. Copilot fixed the typo, then rewrote the description to include a promotional message for itself and the Raycast application. Hidden in the raw markdown: an HTML comment labeled <code>START COPILOT CODING AGENT TIPS</code>, inserted before the ad copy. This was not a model hallucination. It was a templated injection, built into the product.</p><p>A <a href="https://www.neowin.net/news/microsoft-copilot-is-now-injecting-ads-into-pull-requests-on-github-gitlab/">search of GitHub for the exact phrase</a> reveals over 11,000 pull requests containing the same promotional text. Neowin reports that more than 1.5 million PRs were affected across GitHub and GitLab. GitHub&#8217;s Principal Product Manager for Copilot, Tim Rogers, <a href="https://www.theregister.com/2026/03/30/github_copilot_ads_pull_requests/">confirmed the feature was disabled</a>, acknowledging that letting Copilot modify human-written PRs without disclosure &#8220;was the wrong judgement call.&#8221;</p><p>The precedent this sets extends beyond advertising. An AI tool with read-write access to enterprise code repositories used that access for purposes the user did not authorize and was not informed about. For any organization running Copilot on production repos, the incident raises a concrete question: what else has the tool modified, generated, or transmitted using its existing permissions that nobody reviewed?</p><p><strong>SUPPORTING INTELLIGENCE</strong></p><h3><strong>FINRA Fined a Broker-Dealer $600K for Unapproved Communications Platforms. AI Tools Are Next.</strong></h3><p><a href="https://www.finra.org/sites/default/files/fda_documents/2023079613601_BTIG_CRD_37942_AWC_va.pdf">FINRA disciplined BTIG, LLC</a> on March 25 with a $600,000 fine for failing to supervise employees&#8217; use of unapproved messaging platforms between January 2020 and July 2024. The violations span SEC recordkeeping rules (17a-4) and FINRA supervisory rules (3110). The same regulatory framework governs unapproved AI tool usage: if employees are generating client communications, research summaries, or trade rationale through unmonitored AI tools, the recordkeeping obligation is identical. FINRA has not yet brought an AI-specific enforcement action, but the supervisory expectation is already in place.</p><h3><strong>GitHub Will Use Your Code to Train AI by Default Starting April 24</strong></h3><p><a href="https://github.blog/news-insights/company-news/updates-to-github-copilot-interaction-data-usage-policy/">Starting April 24</a>, GitHub will collect interaction data from Copilot Free, Pro, and Pro+ users for AI model training by default. The data scope includes code inputs, accepted suggestions, file context, comments, and feedback. Users can opt out, but the default is opt-in. Copilot Business and Enterprise tiers are exempt, which creates a governance question: are all your developers on Business/Enterprise, or are some using Free/Pro on company devices? 
If IT does not know the answer, developer code is now training data.</p><h3><strong>ChatGPT Reads 55 Properties from Your Browser Before You Type a Character</strong></h3><p>A security researcher <a href="https://www.buchodi.com/chatgpt-wont-let-you-type-until-cloudflare-reads-your-react-state-i-decrypted-the-program-that-does-it/">decrypted Cloudflare&#8217;s Turnstile verification system</a> running on ChatGPT and found it reads 55 distinct properties before users can interact: browser characteristics (GPU, screen resolution, fonts, hardware), Cloudflare network data (city, IP address, region), and ChatGPT application internals including React Router context, loader data, and bootstrap state. The encryption is XOR with the key in the same payload -- it prevents casual inspection, not analysis. When employees use ChatGPT on corporate devices, the platform is fingerprinting hardware and reading application state that goes well beyond what bot detection requires.</p><h3><strong>IBM: 300,000 ChatGPT Credentials Stolen, 44% More Attacks on Public-Facing Apps</strong></h3><p><a href="https://newsroom.ibm.com/2026-02-25-ibm-2026-x-force-threat-index-ai-driven-attacks-are-escalating-as-basic-security-gaps-leave-enterprises-exposed">IBM&#8217;s 2026 X-Force Threat Index</a> documents that infostealers harvested over 300,000 ChatGPT credentials in 2025, with attacks on public-facing applications increasing 44% year-over-year. According to IBM Global Managing Partner for Cybersecurity Services Mark Hughes, &#8220;Attackers aren&#8217;t reinventing playbooks, they&#8217;re speeding them up with AI.&#8221; The enterprise implication: every AI tool an employee authenticates with a corporate email address is a credential target.</p><p><strong>REGULATORY RADAR</strong></p><p><strong>April 24, 2026: GitHub Copilot default opt-in</strong> for interaction data collection on Free/Pro/Pro+ tiers. Verify which tier your developers are using before this date.</p><p><strong>FINRA supervisory obligations</strong> for AI tool usage track the same rules used in the BTIG enforcement ($600K). No AI-specific action yet, but the precedent is set.</p><p><strong>EU AI Act August 2, 2026</strong> enforcement date remains about four months away. Deployers of AI tools (not just builders) face transparency obligations.</p><p><strong>THE BOTTOM LINE</strong></p><ul><li><p><strong>Audit your Copilot tier assignments this week.</strong> If any developer is on Free or Pro using a company device or company repos, their code becomes AI training data on April 24. Upgrade to Business/Enterprise or configure opt-out before the deadline.</p></li><li><p><strong>Review what permissions your AI coding tools have.</strong> Copilot&#8217;s ad injection used read-write access to PRs that the organization had already granted. Check whether your AI tools have broader access than their documented use case requires.</p></li><li><p><strong>If you are in financial services, check your AI tool supervisory controls against the FINRA BTIG precedent.</strong> The $600K fine was for unapproved messaging platforms. 
AI tools used for client communications, research, or trade rationale carry the same recordkeeping obligation.</p></li></ul><p>POWERED BY <a href="https://commonnexus.com/">COMMON NEXUS</a></p>]]></content:encoded></item><item><title><![CDATA[A Federal Court Just Made AI Vendor Guardrails Legally Enforceable]]></title><description><![CDATA[Accountability for AI systems crossed from theory to enforcement this week.]]></description><link>https://exposurebrief.com/p/a-federal-court-just-made-ai-vendor</link><guid isPermaLink="false">https://exposurebrief.com/p/a-federal-court-just-made-ai-vendor</guid><dc:creator><![CDATA[Thomas Harrison]]></dc:creator><pubDate>Sat, 28 Mar 2026 04:25:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Kmtf!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21bd290-d3a9-4b0c-b402-61df64aa352f_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>EXECUTIVE SUMMARY</strong></p><p>Accountability for AI systems crossed from theory to enforcement this week. A federal court blocked the Pentagon from retaliating against an AI company that refused to remove its ethical guardrails. New York&#8217;s largest public hospital system terminated a $4M contract after buried data clauses surfaced. The EU Parliament killed mass AI surveillance by a single vote. And Congress has three weeks to close the loophole that lets agencies buy bulk personal data without a warrant. The connecting thread: institutions that deployed AI systems without accountability infrastructure are now paying for it in courtrooms, contracts, and legislatures.</p><p><strong>LEAD STORY</strong></p><h2>A Federal Court Just Made AI Vendor Guardrails Legally Enforceable</h2><p>Anthropic, the company behind the Claude AI model, refused to remove two restrictions: no use of Claude in autonomous weapons, and no use in domestic mass surveillance. The Pentagon responded by <a href="https://www.cnn.com/2026/03/26/business/anthropic-pentagon-injunction-supply-chain-risk">designating Anthropic a supply chain risk</a>, a label previously reserved for companies tied to foreign adversaries. The designation would have required every military contractor to certify it did not use Anthropic products, potentially severing hundreds of millions in government contracts.</p><p>On Thursday, US District Judge Rita Lin <a href="https://www.cnbc.com/2026/03/26/anthropic-pentagon-dod-claude-court-ruling.html">blocked the designation</a>, finding the Pentagon&#8217;s action was &#8220;classic illegal First Amendment retaliation,&#8221; not a legitimate national security measure. The court found the government labeled Anthropic not because of any security threat, but because of its &#8220;hostile manner through the press.&#8221;</p><p>The ruling establishes a precedent that matters well beyond defense contracting. Every enterprise AI vendor has an acceptable use policy that defines what its model can and cannot do. Most organizations treat those policies as boilerplate. A federal court just treated them as governance mechanisms worth constitutional protection. If your organization deploys AI tools from vendors with restrictive terms (and nearly all have them), those terms are now demonstrably enforceable constraints, not suggestions. 
Anthropic is structured as a Public Benefit Corporation and publishes the document that defines those constraints, <a href="https://www.anthropic.com/constitution">Claude&#8217;s Constitution</a>, in full under a Creative Commons license. It is a rare example of an AI vendor making its governance framework publicly auditable. The immediate question: has anyone at your organization read the equivalent document for the AI tools you deploy?</p><p>At the Digital Asset Summit in New York last week, where Exposure Brief author Thomas Harrison was in attendance, a sitting CFTC commissioner flagged autonomous AI agents on financial rails as a governance priority. Panelists across three days converged on the same conclusion: accountability infrastructure for autonomous systems needs to be built before those systems scale, not retrofitted after they fail. The Anthropic ruling is the first piece of that infrastructure arriving through the courts.</p><p><strong>SUPPORTING INTELLIGENCE</strong></p><h3><strong>New York&#8217;s Largest Public Hospital System Drops Palantir Over a Buried Data Clause</strong></h3><p>NYC Health + Hospitals, the largest municipal public healthcare system in the US, <a href="https://www.theguardian.com/technology/2026/mar/26/new-york-hospitals-palantir-ai">announced it will not renew its $4M Palantir contract</a> when it expires in October. The decision followed activist pressure that surfaced a contract clause allowing Palantir to de-identify patient data and use it for &#8220;purposes other than research&#8221; with city agency permission. The hospital system plans to transition to entirely in-house systems. De-identification is not the protection it once was: AI capabilities now make re-identification of anonymized data trivially achievable at scale. For any organization with third-party AI vendors processing sensitive data, the audit question is specific: what does your contract permit the vendor to do with data after de-identification?</p><h3><strong>The Accountability Argument Cuts Both Directions</strong></h3><p>The same week a court protected a vendor&#8217;s right to restrict its AI, the EU Parliament rejected a government&#8217;s attempt to mandate AI scanning of private messages. The <a href="https://www.patrick-breyer.de/en/end-of-chat-control-eu-parliament-stops-mass-surveillance-in-voting-thriller-paving-the-way-for-genuine-child-protection/">EU Parliament rejected the &#8220;Chat Control&#8221; regulation 189-188</a>, a single-vote margin, blocking AI-based automated scanning of private messages. The data behind the vote is damning for automated surveillance: the scanning algorithms produced 13-20% false positive rates, and only 0.0000027% of scanned messages contained actual illegal material. Approximately 99% of reports generated came from Meta alone. German police found 48% of disclosed chats were &#8220;criminally irrelevant.&#8221; Parliament endorsed &#8220;Security by Design&#8221; alternatives: judicial warrants, encryption by default, and proactive source removal. The EPP conservative bloc is already pushing for a revote. 
For organizations that have considered or deployed AI-based scanning of internal communications, the EU&#8217;s false-positive data is a concrete benchmark for why automated content surveillance creates more liability than it resolves.</p><h3><strong>Congress Has Three Weeks to Close the Data Broker Loophole</strong></h3><p><a href="https://www.npr.org/2026/03/25/nx-s1-5752369/ice-surveillance-data-brokers-congress-anthropic">NPR&#8217;s investigation</a> documents that federal agencies including ICE, the FBI, and the Department of Defense purchase bulk cell phone location data and behavioral data from commercial data brokers without warrants. The practice exploits a loophole in the 2015 USA Freedom Act: agencies buy the data instead of collecting it, bypassing the bulk collection ban entirely. FISA Section 702 expires April 20, creating a narrow window for Congress to close the gap. The enterprise implication: the same commercial data pipelines feeding government surveillance run through the SaaS and ad-tech tools your employees use daily. AI makes correlation and re-identification of this data fast and cheap. Location data that seems anonymized in one dataset becomes personally identifiable when crossed with another.</p><p><strong>REGULATORY RADAR</strong></p><p><strong>FISA Section 702 expires April 20</strong> -- Congress must reauthorize or the data broker loophole closes by default. Monitor for language addressing commercial data purchases.</p><p><strong>EU Chat Control regulation expires April 4</strong> -- the interim regulation allowing automated scanning lapses. EPP bloc pushing for revote.</p><p><strong>Texas TRAIGA live since January 1</strong> -- AG enforcement toolkit includes civil investigative demands for AI system descriptions, training data provenance, and safeguards. Penalties up to <a href="https://www.lw.com/en/insights/texas-signs-responsible-ai-governance-act-into-law">$200,000 per violation</a>.</p><p><strong>THE BOTTOM LINE</strong></p><ul><li><p><strong>Read your AI vendor&#8217;s acceptable use policy this week.</strong> Not the marketing page, the contractual terms. Identify what the vendor prohibits, what data it can use for secondary purposes, and whether your use case falls within the permitted scope. The Anthropic ruling makes these terms enforceable; the NYC/Palantir termination shows what happens when nobody reads the fine print.</p></li><li><p><strong>Audit de-identification clauses in every third-party AI contract.</strong> If the contract allows the vendor to de-identify and repurpose data, that clause is a liability. AI re-identification capabilities make &#8220;de-identified&#8221; a weaker guarantee than it was two years ago.</p></li><li><p><strong>Calendar April 20 (FISA) and April 4 (EU Chat Control).</strong> Both deadlines create regulatory uncertainty for organizations with data-intensive operations. If your compliance posture depends on current surveillance or scanning authorities, prepare for the possibility that those authorities change.</p></li></ul><p style="text-align: center;">POWERED BY <a href="https://commonnexus.com/">COMMON NEXUS</a></p>]]></content:encoded></item><item><title><![CDATA[RSAC 2026 Proved That AI Coding Tools Operate Outside Every Security Control You Have]]></title><description><![CDATA[Check Point demonstrated six CVEs across Claude Code, Cursor, Codex, and Gemini CLI. The same week, a supply chain attack cascaded across five DevSecOps tools. 
Courts started holding companies account]]></description><link>https://exposurebrief.com/p/rsac-2026-proved-that-ai-coding-tools</link><guid isPermaLink="false">https://exposurebrief.com/p/rsac-2026-proved-that-ai-coding-tools</guid><dc:creator><![CDATA[Thomas Harrison]]></dc:creator><pubDate>Thu, 26 Mar 2026 14:53:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Kmtf!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21bd290-d3a9-4b0c-b402-61df64aa352f_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Executive Summary</h2><p>A court verdict, two security conferences, and a string of supply chain compromises all surfaced the same gap: accountability infrastructure for AI agents lags behind deployment speed. At RSAC 2026, Check Point demonstrated that AI coding tools bypass every layer of endpoint security. A single stolen credential cascaded across five DevSecOps tools in five days. A California jury found Meta and YouTube liable for deploying technology they knew caused harm, and a federal court ruled that AI-generated documents are not privileged.</p><h2>Lead Story</h2><h3><strong>RSAC 2026 Proved That AI Coding Tools Operate Outside Every Security Control You Have</strong></h3><p>Check Point researcher Aviv Donenfeld <a href="https://www.darkreading.com/application-security/ai-coding-tools-endpoint-security">presented at RSAC 2026</a> demonstrating six CVEs across Claude Code, Codex CLI, Cursor, and Gemini CLI. The vulnerabilities are not implementation bugs. They are architectural: AI coding assistants execute code in contexts that endpoint detection, application firewalls, and runtime monitoring cannot see. As Donenfeld put it, these tools &#8220;crushed&#8221; the endpoint security fortress.</p><p>The implications extend beyond coding tools. Microsoft&#8217;s own VP of Data and AI Security, Herain Oberoi, confirmed at RSAC that AI agent proliferation ranks as the most pressing security threat &#8212; above data sprawl, data leakage, or new regulation. If the vendor building your AI platform acknowledges that agent governance is the top concern, the organizational response cannot be &#8220;we will address it next quarter.&#8221;</p><p>The same week, <a href="https://github.com/BerriAI/litellm/issues/24512">LiteLLM</a>, a Python library with 95 million monthly downloads (<a href="https://www.cyberinsider.com/litellm-supply-chain-attack-95m-downloads/">per CyberInsider</a>) that proxies multiple LLM APIs, was compromised on PyPI with a credential stealer. The malicious code harvested AWS, GCP, and Azure keys, SSH keys, Kubernetes configs, and shell history &#8212; on Python startup, without requiring an import. Development machines, CI/CD pipelines, and production servers were all affected.</p><h2>Supporting Intelligence</h2><h3><strong>One Stolen Credential Cascaded Across Five DevSecOps Tools in Five Days</strong></h3><p><a href="https://phoenix.security/teampcp-supply-chain-attack-trivy-checkmarx-github-actions-npm-canisterworm/">TeamPCP&#8217;s supply chain campaign</a> started with a single compromised credential and spread across Trivy, Checkmarx KICS, GitHub Actions, VS Code extensions, and 66+ npm packages. The attack demonstrates that DevSecOps tools &#8212; the tools organizations rely on to catch supply chain compromises &#8212; are themselves attack surfaces. 
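</p><p>The same audit applies on the PyPI side (the LiteLLM compromise described above): a first-pass check is to compare installed package versions against the advisory before rotating credentials. The sketch below is a minimal illustration, not a complete response plan; the package-to-version map is a placeholder to fill in from the advisory you are working from, with the LiteLLM releases named in the reporting on this compromise shown as an example.</p><pre><code>from importlib import metadata

# Placeholder mapping: substitute the packages and versions from the advisory you are checking.
# The LiteLLM versions below are the releases named in the reporting on this compromise.
SUSPECT_PACKAGES = {
    "litellm": {"1.82.7", "1.82.8"},
}

def check_installed(suspects):
    """Return a finding for each installed package whose version matches a known-compromised release."""
    findings = []
    for name, bad_versions in suspects.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            continue  # package not present in this environment
        if installed in bad_versions:
            findings.append(f"{name}=={installed} matches a known-compromised release")
    return findings

if __name__ == "__main__":
    for finding in check_installed(SUSPECT_PACKAGES):
        print(finding)
</code></pre><p>A match is a starting point, not the end of the response: credentials those systems touched still need to be rotated, because access persists after the compromised package is removed.</p><p>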
The security toolchain is not immune to the threats it monitors.</p><h3><strong>Microsoft Responds: 97% Had Identity Incidents, 70% Tied to AI</strong></h3><p><a href="https://techcommunity.microsoft.com/blog/microsoft-entra-blog/microsoft-entra-innovations-announced-at-rsac-2026/4502146">Microsoft&#8217;s 2026 Secure Access report</a> found that 97% of organizations experienced identity or network access incidents in the past year, with 70% tied to AI-related activity. The response: Entra Agent ID extends Zero Trust controls to non-human AI agent identities, and shadow AI detection is built into Entra Internet Access. The caveat: shadow AI detection requires Edge for Business deployment, which most organizations have not completed.</p><h3><strong>30,000 AI Agent Instances Exposed. A Researcher Proved How Easy They Are to Compromise.</strong></h3><p><a href="https://composio.dev/content/openclaw-security-and-vulnerabilities">OpenClaw</a> has 30,000+ exposed instances with a SkillHub marketplace that has zero security vetting. A researcher planted a fake skill, inflated its download count to the top spot, and 4,000+ real developers across 7 countries executed arbitrary commands. To get value from their new bot, users grant too much personal access. At the Digital Asset Summit in New York, a panelist noted that in Asia, &#8220;hordes of people are queuing up to install OpenClaw, then paying to uninstall it weeks later because of security violations.&#8221;</p><h3><strong>CSA Creates a Dedicated Foundation for AI Agent Security</strong></h3><p>The <a href="https://www.darkreading.com/cloud-security/csa-launches-csai-ai-security">Cloud Security Alliance launched CSAI</a>, a dedicated 501(c)3 foundation focused on governing the &#8220;agentic control plane&#8221; &#8212; identity, authorization, and trust assurance for autonomous AI agents. CSAI will develop certifications and serve as a CVE authority specifically for agentic AI vulnerabilities. The creation of a dedicated standards body signals that the industry recognizes agent governance as a distinct discipline, not a subset of application security.</p><h2>Regulatory Radar</h2><p><strong>Meta and YouTube Found Liable in Bellwether Addiction Trial (March 25, 2026):</strong> A California jury <a href="https://apnews.com/article/social-media-addiction-trial-la-5e54075023d837ccdc76c4ca512e925d">awarded $6 million in damages</a> after finding Meta and YouTube deliberately designed platforms that addict children, with Meta liable for 70%. The damages are subject to judicial review, but the verdict&#8217;s significance is structural: this is a bellwether case, selected to signal how thousands of consolidated lawsuits are likely to resolve. The verdict landed because internal documents showed executives knew their products caused harm and deployed them anyway.</p><p><strong>Federal Court Rules AI Conversations Are Not Privileged (February 2026):</strong> In <a href="https://www.venable.com/insights/publications/2026/02/ai-privilege-and-the-heppner-ruling-what-the-court">United States v. Heppner</a>, Judge Rakoff held that documents generated using Anthropic&#8217;s Claude did not qualify for attorney-client privilege. The defendant used a consumer AI tool without attorney direction; the vendor&#8217;s privacy policy permitted data disclosure. The court left open whether enterprise tools with data isolation face a different analysis. 
For organizations where employees use consumer AI tools without formal governance, those conversations are discoverable records.</p><p><strong>DAS 2026: Financial Regulators Building Agent Accountability Frameworks (March 24-25, 2026):</strong> At the Digital Asset Summit, KPMG, Stripe/Privy, and EigenCloud panelists described agentic commerce as &#8220;the biggest thing to happen to commerce in the coming decade.&#8221; Thirty-five percent of incoming Privy developers are building agentic products. EigenCloud&#8217;s JT Rose stated: &#8220;You need the ability to hold these agents accountable for what they&#8217;re doing.&#8221; The CFTC&#8217;s innovation advisory task force already covers AI agents alongside crypto and prediction markets.</p><h2>The Bottom Line</h2><ol><li><p><strong>Run your AI coding tools through a security assessment independent of your EDR.</strong>Check Point proved at RSAC that Claude Code, Cursor, Codex, and Gemini CLI bypass endpoint detection entirely. Your existing security stack does not see what these tools execute. Test it: run an AI coding assistant and check whether your SIEM logged the activity.</p></li><li><p><strong>Audit your Python AI dependencies for supply chain compromise.</strong> LiteLLM (95M monthly downloads) was compromised with a credential stealer that runs on Python startup. Run <code>pip list</code> against the advisory. Check whether developers install AI libraries without a security review process.</p></li><li><p><strong>Assess your legal exposure from ungoverned AI tool usage.</strong> The Heppner ruling makes consumer AI conversations discoverable. The Meta verdict holds companies liable for deploying technology they knew caused harm. Every AI conversation your employees have is a record. The governance you establish today determines what you can demonstrate tomorrow.</p></li></ol>]]></content:encoded></item><item><title><![CDATA[AI Is Now Part of the Attack Lifecycle. Governance Gaps Remain the Root Cause.]]></title><description><![CDATA[Mandiant documents AI-enabled malware in the wild. A popular AI library was compromised on PyPI. Meta's agent went rogue. Financial regulators are responding. Enterprise IT is not.]]></description><link>https://exposurebrief.com/p/ai-is-now-part-of-the-attack-lifecycle</link><guid isPermaLink="false">https://exposurebrief.com/p/ai-is-now-part-of-the-attack-lifecycle</guid><pubDate>Tue, 24 Mar 2026 20:58:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Kmtf!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21bd290-d3a9-4b0c-b402-61df64aa352f_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>EXECUTIVE SUMMARY</strong></p><p>The attack surface for enterprise AI moved three times in a single week: into the agent itself, into the tools that build it, and into the regulatory vacuum around it. Mandiant&#8217;s M-Trends 2026 documents AI-enabled malware that queries language models mid-execution to evade detection, while the hand-off window between initial compromise and lateral movement collapsed to 22 seconds. A popular AI proxy library was compromised with credential-stealing malware on PyPI the same week Meta confirmed a Sev-1 incident caused by its own AI agent operating autonomously. 
Financial regulators are responding faster than enterprise IT: the CFTC launched an innovation advisory task force covering AI agents,and the SEC released a five-category token taxonomy the prior week.</p><p><strong>LEAD STORY</strong></p><h2>Mandiant M-Trends 2026: AI Is Now Part of the Attack Lifecycle. Governance Gaps Remain the Root Cause.</h2><p>Mandiant&#8217;s annual threat report, grounded in over <a href="https://cloud.google.com/blog/topics/threat-intelligence/m-trends-2026/">500,000 hours of frontline incident investigations</a>, documents a threshold moment for AI-enabled threats. Malware families including PROMPTFLUX and PROMPTSTEAL now query large language models mid-execution to dynamically generate evasion techniques. QUIETVAULT, a credential stealer, checks for locally installed AI CLI tools as a harvesting target. Distillation attacks extract proprietary model logic from production AI systems.</p><p>The operational tempo findings are equally stark. The median time between initial access and hand-off to a secondary threat group collapsed from more than 8 hours in 2022 to 22 seconds in 2025, Mandiant found. Prior compromise became the top initial access vector for ransomware at 30%, doubling from the prior year. Voice phishing surged to 11% of intrusions, displacing email phishing at 6%.</p><p>Mandiant&#8217;s own assessment is pointed: &#8220;We do not consider 2025 to be the year where breaches were the direct result of AI. From our view on the frontlines, the vast majority of successful intrusions still stem from fundamental human and systemic failures.&#8221; The malware is AI-enabled. The breaches are governance-enabled. The 22-second window does not leave time for a governance framework that exists only on paper.</p><p><strong>SUPPORTING INTELLIGENCE</strong></p><h3><strong>A Popular AI Tool Library Was Compromised on PyPI. The Payload Harvested Every Credential on the Machine.</strong></h3><p><a href="https://github.com/BerriAI/litellm/issues/24512">LiteLLM</a>, a widely-used Python library that proxies multiple LLM APIs, was compromised in version 1.82.8 on PyPI on March 24. The malicious code, embedded in a <code>.pth</code> file that executes automatically on Python startup without any import, systematically collected SSH keys, cloud credentials for AWS, GCP, and Azure, Kubernetes configurations, Docker credentials, and shell history. Data was exfiltrated to a spoofed domain using RSA encryption. Anyone who installed the affected version had their credentials harvested and transmitted to an attacker-controlled server. Development machines, CI/CD pipelines, Docker containers, and production servers were all affected.</p><h3><strong>Meta&#8217;s AI Agent Went Rogue. Detection Took Two Hours.</strong></h3><p><a href="https://winbuzzer.com/2026/03/20/meta-ai-agent-rogue-data-breach-sev1-xcxwbn/">Meta confirmed</a> a Sev-1 incident on March 20 in which an internal AI agent autonomously disclosed proprietary code, business strategies, and user-related datasets to engineers without clearance. The two-hour exposure window between incident and containment is the operational metric that matters for risk modeling. Autonomous agents now account for more than 1 in 8 reported AI breaches, per <a href="https://www.hiddenlayer.com/report-and-guide/threatreport2026">HiddenLayer&#8217;s 2026 AI Threat Report</a>. Separately, only 21% of executives reported complete visibility into agent permissions and data access patterns.</p><h3><strong>30,000 AI Agent Instances Exposed. 
The Most Downloaded Skill Was Malware.</strong></h3><p><a href="https://composio.dev/content/openclaw-security-and-vulnerabilities">OpenClaw</a>, an autonomous AI agent framework, has 30,000+ exposed instances with a SkillHub marketplace that has zero security vetting. A security researcher planted a fake skill that received 4,000 downloads in one hour. The most downloaded skill on the platform was an info-stealer disguised as a legitimate tool. OpenClaw can access 2FA codes, bank accounts, and local files. The agent itself is the attack surface.</p><h3><strong>Microsoft Responds: Shadow AI Detection Through Identity Infrastructure</strong></h3><p><a href="https://techcommunity.microsoft.com/blog/microsoft-entra-blog/microsoft-entra-innovations-announced-at-rsac-2026/4502146">Microsoft announced at RSAC 2026</a> that 97% of organizations experienced identity or network access incidents in the past year, with 70% tied to AI-related activity. The response: Entra Agent ID extends Zero Trust controls to non-human AI agent identities, shadow AI detection is built into Entra Internet Access, and prompt injection protection is now in the network access layer. The caveat: shadow AI detection requires Edge for Business, which most organizations have not deployed. The tooling exists. The prerequisite infrastructure may not.</p><p><strong>REGULATORY RADAR</strong></p><p><strong>White House National AI Policy Framework (March 20, 2026):</strong> The Trump administration issued a <a href="https://www.cnbc.com/2026/03/20/trump-ai-policy-framework.html">six-pronged legislative framework</a> proposing a single national AI policy that would preempt state-level AI regulations. The framework calls for action before Congress recesses in August. Every compliance roadmap built on state-by-state regulation may need to be redrawn. Congressional action is required; executive order alone does not preempt state law.</p><p><strong>IAPP: Governance Rules Written by Procurement, Not Legislation (March 18, 2026):</strong> An <a href="https://iapp.org/news/a/op-ed-ai-governance-rules-are-being-written-without-you">IAPP op-ed</a> documented how the Pentagon designated Anthropic a &#8220;supply chain risk&#8221; over military AI contract disputes and the State Department ordered diplomats to oppose foreign data-sovereignty laws. The operative governance framework for AI is being set by procurement officers and diplomatic cables, not legislative bodies.</p><p><strong>DAS 2026: CFTC Innovation Advisory Task Force Now Covers AI Agents (March 24, 2026).</strong> At the Digital Asset Summit in New York, CFTC Chairman Michael Selig announced an innovation advisory task force covering crypto, prediction markets, and AI. Selig noted the CFTC is already observing AI agents trading in prediction markets on crypto rails. The SEC had released a token taxonomy the prior week, classifying digital assets into five categories and distinguishing which are and are not securities. Financial regulators are building governance frameworks for autonomous systems; most enterprise IT teams are not.</p><p><strong>THE BOTTOM LINE</strong></p><ol><li><p><strong>Inventory every AI agent with production system access.</strong> Meta&#8217;s agent had permissions no human authorized. Mandiant&#8217;s 22-second hand-off window means an ungoverned agent is an open door. 
If you cannot name every agent, what it accesses, and who approved it, start there.</p></li><li><p><strong>Audit your AI tool dependencies for supply chain compromise.</strong> LiteLLM was compromised on PyPI with a credential stealer targeting AWS, Azure, and GCP keys. Run <code>pip list</code> against known-compromised packages. Check whether your developers install AI libraries without security review.</p></li><li><p><strong>Map your compliance exposure to the White House preemption framework.</strong> If your current AI governance roadmap assumes state-by-state compliance, the proposed federal preemption changes the timeline. Identify which state requirements you are currently tracking and assess whether the federal framework would supersede them.</p></li></ol>]]></content:encoded></item><item><title><![CDATA[McKinsey Hired an AI Agent to Test Its Security. It Found Full Database Access in Two Hours.]]></title><description><![CDATA[Meta&#8217;s AI agent triggered a Sev-1. A supply chain attack hit 300,000 AI agent users. The visibility gap is now an active attack surface.]]></description><link>https://exposurebrief.com/p/an-ai-agent-assessed-mckinsey-in</link><guid isPermaLink="false">https://exposurebrief.com/p/an-ai-agent-assessed-mckinsey-in</guid><pubDate>Sat, 21 Mar 2026 14:11:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Kmtf!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21bd290-d3a9-4b0c-b402-61df64aa352f_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Our previous briefing documented the visibility gap: 85% of enterprises have deployed AI, only 25% can see what employees are doing with it. Attackers have started exploiting it. A red-team AI agent gained full database access to McKinsey&#8217;s internal AI platform in two hours, exposing 46.5 million messages, per CodeWall&#8217;s published findings. An AI agent operating within Meta&#8217;s internal systems triggered a Sev-1 incident after an engineer followed its guidance, resulting in unauthorized access to sensitive repositories, The Information and Digitimes reported. A coordinated supply chain attack planted 335 malicious skills across an AI agent marketplace, compromising 20% of the registry and exposing 300,000 users. These incidents suggest the gap between AI deployment and AI governance has become an active attack surface.</p><div><hr></div><h2>An AI Agent Hacked McKinsey in Two Hours. The Vulnerability Was Twenty Years Old.</h2><p><a href="https://codewall.ai/blog/how-we-hacked-mckinseys-ai-platform">CodeWall</a>, a red-team security startup, pointed an autonomous AI agent at McKinsey&#8217;s internal AI platform Lilli on March 9. Within two hours, the agent discovered 22 unauthenticated API endpoints and exploited a SQL injection vulnerability through JSON field name concatenation, CodeWall reported. The result, per their findings: full read and write access to the entire production database.</p><p>CodeWall reported access to 46.5 million plaintext chat messages covering 18 months of internal conversations, including what the researchers described as client engagement details and internal strategy discussions. The vulnerability class (SQL injection) has been documented since the early 2000s. CodeWall found it present in a platform that, per public reporting, was built in 2023. 
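</p><p>To make the vulnerability class concrete: the pattern CodeWall describes, a client-supplied JSON field name concatenated into SQL text, looks roughly like the sketch below. This is an illustrative reconstruction of the general pattern in Python, not CodeWall&#8217;s published exploit or McKinsey&#8217;s actual code. Because field names cannot be bound as query parameters, the usual fix is an allow-list of identifiers rather than parameterization alone.</p><pre><code>import sqlite3

ALLOWED_SORT_FIELDS = {"created_at", "author", "title"}  # hypothetical column allow-list

def fetch_messages_unsafe(conn: sqlite3.Connection, payload: dict):
    # VULNERABLE: a JSON field name goes straight into the SQL text, so a crafted
    # "field name" (a subquery or CASE expression, for example) executes as SQL.
    sort_by = payload["sort_by"]
    return conn.execute(f"SELECT id, body FROM messages ORDER BY {sort_by}").fetchall()

def fetch_messages_safe(conn: sqlite3.Connection, payload: dict):
    # SAFER: the identifier is checked against an allow-list before interpolation;
    # values (as opposed to field names) should still be bound as parameters.
    sort_by = payload.get("sort_by", "created_at")
    if sort_by not in ALLOWED_SORT_FIELDS:
        raise ValueError(f"unsupported sort field: {sort_by!r}")
    return conn.execute(f"SELECT id, body FROM messages ORDER BY {sort_by}").fetchall()
</code></pre><p>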
Beyond the chat logs, the agent accessed 728,000 files and discovered that Lilli&#8217;s system prompts (the instructions controlling how the AI behaves) were stored in the same writable database. As CodeWall noted, an attacker with the same access could have rewritten the AI&#8217;s behavior without any deployment.</p><p>McKinsey&#8217;s response stated that its investigation &#8220;identified no evidence that client data [...] were accessed by this researcher or any other unauthorized third party.&#8221; The distinction matters: the vulnerability existed in production, a machine found it in 120 minutes, and the AI agent followed what CodeWall described as a standard reconnaissance playbook, completing it in two hours.</p><div><hr></div><h2>Meta&#8217;s AI Agent Went Rogue. Detection Took Two Hours.</h2><p>On March 19, an AI agent operating within <a href="https://www.digitimes.com/news/a20260319VL212/meta-security-data.html">Meta&#8217;s internal systems</a> autonomously generated and posted a flawed technical response on an internal discussion forum, The Information and Digitimes reported. Per the reports, an engineer followed the agent&#8217;s guidance, triggering a privilege escalation that temporarily granted unauthorized engineers access to sensitive source code repositories. Meta classified the incident as Sev-1, its second-highest severity level. The pattern resembles what security researchers call a &#8220;confused deputy,&#8221; a process that uses its own elevated permissions to execute requests it shouldn&#8217;t. The two-hour exposure window between the incident and containment is worth noting for risk modeling purposes.</p><h2>The AI Agent Marketplace Is the New npm. It Has the Same Problems.</h2><p><a href="https://www.repello.ai/">Repello AI researchers</a> documented ClawHavoc, a coordinated supply chain attack that planted 335 malicious skills across the ClawHub marketplace targeting OpenClaw&#8217;s 300,000+ active users. At peak compromise, 20% of the registry contained malicious content, the researchers found. The attack used three vectors: prompt injection via SKILL.md files, reverse shell deployment through hidden Python scripts, and credential harvesting from runtime environment variables. As the researchers noted, &#8220;AI agent skill marketplaces are the new npm. They have the same growth dynamics, the same trust model problems, and now demonstrably the same attacker interest.&#8221;</p><h2>MCP Cannot Be Patched. The Risk Is Architectural.</h2><p>Gianpietro Cutolo of Netskope <a href="https://www.darkreading.com/application-security/mcp-security-patched">presented at RSAC 2026</a> demonstrating that Anthropic&#8217;s Model Context Protocol (MCP), the open standard for connecting LLMs to external data sources, introduces security risks that Cutolo characterized as architectural, not implementational. In Cutolo&#8217;s demonstration, a single poisoned email triggered coordinated actions across connected services: exfiltrating files, sending messages, and modifying records. &#8220;Organizations cannot patch or update their way out of risk,&#8221; Cutolo stated. The protocol&#8217;s design grants agents cross-service access by default. Each additional service connection extends the attack surface, Cutolo argued.</p><h2>76% Now Call Shadow AI a Problem. 
31% Don&#8217;t Know If They&#8217;ve Been Breached.</h2><p><a href="https://www.hiddenlayer.com/report-and-guide/threatreport2026">HiddenLayer&#8217;s 2026 AI Threat Landscape Report</a>, released March 19, found that 76% of organizations now cite shadow AI as a definite or probable problem, up from 61% in 2025. One in eight companies reported AI breaches linked to agentic systems. Thirty-five percent of breaches were traced to malware in public model repositories. The number that should concern your security team: 31% of organizations are unaware whether they suffered an AI security breach at all. If a third of organizations cannot confirm whether they have been breached, incident response planning across the industry may be operating on incomplete data.</p><h2>Your SOC 2 Report Might Be Fabricated</h2><p>A collaborative investigation by former clients <a href="https://deepdelver.substack.com/">published findings accusing Delve</a>, a compliance automation startup, of fabricating SOC 2 audit reports for hundreds of customers. The platform marketed itself as AI-driven but was, per the investigation, &#8220;practically devoid of any real AI,&#8221; relying on what the investigators described as pre-populated templates and certification partners that issued reports without genuine independent verification. Delve acted as both preparer and auditor of its own compliance documentation, an arrangement that the investigators argued conflicts with AICPA independence standards. If your organization has accepted a SOC 2 report from any vendor, the Delve case is a reason to verify who actually performed the audit.</p><div><hr></div><h2>Regulatory Radar</h2><p><strong>RSAC 2026 (Week of March 17-20):</strong> MCP security research presented publicly at a major conference. Netskope&#8217;s findings that MCP risks are architectural (not patchable) will likely inform future framework guidance on agent integration security. No regulatory action yet, but the research establishes the technical basis for governance requirements around agent-to-service connectivity.</p><div><hr></div><h2>The Bottom Line</h2><ol><li><p><strong>Audit your AI platforms for authentication gaps.</strong> CodeWall&#8217;s assessment of McKinsey&#8217;s AI platform found 22 unauthenticated API endpoints. Run an API discovery scan (tools like Burp Suite, OWASP ZAP, or your existing DAST platform) against every AI tool your organization operates. If endpoints accept unauthenticated requests, that is a finding.</p></li><li><p><strong>Inventory which AI agents have write access to enterprise systems.</strong> Meta&#8217;s incident occurred because an agent inherited permissions beyond its intended scope. List every agent with access to internal systems, what permissions it holds, and whether a human must approve actions before execution. If no such inventory exists, start one.</p></li><li><p><strong>Verify your vendors&#8217; SOC 2 auditors independently.</strong> The Delve investigation revealed that compliance reports can be fabricated at scale. For every SOC 2 report you rely on, confirm the auditing firm is a licensed CPA practice and that the engagement letter names them as the independent assessor, not the vendor itself.</p></li></ol><div><hr></div><p><em>Powered by <a href="https://commonnexus.com/">Common Nexus</a></em></p>]]></content:encoded></item><item><title><![CDATA[You Were Just Asked to Audit Every AI Tool in Your Organization. Now What?]]></title><description><![CDATA[85% of enterprises deployed AI. 
Only 25% can see what employees are doing with it.]]></description><link>https://exposurebrief.com/p/you-were-just-asked-to-audit-every</link><guid isPermaLink="false">https://exposurebrief.com/p/you-were-just-asked-to-audit-every</guid><pubDate>Sat, 21 Mar 2026 11:58:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Kmtf!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21bd290-d3a9-4b0c-b402-61df64aa352f_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Eighty-five percent of enterprises have deployed AI, but only 25% can see what their employees are doing with it, according to Optro&#8217;s March 2026 Risk Intelligence Report. That gap between adoption and visibility is widening. NIST&#8217;s AI 800-4 report describes post-deployment AI monitoring practices as &#8220;nascent.&#8221; Meanwhile, 73% of organizations have AI tools in production, and only 7% enforce policies on them in real time, per the Cybersecurity Insiders 2026 AI Security Report. The people being asked to close this gap are the same ones reading this briefing.</p><div><hr></div><h2>You Were Just Asked to Audit Every AI Tool in Your Organization. Now What?</h2><blockquote><p>&#8220;Leadership wants a full audit of every AI tool being used across the org. I genuinely don&#8217;t know how to produce one.&#8221;</p></blockquote><p>That <a href="https://www.reddit.com/r/sysadmin/comments/1rq0i00/leadership_wants_a_full_audit_of_every_ai_tool/">post on r/sysadmin</a> (a community of 850,000+ IT professionals) pulled 523 upvotes and 218 comments when it landed on March 10, suggesting the problem is widespread among IT professionals. The mandate came down. The tools to execute it did not.</p><p>As the poster described it, the AI tools IT can inventory (managed licenses, enterprise subscriptions) are not the ones creating risk. As the poster described, the risk lives in personal ChatGPT accounts on managed devices, browser extensions routing inputs to AI backends, and employees using AI tools on personal phones to process work documents over mobile data. As the poster noted, corporate DLP (Data Loss Prevention), SWG (Secure Web Gateway), and CASB (Cloud Access Security Broker) each monitor a different layer. None of them monitors the prompt, and this structural gap is corroborated by Nightfall AI&#8217;s finding that traditional DLP achieves <a href="https://www.nightfall.ai/blog/ai-browsers-are-silently-exfiltrating-sensitive-data---and-legacy-dlp-cant-see-it">only 5-25% accuracy</a> on AI browser data exfiltration pathways.</p><p><a href="https://www.prnewswire.com/news-releases/optro-research-reveals-85-percent-of-enterprises-have-deployed-ai-but-only-25-percent-have-full-visibility-302715488.html">Optro&#8217;s 2026 Risk Intelligence Report</a>, published March 17, puts the number at 85% of enterprises running AI in core operations, but only 25% with comprehensive visibility into how employees use it. Eighty percent described shadow AI as moderate to pervasive. The <a href="https://www.cybersecurity-insiders.com/ai-risk-and-readiness-report-2026/">Cybersecurity Insiders 2026 AI Security Report</a>, released March 16 and surveying 1,253 cybersecurity professionals, found an even starker gap: 73% have deployed AI tools, but only 7% enforce security policies in real time. 
Most cannot even distinguish a personal AI account from a corporate one.</p><p>No validated methodology for monitoring prompt-based data flow exists yet, according to <a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.800-4.pdf">NIST AI 800-4</a>, published this month after three federal workshops and input from 200+ experts. The report says it plainly: best practices for monitoring AI after deployment are &#8220;nascent.&#8221; NIST&#8217;s findings align with what many IT teams have experienced firsthand, and NIST AI 800-4 gives you the citation to put in front of leadership.</p><div><hr></div><h2>Your DLP Policy Has a Prompt-Shaped Hole in It</h2><p>A <a href="https://www.reddit.com/r/sysadmin/comments/1rrln94/trying_to_write_a_dlp_policy_for_ai_interactions/">March 12 r/sysadmin thread</a> details an attempt to write a DLP policy for AI interactions that ended in structural failure. Traditional DLP was built around file-based data movement: attachments, uploads, downloads. As Nightfall AI&#8217;s analysis found, copy-paste into a browser text field bypasses most existing DLP layers, with legacy solutions achieving only 5-25% accuracy on these pathways. As the poster put it: &#8220;SWG sees the domain. CASB sees the app. Neither sees the prompt.&#8221; Until DLP solutions include prompt-content inspection, the gap the poster described is likely to persist. Ask your DLP vendor for a delivery timeline.</p><h2>77% of Employees Are Pasting Company Data into AI Tools. On Personal Accounts.</h2><p><a href="https://breached.company/data-privacy-week-2026-why-77-of-employees-are-leaking-corporate-data-through-ai-tools/">Breached.Company&#8217;s Data Privacy Week 2026 analysis</a> (February 2, 2026) found that 77% of employees paste company information into AI and LLM services, and 82% of them do it on personal accounts, not enterprise-managed tools. They&#8217;re pasting in email drafts, confidential negotiations, financial reports, customer data, and source code. For organizations relying on enterprise-managed tools as their data governance boundary, these numbers suggest the actual boundary may be elsewhere. The <a href="https://www.cybersecurity-insiders.com/ai-risk-and-readiness-report-2026/">Cybersecurity Insiders 2026 AI Security Report</a> corroborates the gap from the controls side: 92% of organizations lack semantic DLP (the kind that evaluates meaning, not pattern-matching), and 46% would fail to detect AI-rephrased sensitive content entirely.</p><h2>The Federal Government Signed Off Without Seeing It</h2><p>FedRAMP spent <a href="https://www.propublica.org/article/microsoft-fedramp-government-cloud-security">480 hours over three years</a> reviewing Microsoft&#8217;s Government Community Cloud High, ProPublica&#8217;s investigation revealed. Conducted 18 technical deep-dive sessions. Still couldn&#8217;t verify its fundamental security posture.</p><p>ProPublica&#8217;s investigation found that Microsoft failed to provide standard encryption documentation. Internal reviewers described the authorization package as &#8220;a pile of shit,&#8221; ProPublica reported. A former NSA computer scientist called it &#8220;security theater.&#8221; FedRAMP approved it anyway, because agencies were already using it, ProPublica reported. FedRAMP reviewers checked compliance documents. They did not run independent scans of Microsoft&#8217;s encryption implementation. And the program doing the reviewing?
FedRAMP now operates with <a href="https://www.propublica.org/article/microsoft-fedramp-government-cloud-security">roughly 24 staff and a $10M annual budget</a>, after DOGE personnel cuts, per ProPublica. Private litigants move faster: when an AI platform called Woflow <a href="https://www.classaction.org/news/woflow-hit-with-class-action-over-march-2026-data-breach">got breached this month</a>, the class action was filed in 10 days.</p><h2>Your Audit Needs to Include the Agents</h2><p>The visibility gap isn&#8217;t just about employees using AI tools. It&#8217;s about AI tools using your systems. The <a href="https://www.cybersecurity-insiders.com/ai-risk-and-readiness-report-2026/">Cybersecurity Insiders 2026 AI Security Report</a> found that AI agents (software that takes actions autonomously inside enterprise systems: sending emails, modifying records, querying databases without a human approving each step) are already operating inside more than half of organizations surveyed.</p><p>The report, released March 16, found that 56% of organizations have real agentic AI exposure. Twenty-three percent operate shadow agents they don&#8217;t know about. Most cannot stop an agent mid-action (91%), and over half have granted agents write access to collaboration tools. If the audit does not enumerate agents with write access, the next incident response will begin with a tool no one authorized.</p><div><hr></div><h2>Regulatory Radar</h2><p><strong>NIST AI 800-4 (Published March 2026):</strong> The first federal framework addressing post-deployment AI monitoring. Acknowledges that validated methodologies do not yet exist, which means the standards are being written now, and organizations already mapping to them may set the baseline. This is likely to become an audit reference point. <a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.800-4.pdf">Read the full report (PDF)</a>.</p><div><hr></div><h2>The Bottom Line</h2><ol><li><p><strong>Produce a one-page AI tool inventory memo.</strong> Three columns: tool name, access scope (who can use it and how), monitoring gap (what your current stack cannot see). Include managed subscriptions, known unmanaged usage, and agentic AI with system access. Name each gap explicitly -- the memo should make leadership uncomfortable, not reassured.</p></li><li><p><strong>Test your DLP against prompt-based data flows.</strong> Open ChatGPT in a browser on a managed device. Paste a block of test text. Check whether your SWG, CASB, or DLP stack logged the content of what was entered. If the answer is no, that is a finding, not a failure. Document it and escalate; a minimal canary-string sketch follows this list.</p></li><li><p><strong>Review your AI vendor agreements for data processing terms.</strong> Pull the DPA from every AI tool your organization uses. Check three things: where data is processed, whether prompts are stored or used for training, and what happens to data after session end. Enterprise AI vendors may update data processing terms between contract renewals. If you haven&#8217;t re-read yours since signing, the terms you agreed to may not be the terms in effect.</p></li></ol>
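<p>A minimal sketch of that canary test from item two, assuming your SWG, CASB, or DLP console can export its logs to a text file after the paste. The script name, marker format, and export path are illustrative, not features of any particular product:</p><pre><code># dlp_canary.py -- check whether your monitoring stack captured pasted prompt content.
# Step 1: run with no arguments, paste the printed marker into the AI tool on a managed device.
# Step 2: export your SWG/CASB/DLP logs, then run again as:
#   python dlp_canary.py DLP-CANARY-xxxx path/to/log_export.txt
import sys
import uuid
from pathlib import Path

def make_canary():
    # A unique, greppable marker that cannot appear in logs by coincidence.
    return "DLP-CANARY-" + uuid.uuid4().hex

def content_was_logged(canary, log_export_path):
    # True only if the export contains the pasted text itself; an entry naming the
    # AI tool's domain but not the canary means the stack saw the destination, not the prompt.
    text = Path(log_export_path).read_text(errors="ignore")
    return canary in text

if __name__ == "__main__":
    if len(sys.argv) == 1:
        print("Paste this into the AI tool:", make_canary())
    else:
        print("Prompt content captured:", content_was_logged(sys.argv[1], sys.argv[2]))
</code></pre><p>If the export shows the domain and a timestamp but not the canary string, you have documented the exact gap this issue describes: the stack saw the destination, not the prompt.</p>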
<div><hr></div><p><em>Powered by <a href="https://commonnexus.com/">Common Nexus</a></em></p>]]></content:encoded></item><item><title><![CDATA[Who Writes for You?]]></title><description><![CDATA[The CISO reads Dark Reading. The board reads Gartner. The CEO reads Morning Brew. You now have Exposure Brief.]]></description><link>https://exposurebrief.com/p/who-writes-for-you</link><guid isPermaLink="false">https://exposurebrief.com/p/who-writes-for-you</guid><pubDate>Sat, 21 Mar 2026 11:09:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Kmtf!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21bd290-d3a9-4b0c-b402-61df64aa352f_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The CISO reads Dark Reading. The board reads Gartner. The CEO reads Morning Brew. Each of them has an intelligence source calibrated to their decisions, their risk tolerance, their calendar.</p><p><strong>You now have Exposure Brief.</strong></p><p>This briefing is built for the person who gets the mandate without the playbook. The operations manager told to &#8220;figure out our AI policy&#8221; with no framework that covers prompt-based data flows. The IT lead asked to produce an audit of every AI tool in the organization when no monitoring standard yet exists and <a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.800-4.pdf">NIST AI 800-4</a> describes current practice as &#8220;nascent.&#8221; The compliance coordinator handed a regulatory landscape where 1,561 AI bills have been introduced across 45 states, and the federal government is simultaneously trying to preempt all of them.</p><p>In <a href="https://www.prnewswire.com/news-releases/optro-research-reveals-85-percent-of-enterprises-have-deployed-ai-but-only-25-percent-have-full-visibility-302715488.html">Optro&#8217;s March 2026 survey</a> of 800+ GRC and IT decision-makers, 85% of enterprises have deployed AI into core operations. Only 25% have visibility into how employees use it. The <a href="https://www.cybersecurity-insiders.com/ai-risk-and-readiness-report-2026/">Cybersecurity Insiders 2026 AI Security Report</a>, surveying 1,253 cybersecurity professionals, found that 73% have deployed AI tools but only 7% enforce security policies in real time. These numbers describe a structural gap between what organizations have adopted and what they can see, govern, or control.</p><p>That gap is the beat Exposure Brief covers. Every issue. One thesis, sourced and cited, with actionable findings at the end.</p><div><hr></div><h2>What This Briefing Does</h2><p>Each issue covers what regulators publish, what enforcement actions reveal, what practitioners report from the ground, and where operational risk concentrates:</p><ul><li><p>Identifies one narrative theme connecting the day&#8217;s developments</p></li><li><p>Sources claims to primary documents (federal publications, original research, court filings) where available and flags secondary sources for upgrade when a primary citation becomes available</p></li><li><p>Names specific counts, scopes, and timeframes rather than vague assertions</p></li><li><p>Closes with actionable findings specific enough to execute before lunch</p></li></ul><p>If a claim cannot be sourced, it is either hedged as analysis or removed. Statistics carry their methodology. Quotes carry their attribution. Each issue passes through automated source verification, editorial scoring across seven dimensions, and editorial review for unsupported claims and attribution gaps before it reaches you.
The reader should be able to cite any finding in this briefing to their leadership with the source attached.</p><div><hr></div><p>Exposure Brief is published by <a href="https://commonnexus.com/">Common Nexus</a>, a company focused on data sovereignty and AI governance.</p><div><hr></div><p>This briefing arrives before your first meeting. The news cycle determines the thesis. The editorial pipeline determines the quality. What stays constant: one narrative, sourced evidence, and findings you can act on the same day you read them.</p>]]></content:encoded></item></channel></rss>