
Shadow AI in Microsoft 365: Find and Block It with Purview

Shadow AI leaks data without triggering a single alert. Use Entra Internet Access, Defender for Cloud Apps, and Microsoft Purview to find and block it in 4 steps.

An employee opens ChatGPT in their browser, pastes a contract, and asks it to summarize the key terms. No malware. No phishing link. No alert in your SIEM. Just a browser tab, and company data that has already left the building.

That’s shadow AI. Most organizations don’t find out they have a problem until after the data has already left.

On March 31, 2026, Microsoft made Entra Internet Access Shadow AI Detection generally available. It’s one of the least-talked-about GA announcements in the Microsoft security stack this year. This article walks you through what it does and how to pair it with Defender for Cloud Apps and Purview to close the gap, using Microsoft’s own 4-step deployment model.

Key Takeaways

  • Employees use AI tools for legitimate work: summarizing documents, rewriting text, debugging code. None of that looks malicious to your security stack.
  • Microsoft’s 4-step model: discover, block, enforce DLP, and govern. Each step requires different tools and licenses.
  • Entra Internet Access Shadow AI Detection went GA on March 31, 2026 and finds AI app usage at the network layer, before Defender for Cloud Apps sees it.
  • Local AI agents (like Claude Code) are an emerging blind spot that standard browser-based controls won’t catch.

Why does shadow AI bypass your existing security tools?

Standard security tooling looks for threats. Shadow AI doesn’t look like a threat. It’s HTTPS traffic to a legitimate domain that your firewall has no reason to block. ChatGPT, Claude, Gemini, Perplexity. All trusted TLS endpoints, all categorized as productivity tools by most URL filters. The people using them aren’t trying to cause an incident. They’re trying to get work done faster.

Your DLP rules likely watch for credit card numbers and patient identifiers moving to known file-sharing sites. They’re not watching for a Word document pasted into a chat interface. Your CASB sees sanctioned apps. It doesn’t see what people type into a browser tab outside those apps.

The gap isn’t a configuration error. It’s a category problem. Shadow AI is a new data egress channel that existing tools weren’t built to model.

The gap isn’t usually discovered during a security review. It shows up in an incident investigation, when someone asks where a specific document ended up and there’s no log entry that answers the question.

What does the Microsoft 4-step shadow AI model actually cover?

Microsoft published a full deployment model for preventing data leaks to shadow AI in Microsoft Learn. It addresses a gap that most organizations don’t have a structured answer for yet: which AI apps are running, which are blocked, and what happens to data that reaches an approved one. The four steps are: discover, block, enforce DLP, and govern. Each step builds on the previous one, and each requires a specific set of tools and licenses.

Microsoft 4-step shadow AI deployment model:

  • Step 1, Discover: Defender for Cloud Apps + Entra Internet Access
  • Step 2, Block: Conditional Access + Intune session policy
  • Step 3, Enforce DLP: Purview DLP AI app category
  • Step 4, Govern: Purview audit logs + Comms Compliance

Each step builds on the previous. Skipping discovery means enforcement has no data to act on.
Source: Microsoft Learn — Prevent data leak to shadow AI

Worth knowing upfront: this isn’t something you can configure entirely with standard M365 E3 or E5 licenses. Entra Internet Access requires the Microsoft Entra Suite or a Global Secure Access license. Defender for Cloud Apps app discovery requires M365 E5 or a standalone Defender for Cloud Apps license. Purview DLP for the AI app category works with the compliance add-ons included in E5 Compliance.

Step one: Discover what’s already running

You can’t block what you can’t see. The first priority is building a picture of which AI apps employees are actually using.

Defender for Cloud Apps has had app discovery for years. It ingests traffic logs from your firewall or endpoint agents and surfaces which cloud apps are in use, categorized by risk score. If you have M365 E5, you already have access to this. Go to the Defender XDR portal, navigate to Cloud Apps, and look at the Cloud Discovery dashboard. Filter by category for AI apps.

Entra Internet Access works at the network layer, which means it sees traffic before it even reaches Defender for Cloud Apps classification. It can identify AI app usage based on traffic patterns and domain resolution, giving you a discovery layer that’s faster and broader. This is what went GA on March 31, 2026. The Microsoft Tech Community announcement covers how to enable it from the Entra admin center under Global Secure Access.

In practice, organizations running both tools together see a meaningful difference in their discovered app count. Entra Internet Access tends to surface tools that never generate enough traffic to score highly in Defender’s cloud discovery, because users access them briefly and move on. Those are exactly the high-risk tools you’d otherwise miss.
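To make that comparison concrete, here is a minimal Python sketch of diffing the two discovery sources to find apps only the network layer sees. The app names and list shapes are illustrative assumptions; real data comes from the two portals' exports, not these literals.

```python
def network_only_apps(defender_discovered, entra_discovered):
    """Apps surfaced by Entra Internet Access at the network layer
    but absent from Defender for Cloud Apps cloud discovery."""
    return sorted(set(entra_discovered) - set(defender_discovered))

# Illustrative data only; real lists come from portal exports.
defender_apps = ["ChatGPT", "Gemini", "Perplexity"]
entra_apps = ["ChatGPT", "Gemini", "Perplexity", "Poe", "SomeNicheAI"]

gaps = network_only_apps(defender_apps, entra_apps)
# Apps in `gaps` are the briefly-used, low-traffic tools described above.
```

The set difference is trivial on purpose: the hard part is getting both exports, not comparing them.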

Step two: Tag and block unsanctioned apps

Discovery without action is just a report. Once you know which apps are in use, you need to tag them in Defender for Cloud Apps and apply enforcement.

In the Defender XDR portal, go to Cloud Apps > Cloud App Catalog. Find the AI tools you’ve identified and set each one as Sanctioned (approved), Unsanctioned (blocked), or leave it untagged (monitored). Sanctioned apps get your normal access policies applied. Unsanctioned apps trigger a block.

The block itself happens via one of two enforcement paths:

  • Microsoft Entra Conditional Access with a Defender for Cloud Apps session policy. This intercepts the session and blocks access based on the app’s tag.
  • Intune with the Microsoft Tunnel or a managed browser that enforces the block policy at the device level.

Neither of these is click-and-done. You’ll need a working Conditional Access policy that routes specific traffic through the Defender for Cloud Apps proxy. The Sharegate guide on tackling shadow AI with Purview and Defender has a clear walkthrough of the session policy configuration.

Don’t block everything on day one. Start with the highest-risk tools: consumer AI apps with no enterprise terms, no data processing agreements, and no retention controls. Block those first. Everything else can sit in monitored status while you build the governance picture.
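One way to derive that day-one block list is to filter a catalog export on exactly the attributes above: low risk score, no data processing agreement. A hedged sketch, where the field names are assumptions rather than the real Cloud App Catalog export schema:

```python
def day_one_block_list(apps, max_risk_score=5):
    """Consumer AI tools with no data processing agreement and a low
    Defender risk score get blocked first; the rest stay monitored."""
    return [a["name"] for a in apps
            if a["risk_score"] <= max_risk_score and not a["has_dpa"]]

catalog = [  # illustrative rows, not a real export
    {"name": "ChatGPT (consumer)", "risk_score": 4, "has_dpa": False},
    {"name": "Azure OpenAI", "risk_score": 9, "has_dpa": True},
    {"name": "RandomAIWriter", "risk_score": 2, "has_dpa": False},
]

to_block = day_one_block_list(catalog)
```

Everything not returned by the filter stays in monitored status while you build the governance picture.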

Step three: Stop sensitive data from leaving via Purview DLP

Blocking the app entirely isn’t always the right call. Some AI tools are genuinely useful for non-sensitive work. The better control for those cases is a DLP policy that allows access but blocks uploads of sensitive content.

Microsoft Purview now includes an AI app category in the DLP policy engine. This means you can write a condition that says: if a file or text contains content labeled Confidential or matches a sensitive information type, block the upload to any app in the AI app category. The category covers ChatGPT, Claude, Gemini, Copilot alternatives, and dozens of other AI services.

Here’s the minimal DLP rule to start with:

  1. In the Purview compliance portal, go to Data loss prevention > Policies > Create policy.
  2. Start from a custom policy (or use the AI template if it appears in your tenant).
  3. Set the location to Devices (requires Defender for Endpoint onboarding).
  4. Add a condition: Content contains sensitivity label “Confidential” OR a sensitive information type of your choice.
  5. Set the action: Block upload to cloud and restrict the scope to the AI app category.
  6. Set the mode to Audit first for two weeks before switching to enforce.

Audit mode is not optional here. You will get false positives, and you need to understand them before enforcement breaks someone’s legitimate workflow.

A pattern worth knowing from common Purview DLP audit reviews: the most frequent false positives in AI-category DLP policies come from internal training content and legal templates that carry Confidential labels but contain no actual sensitive data. Adding a secondary condition that requires both a label AND a specific sensitive information type cuts that noise substantially before you even touch the policy mode.
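The effect of that label-plus-SIT condition can be sketched over simulated audit events. The record schema here is illustrative, not the real Activity explorer export format:

```python
# Simulated DLP audit events: two labeled templates with no real
# sensitive data, one genuinely sensitive file.
events = [
    {"file": "training-deck.pptx", "label": "Confidential", "sit_matches": 0},
    {"file": "nda-template.docx", "label": "Confidential", "sit_matches": 0},
    {"file": "payroll.xlsx", "label": "Confidential", "sit_matches": 3},
]

# Label-only condition: fires on everything carrying the label.
label_only = [e["file"] for e in events if e["label"] == "Confidential"]

# Label AND sensitive-information-type condition: fires only where
# the content actually matched a SIT.
label_and_sit = [e["file"] for e in events
                 if e["label"] == "Confidential" and e["sit_matches"] > 0]
```

In this toy sample, the stricter condition drops the two template files and keeps only the payroll sheet, which is the behavior you want before switching to enforce.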

Step four: Govern and audit what you can’t block

Not every AI interaction can or should be blocked. Employees with approved AI tools are going to use them with work content. The question becomes: what went where, and can you reconstruct it if something goes wrong?

Microsoft Purview Communications Compliance can capture AI interaction logs if the AI tool sends responses back through a communication channel that Purview can intercept. For first-party Microsoft Copilot, this works well. For third-party tools, it depends entirely on whether those tools integrate with the Microsoft information protection stack.

For audit retention, check what you actually have. The default audit log retention is 180 days for all tenants. For post-incident investigations spanning more than 6 months, you’ll need Audit (Premium) with an E5 or E5 Compliance license, which extends retention to 1 year. Configure this in the Purview compliance portal under Audit > Audit retention policies before you need it.
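The retention math is worth encoding before an investigation needs it. A small sketch using the figures above (180 days by default, one year with Audit Premium); the function name is mine, not a Microsoft API:

```python
from datetime import date, timedelta

def event_still_retained(event_date, today, audit_premium=False):
    """Default unified audit log retention is 180 days;
    Audit (Premium) on E5 extends it to one year."""
    retention = timedelta(days=365 if audit_premium else 180)
    return (today - event_date) <= retention

# An incident 7 months back: gone on the default tier,
# still queryable with Audit (Premium).
default_tier = event_still_retained(date(2026, 1, 1), date(2026, 8, 1))
premium_tier = event_still_retained(date(2026, 1, 1), date(2026, 8, 1),
                                    audit_premium=True)
```

If your incident-response playbook assumes six-month lookbacks, the default tier already fails that assumption.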

An audit trail doesn’t replace a policy, but it gives you the evidence you need when the policy gets tested.

What about local AI agents? The new blind spot

Most security conversations about AI focus on browser-based tools. But there’s a growing category that doesn’t go through a browser at all: local AI agents.

Tools like Claude Code, OpenClaw, and similar developer-facing agents run locally on the endpoint and make API calls directly from the machine. They don’t appear in your cloud discovery logs. They don’t hit a URL your proxy can intercept. They’re a different class of shadow AI entirely.

On May 1, 2026, Microsoft made Agent 365 generally available. Part of its capability set includes discovery and runtime blocking of local AI agents on managed endpoints. If you have Defender for Endpoint deployed with full EDR coverage, you can use Agent 365 policies to detect when a local AI agent process is running and either alert or terminate it based on policy.

This is worth watching. Local AI agents are going to become the next wave of shadow AI as AI-native developers bring their own tools into corporate environments. The fact that Microsoft shipped agent-level controls at GA suggests they’re seeing the same pattern.

The interesting risk here isn’t just data exfiltration. Local AI agents can also read local file system content and make decisions about it autonomously. An agent with access to a developer’s local repo and an API key to an external model can extract source code without any browser traffic appearing at all. Standard DLP, CASB, and proxy controls have zero visibility into that path. Agent 365’s endpoint-level detection is the only current Microsoft control that addresses it.
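Until Agent 365 coverage is in place, one stopgap is a process-name heuristic over endpoint telemetry. This sketch is purely illustrative: the pattern list is an assumption, a real control would act on EDR signals rather than names, and agents can trivially rename their binaries.

```python
# Assumed patterns for known local AI agent binaries; maintain your own.
KNOWN_AGENT_PATTERNS = ("claude", "openclaw", "aider")

def flag_local_agents(process_names):
    """Flag process names matching known local AI agent patterns.
    A naming heuristic only; treat hits as triage leads, not verdicts."""
    return [p for p in process_names
            if any(pat in p.lower() for pat in KNOWN_AGENT_PATTERNS)]

flagged = flag_local_agents(["chrome.exe", "claude-code", "svchost.exe"])
```

Even this crude check surfaces the class of tool that never appears in cloud discovery logs, which is the whole point of the blind spot.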

Do you have the right licenses for all of this?

Before you build a deployment plan, check what you actually have. Shadow AI controls in Microsoft’s stack aren’t uniform across license tiers, and the gaps between E3, E5, and add-ons are bigger than most organizations expect when they start.

Capability and minimum license:

  • Defender for Cloud Apps app discovery: M365 E5 or Defender for Cloud Apps standalone
  • Entra Internet Access Shadow AI Detection: Microsoft Entra Suite or Global Secure Access add-on
  • Purview DLP (AI app category): M365 E5 Compliance or Purview add-on
  • Agent 365 local agent detection: Defender for Endpoint Plan 2 (included in E5)
  • Communications Compliance (AI logs): M365 E5 Compliance

(Figure: shadow AI capabilities per license tier, E3 / E5 / E5 Compliance. Shadow AI Detection requires the GSA add-on on all tiers.)

If you’re on M365 E3 today, you have meaningful gaps. The most accessible starting point is Defender for Cloud Apps app discovery via endpoint agent (available with Defender for Endpoint Plan 1, which is included in Business Premium and above). That at least gets you the discovery layer before you invest in the enforcement stack.

FAQ

These are the questions that come up most often when organizations start working through this deployment model.

Does Entra Internet Access Shadow AI Detection require Global Secure Access agent on every device?

Yes. Entra Internet Access works via the Global Secure Access client, which must be deployed to managed endpoints. The client routes traffic through Microsoft’s network edge where the AI detection logic runs. Devices without the client are invisible to this control. Deployment is via Intune for managed Windows devices.

Can Purview DLP block content pasted directly into a chat box, not just file uploads?

Yes, with endpoint DLP enabled and Defender for Endpoint deployed. Endpoint DLP can detect sensitive content being pasted into browser inputs on monitored devices, not just file uploads. Microsoft’s deployment guidance is explicit on this: the Defender for Endpoint sensor must be active and the DLP policy location set to Devices, not just Exchange or SharePoint.

What’s the difference between tagging an app as Unsanctioned and creating a Conditional Access block?

Tagging an app as Unsanctioned in Defender for Cloud Apps marks it for enforcement, but the actual block only happens if you have a session policy or Conditional Access policy that acts on that tag. The tag itself is not a block. It’s a signal. The Conditional Access policy is what turns that signal into a denied session. Both pieces must be in place.

Will blocking shadow AI tools create user pushback?

Probably, yes, especially for tools people have been using for months without restriction. The most effective pattern is: audit mode first (two to four weeks), then targeted outreach to heavy users of blocked tools, then enforcement with a clear alternative (like Microsoft Copilot or an approved AI assistant). Enforcement without communication creates shadow IT of a different kind: users who find workarounds.

How do I find out if we’ve already had a shadow AI incident?

Start with the Defender for Cloud Apps Cloud Discovery dashboard. Filter activity logs for AI app categories and look for large data transfers, repeated uploads, or activity from accounts with access to sensitive content. If you have Audit (Premium) enabled, you can query the unified audit log for file downloads from SharePoint followed shortly by cloud uploads to AI-category domains from the same user. That correlation is the clearest signal of a past incident.
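That download-then-upload correlation is easy to sketch once the audit log is exported. The record shape below is deliberately simplified; real unified audit log entries are JSON with many more fields, and the operation names here are illustrative stand-ins:

```python
from datetime import datetime, timedelta

def suspicious_sequences(rows, window_minutes=30):
    """Pair each SharePoint download with an AI-category cloud upload
    by the same user inside the time window."""
    pairs = []
    for d in rows:
        if d["op"] != "FileDownloaded":
            continue
        for u in rows:
            if (u["op"] == "CloudUploadToAIApp" and u["user"] == d["user"]
                    and timedelta(0) <= (u["time"] - d["time"])
                    <= timedelta(minutes=window_minutes)):
                pairs.append((d["user"], d["time"], u["time"]))
    return pairs

log = [  # illustrative records, not the real export schema
    {"user": "alice", "op": "FileDownloaded",
     "time": datetime(2026, 6, 1, 9, 0)},
    {"user": "alice", "op": "CloudUploadToAIApp",
     "time": datetime(2026, 6, 1, 9, 12)},
    {"user": "bob", "op": "FileDownloaded",
     "time": datetime(2026, 6, 1, 10, 0)},
]

hits = suspicious_sequences(log)
```

The nested loop is fine for a triage script over a few thousand rows; sort by user and time before scaling it up.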


Shadow AI isn’t a future risk. It’s already running in most tenants, and some organizations never find out what left.

The four steps in Microsoft’s deployment model (discover, block, DLP, govern) are the right structure. Start with discovery, because you can’t protect what you can’t see. Add blocking for the highest-risk tools. Build the DLP layer for everything else. And set up audit retention before you need it, not after.

If you do nothing else today, go to the Defender XDR portal, open Cloud Discovery, filter by AI app category, and look at what’s there. What you find will tell you exactly how urgent the rest of this is.
