While you are reading this article, your employees are already using generative AI services, whether you like it or not. They're uploading potentially sensitive company data into ChatGPT or similar services: creating document summaries, asking for business strategies, improving management presentations, or simply running a basic, old-school grammar check.
To the average employee, this all seems harmless—but you know better. Sensitive and valuable company data is being shared with public generative AI tools, which may then be used to train their large language models (LLMs). Ever wondered where those Studio Ghibli-style images come from?
You can’t blame your colleagues. Generative AI services allow them to be more productive, creative, and innovative. ChatGPT is incredibly easy to use, very friendly for a non-human, and delivers impressive results. What isn’t there to like? We have entered the era of Shadow AI.
In this article, I'll show you how to keep an eye on Shadow AI in your organization and share some practical tips to help you keep your (sensitive) data safe.
Monitoring your sensitive data with Shadow AI
Microsoft Purview offers multiple services to help you monitor Shadow AI within your organization. Let’s start by taking a look at Data Security Posture Management for AI.
Data Security Posture Management for AI
Start by opening Data Security Posture Management for AI in the Microsoft Purview portal. The overview page gives you insight into the sensitive information types shared with Microsoft Copilot and other AI apps. For example:
I’ll admit I had to look up Claude and Perplexity. As you might’ve guessed, they’re both AI assistants. That said, keep in mind that these kinds of detections can sometimes be false positives; you’ll need to dig a little deeper to confirm whether sensitive data is actually being shared with public AI services. Still, the important takeaway here is that you now have clear evidence your employees are using generative AI tools. Clicking View details opens the activity explorer, which provides more detail about these results. For example:
Each activity provides additional information. For example:
These results are generated by a Data Loss Prevention (DLP) policy.
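If you prefer to work with this data outside the portal, the same DLP matches also land in the audit pipeline exposed by the Office 365 Management Activity API. Below is a rough Python sketch of what pulling them could look like; the tenant and app details are placeholders, and it assumes you have registered an Entra ID app with the ActivityFeed.Read permission and already started a subscription for the DLP.All content type.

```python
# Rough sketch: pulling DLP events from the Office 365 Management Activity API.
# Assumes an Entra ID app registration with the ActivityFeed.Read permission;
# tenant_id, client_id, and client_secret are placeholders you must supply,
# and a subscription for the DLP.All content type must already be started.
import requests

tenant_id = "<your-tenant-id>"
client_id = "<your-app-id>"
client_secret = "<your-app-secret>"

# Acquire an app-only token for the Management Activity API
token = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://manage.office.com/.default",
    },
).json()["access_token"]

headers = {"Authorization": f"Bearer {token}"}
base = f"https://manage.office.com/api/v1.0/{tenant_id}/activity/feed"

# List the available content blobs for the DLP.All content type
blobs = requests.get(
    f"{base}/subscriptions/content",
    headers=headers,
    params={"contentType": "DLP.All"},
).json()

# Each blob contains individual audit records; print a few key fields
for blob in blobs:
    for record in requests.get(blob["contentUri"], headers=headers).json():
        print(record.get("CreationTime"), record.get("Operation"), record.get("Workload"))
```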
Insider Risk Management
Insider Risk Management helps organizations identify and mitigate internal risks by using signals from across Microsoft 365 and other sources. It enables proactive detection of risky user behavior, such as data leaks, security policy violations, or Shadow AI usage. The reports section provides an analysis of potential risky AI usage (preview) in your organization. For example:
IRM includes a preview policy template for detecting risky AI usage:
I recommend customizing the policy to fit your organization’s specific needs instead of sticking with the default settings. Just a heads-up: this policy is still in preview. Microsoft might tweak the features or even make changes that break how it works—so it’s definitely something to keep in mind.
Defender for Cloud Apps
Microsoft Defender for Cloud Apps is a comprehensive Cloud Access Security Broker (CASB) solution that gives you deep visibility, control, and protection across your cloud apps and services. It integrates with Microsoft 365 and third-party applications, helping organizations spot shadow IT, prevent data leaks, and monitor risky user behavior across the cloud. Within Cloud Discovery, there’s a tab called Discovered Apps. One of the cool features here is a dedicated category for Generative AI. For example:
You can see more details by selecting a Generative AI App. For example:
You now have valuable insights to take appropriate action against the use of ChatGPT within your organization.
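If you want to slice this data yourself, you can export the discovered apps list to CSV and filter it for the Generative AI category. Here is a quick Python sketch; the column names are assumptions based on a typical export, so adjust them to match your file.

```python
# Quick sketch: filtering a Cloud Discovery export for generative AI apps.
# It assumes you exported the discovered apps list to CSV from Defender for
# Cloud Apps; the column names used below ("Name", "Category", "Risk score")
# are assumptions based on a typical export, so adjust them to your file.
import csv

with open("discovered_apps.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Keep only the apps placed in the Generative AI category
gen_ai_apps = [r for r in rows if "Generative AI" in (r.get("Category") or "")]

for app in gen_ai_apps:
    print(f'{app.get("Name")}: risk score {app.get("Risk score")}')
```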
Communication Compliance
Microsoft Purview Communication Compliance is a solution designed to help organizations detect, investigate, and act on inappropriate or risky communications within their digital environments. There are seven policy templates available:
Each template includes a location selection option that covers Generative AI:
Once your policy starts detecting messages, you can review each one individually. For example:
Reviewers can use the notice feature to directly contact the user involved. This feature is available when you open a pending alert. You can either create a new notification template or use an existing one. For example:
After clicking the blue Save button, the business user receives the email—and ideally adjusts their AI-related behavior accordingly.
Protecting your sensitive data from Shadow AI
We’ve gone over the different tools you can use to discover Shadow AI and monitor its usage in your organization. Now let’s take a look at how Microsoft Purview Data Loss Prevention (DLP) can help protect your sensitive data—especially with Endpoint DLP.
First, open the settings menu in the Microsoft Purview Admin Center. Here, you’ll need to configure your service domains and sensitive service domain groups. For example:
You use these choices in the rules of your Endpoint DLP policy. For example:
I recommend starting with a block-with-override approach before going all-in on full blocking. Be sure to communicate with your business users about why the policy exists and offer them alternative tools (Microsoft 365 Copilot, perhaps?). And last but not least: don’t forget to install the Microsoft Purview extension for Chrome. Without it, Endpoint DLP won’t work in Google Chrome.
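To make that block-with-override idea concrete, here is a purely illustrative Python sketch of the decision logic such a rule encodes. The domain group, domains, and actions are assumptions made up for the example, not an export of a real policy.

```python
# Purely illustrative: the decision logic a "block with override" Endpoint DLP
# rule encodes for sensitive service domains. The group name and domains are
# assumptions for this example, not a real policy export.
SENSITIVE_SERVICE_DOMAIN_GROUPS = {
    "Public generative AI": {"chat.openai.com", "claude.ai", "www.perplexity.ai"},
}

def evaluate_upload(destination_domain: str, contains_sensitive_info: bool) -> str:
    """Return the action Endpoint DLP would take for an upload attempt."""
    for group, domains in SENSITIVE_SERVICE_DOMAIN_GROUPS.items():
        if destination_domain in domains and contains_sensitive_info:
            # "Block with override": the user sees a warning, must give a
            # business justification, and the event is audited.
            return f"Block with override ({group})"
    return "Allow"

print(evaluate_upload("chat.openai.com", contains_sensitive_info=True))
print(evaluate_upload("contoso.sharepoint.com", contains_sensitive_info=True))
```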
Conclusion
We’ve focused on a selection of Microsoft tools and services you can use to monitor sensitive information in the context of Shadow AI. While these tools aren’t specifically built for Shadow AI, they’re highly effective for monitoring your broader Microsoft 365 environment—including SharePoint, OneDrive, Teams, and Microsoft 365 Copilot.
But protecting your data doesn’t stop with Endpoint DLP. We strongly recommend using sensitivity labels to classify and protect sensitive information across your environment.
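If you want to apply labels at scale rather than one file at a time, Microsoft Graph offers an assignSensitivityLabel action on drive items. Below is a hedged Python sketch; the IDs are placeholders, and you should verify the current permission and licensing requirements in the Graph documentation before relying on it.

```python
# Hedged sketch: applying a sensitivity label to a file with Microsoft Graph's
# driveItem assignSensitivityLabel action. The token, drive ID, item ID, and
# label ID are placeholders; check the Graph documentation for the current
# permission and licensing requirements before using this in production.
import requests

access_token = "<graph-access-token>"  # token with Files.ReadWrite.All
drive_id = "<drive-id>"
item_id = "<item-id>"
label_id = "<sensitivity-label-guid>"

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/drives/{drive_id}/items/{item_id}/assignSensitivityLabel",
    headers={"Authorization": f"Bearer {access_token}"},
    json={
        "sensitivityLabelId": label_id,
        "assignmentMethod": "standard",
        "justificationText": "Classified by the data security team",
    },
)

# The action is asynchronous: a 202 response includes an Operation-Location
# header you can poll for the result.
print(resp.status_code, resp.headers.get("Operation-Location"))
```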
Recently, Microsoft rolled out a preview feature that allows you to set Microsoft 365 Copilot as a policy location. For example:
Within the rule settings, you can now choose a sensitivity label that prevents Copilot from processing any content marked with that label. Before you deploy this policy, be aware of the following:
And with that—good luck on your AI and data security journey!
A note from Rencore
Establishing Data Loss Prevention (DLP) policies and applying sensitivity labels, as discussed in Jasper's blog, form a crucial foundation for effectively managing generative Shadow AI within your organization. That being said, there are still noticeable gaps in IT lifecycle governance for all AI instances, particularly when they are integrated across your collaboration, low-code, and no-code applications.
At Rencore, we have gathered insights from numerous customer interviews highlighting the importance of third-party tools that automate governance lifecycle management and more. These tools are essential for organizations to maintain control and empower users to utilize AI responsibly, with appropriate safeguards in place.
Rencore is a trusted partner for organizations, providing oversight of critical cloud collaboration, AI, and no-code/low-code workload environments. Our software empowers organizations to be secure, efficient, and cost-effective, ensuring the business remains competitive and well-prepared for future challenges.
To gain a comprehensive understanding of how Rencore can support you in navigating your entire AI governance journey, we encourage you to reach out to us today.