
Copilot for Microsoft 365 Risk Assessment QuickStart Guide Explained

3 min read

A few weeks ago, Microsoft released an important guide for its Copilot customers – a document that helps organizations carry out a risk assessment of Copilot for Microsoft 365.

To put it simply, the QuickStart Guide provides an overview of the potential AI risks and how these risks are mitigated for Copilot for Microsoft 365. The message here is loud and clear: while Microsoft owns the underlying infrastructure, a great deal of the responsibility for securing their own data is left to customers.

What does the guide say? 

According to Microsoft, this QuickStart Guide aims to assist organizations in performing a comprehensive risk assessment of Copilot for Microsoft 365 and serves as an initial reference for risk identification, mitigation exploration, and stakeholder discussions. Organizations that want to perform a risk assessment as part of their due diligence process can rely on this document to get started. 
At Rencore, we read the 39-page QuickStart Guide (so that you don’t have to) and have summed up its key messages. To begin with, let's look at the scope of the document. The QuickStart Guide covers three major aspects:

  • AI Risks and Mitigations Framework
  • Sample Risk Assessment 
  • Additional Resources 

First, the AI Risks and Mitigations Framework outlines the primary categories of AI risks and how Microsoft addresses them – at both the company level and the service level. Second, the guide demonstrates a sample risk assessment by presenting a set of real customer-derived questions and answers that help users assess their risk posture. Finally, the guide provides additional resources that help organizations learn about AI risk management in more detail.

The guide further elaborates on several AI risks and their corresponding mitigations, such as bias, disinformation, overreliance and automation bias, ungroundedness (hallucination), privacy, resiliency, data leakage, and security vulnerabilities.

Shared responsibility model 

After months of effort driving Copilot adoption in organizations, Microsoft now seems to be addressing the security concerns Microsoft 365 users have expressed from the start. With the release of this new risk assessment guide, Microsoft appears to acknowledge that there is a lot to be done between purchasing Copilot for Microsoft 365 and adopting the tool. This is no doubt a significant step in ensuring the security and compliance of AI-driven solutions.

Microsoft proposes something called the AI Shared Responsibility Model for Copilot adoption and usage. What does this essentially mean? In simple terms, the shared responsibility model acknowledges that customers of Copilot also bear responsibility for ensuring that all usage of, and collaboration with, Copilot remains secure.

Microsoft maintains that many risks can be mitigated by appropriate and responsible use and encourages customers to train their users in ‘understanding the limitations and fallibility of AI’. Depending on the options chosen when implementing Copilot, customers take responsibility for different parts of the operations and policies needed to use AI safely.


The diagram illustrates how areas of responsibility are divided between the customer and Microsoft depending on the type of deployment.

Microsoft urges customers to keep the shared responsibility model in mind while carrying out a risk assessment. Why? Because it helps them accurately identify which risks are mitigated by Microsoft and which must be mitigated by their own organization.

All roads lead to governance  

The QuickStart Guide serves as an initial reference for organizations, allowing them to prioritize risk identification, formulate mitigation strategies, and initiate stakeholder discussions. Microsoft plans to update the Security Development Lifecycle (SDL) regularly to keep it aligned with evolving threats and to better account for AI risks, including generative AI, as new risks emerge.

However, the underlying goal is for organizations to collaborate efficiently and securely. For this, it is not enough to treat security and risk assessment as standalone pillars; organizations must consider overall cloud collaboration governance. Is your organization Copilot-ready?

To learn more about governing AI tools such as Microsoft Copilot, we recommend downloading our whitepaper Regulating your AI companion: Best practices for Microsoft Copilot governance.
