
DPIA update Copilot: from red flag to controlled adoption

Blogs
Artificial Intelligence
Femke Cornelissen
16-9-2025
This article was automatically translated using Azure Cognitive Services. If you find mistakes, please get in touch.

Microsoft is listening. And they deliver. For Europe. For the Netherlands. And that is important. The new DPIA (Data Protection Impact Assessment) on Microsoft 365 Copilot marks a tipping point. Previously, the advice from SURF and SLM Rijk was strict and clear: do not use Copilot for the time being. That advice has now shifted to starting cautiously, under conditions. The red flag is gone. The light is orange. Organizations now have room to use Copilot responsibly.

What is a DPIA – and who does this research? 

A DPIA is a mandatory risk analysis under the GDPR whenever the processing of personal data is likely to pose a high risk. Think of it as a periodic privacy inspection for software, like the Dutch APK for cars:

  • What personal data is processed?  
  • What risks are there? 
  • What measures should be taken? 

There are two parties that play a key role in the DPIA: 

  • SURF – for higher education and research. 
  • SLM Rijk – for the central government and ministries. 

Their earlier advice was a clear brake. Their new "yes, provided that" is a starting signal.

From red to orange: the improvements 

Microsoft has made five interventions that make this possible: 

  1. Diagnostic Service Data – a retention period of 18 months, after which the data can be deleted. This is the data Copilot needs to function, and you can configure this yourself via Purview.

  2. Transparency in DSARs – access requests are now understandable and useful. 

  3. Required Service Data – >180 categories of events with purpose, retention period and context. 

  4. EU Data Boundary – phase 3 completed: all data, including support and log files, now remains in the EU/EFTA.

  5. Advice SURF and SLM Rijk – from "no commitment" to "responsible start under conditions." 

Why this is a tipping point

  • Checks and balances work. Critical questions from SURF and SLM Rijk have led to real improvements.
  • Microsoft sets the standard in AI transparency. Where many AI solutions remain a black box, Microsoft offers documentation, agreements and transparency. 
  • The responsibility is shifting. The foundation has been laid. Now organizations have to put governance, policy and adoption in order themselves. 

My vision: why Microsoft is unique in this

I work daily with organizations that are considering the step to AI. And I always see the same pattern: most AI tools are impressive in terms of functionality, but completely opaque in their operation. They do not provide insight into what data is processed, where that data goes, and what risks you run. That essentially makes them a black box. For a CIO or CISO, that's an unacceptable risk. Microsoft does things differently. 

  • They are willing to subject their services to DPIAs and to work with parties such as SURF and SLM Rijk.
  • They provide documentation at a level of detail (>180 categories of Required Service Data) that you don't see elsewhere.
  • They are structurally building Responsible AI, with policy, tooling (DLP, MIP, Purview) and an EU Data Boundary that gives regulators confidence. 

That's precisely why I can say: Copilot is the most secure and enterprise-ready AI solution in Europe today. Where others say: "Use at your own risk", Microsoft says: "Use, but with frameworks, governance and control." 

Advice to CIOs, CISOs and IT pros

The red flag is gone, but without a plan, starting is irresponsible. This is my advice: 

  1. Start by defining scenarios where AI will be used and determine the risks for each scenario. This way you can quickly start with scenarios that pose few risks.  

  2. Work from policy – draw up an AI policy based on existing frameworks (Algorithm Framework, Generative AI Guide). 

  3. IAM in order – organize a Data Access Reset: clean Teams and SharePoint, close guest access, secure role-based access. 

  4. Data classification and labeling – implement MIP labels and link DLP rules. 

  5. Monitoring and auditing – use Purview Audit and Insider Risk Management to monitor Copilot usage. 

  6. Training and adoption – train employees not only in prompts, but also in responsible use. 
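The Data Access Reset in step 3 can be partly scripted. Below is a minimal sketch, assuming you have exported an account list (e.g. from Entra ID) into plain records; the field names and the 90-day idle threshold are assumptions, not a Microsoft API.

```python
from datetime import datetime, timedelta

def stale_guests(users, now, max_idle_days=90):
    """Flag guest accounts that have not signed in within the idle window.

    `users` is a hypothetical export of directory accounts; the field
    names ("upn", "type", "last_sign_in") are assumptions for this sketch.
    """
    cutoff = now - timedelta(days=max_idle_days)
    return [u["upn"] for u in users
            if u["type"] == "Guest" and u["last_sign_in"] < cutoff]

users = [
    {"upn": "guest1@ext.example", "type": "Guest",
     "last_sign_in": datetime(2024, 10, 1)},
    {"upn": "guest2@ext.example", "type": "Guest",
     "last_sign_in": datetime(2025, 9, 1)},
    {"upn": "employee@corp.example", "type": "Member",
     "last_sign_in": datetime(2023, 1, 1)},
]
print(stale_guests(users, now=datetime(2025, 9, 16)))
# → ['guest1@ext.example']: idle guest flagged; members are out of scope.
```

The output is a review list, not a deletion list: the goal of the reset is that a human decides which guest access to close before Copilot starts surfacing whatever those accounts can reach.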

Wortell's roadmap: from vision to practice 

At Wortell, we have developed a clear approach to guide organizations in this: 

  1. Inspire – executive briefings, translating DPIA results into practice, showing use cases. 

  2. Innovate – start with pilots, clean up IAM, implement labeling and DLP, train employees. 

  3. Implement – secure phased rollout, monitoring and governance. 

  4. Improve – maturity scans, translate lessons learned into policy, continuously improve adoption. 

Why Wortell? Because we combine experience: 

  • Technical: IAM, MIP, DLP, Purview, EUDB. 
  • Security & compliance: DPIAs, GDPR, governance. 
  • Adoption & change: training, awareness, culture. 

This makes us the partner to make organizations Copilot-ready safely and responsibly. 

Conclusion 

The DPIA update is not a detail. It is a tipping point. 

  • SURF and SLM Rijk now say: a responsible start is possible. 
  • Microsoft has made improvements that are unique in the market. 
  • Copilot is the most transparent and controlled AI solution available today. 

My message to leaders: The conditions are there. It's time to make choices. With policy, governance and training, you can use Copilot responsibly – and let the Netherlands lead the way in AI adoption. This is no longer a brake. This is a starting shot. 
