Case Study

How Microsoft 365 Copilot builds customer trust with responsible AI

With EY’s help, a leading AI solution became ISO 42001 certified and enhanced its responsible AI practices.

1

The better the question

How does confidence in AI accelerate adoption?

Putting responsible AI practices to the test for Microsoft 365 Copilot.

Artificial intelligence (AI) has become a driving force behind how businesses innovate, operate and thrive in today’s dynamic business landscape. Microsoft is at the forefront of this transformation, equipping organizations to unlock unprecedented opportunities with AI-powered solutions. This is evident by the launch of Microsoft 365 Copilot, an AI-powered tool that provides real-time intelligent assistance to enable users to enhance their creativity, productivity, and skills.
 

Since its launch in November 2023, Microsoft 365 Copilot has become one of the most widely recognized and adopted AI-powered solutions in the market by integrating into the daily workflows of millions of users across industries. Microsoft 365 Copilot’s exponential adoption globally would not have been possible without Microsoft’s unwavering commitment to developing trusted AI solutions. Without trust, AI struggles to gain the user confidence needed for meaningful adoption.
 

Standards to demonstrate trust, including ISO/IEC 42001:2023 (“ISO 42001”) (via EY.com US), which is the only certifiable and auditable responsible AI development standard available today, aim to provide a framework for the responsible design and development of AI solutions. Being able to trust AI solutions is top of mind for decision-makers across industries. In the EY AI Pulse Survey from December 2024 (via EY.com US), 61% of senior leaders affirmed that they are increasingly focused on responsible AI, up from 53% just six months prior, and they expect the topic to be even more relevant to them in the next six months. More recently, the Global EY Responsible AI Pulse Survey revealed that while 72% of executives report having integrated AI across enterprise initiatives, only one-third have implemented controls to govern AI.
 

To underscore its commitment to responsible AI, Microsoft turned to Ernst & Young LLP, a long-time adviser and alliance partner. The goal was to evaluate the responsible AI practices applied to Microsoft 365 Copilot and collaborate on opportunities to further enhance existing practices for improved customer trust, engineering efficiency and initial regulatory readiness. EY teams in the US assembled a multidisciplinary team of practitioners with an understanding of AI technologies and responsible AI practices to match the strength of Microsoft’s engineering and compliance teams.
 

“Achieving ISO 42001 certification for Microsoft 365 Copilot allows us to demonstrate the application of an industry leading AI risk management framework,” said Oliver Bell, GM Trusted Platform for Microsoft. “But for us, it was never just about checking the compliance box. Our collaboration with EY teams was driven by a shared commitment to putting responsible AI practices into action, strengthening customer trust and continuously improving how we build and deliver AI to customers at scale.”
 

While ISO 42001 served as a catalyst, the real value of the collaboration lay in scaling responsible AI from principle to practice — embedding it deeply into the organization’s design and delivery processes.

Cropped shot of a businessman and businesswoman using laptop together in a modern office
2

The better the answer

An evaluation focused on future-proofing capabilities

While ISO 42001 presented an immediate need, the collaboration was focused on scaling responsible AI practices.

Since 2016, Microsoft has dedicated hundreds of engineers, lawyers, and policy experts to establish a foundation for its responsible AI practices.

Building on that foundation, EY teams have rigorously evaluated and tested Microsoft 365 Copilot AI features against ISO 42001 requirements, ultimately resulting in Microsoft 365 Copilot successfully achieving ISO 42001 certification in March 2025. This exercise allowed EY to systematically validate how Microsoft’s responsible AI principles were embedded into the design and day-to-day operations of Microsoft 365 Copilot, demonstrating not just compliance, but resilience and readiness at scale.

EY’s evaluation surfaced key themes that demonstrated Microsoft’s preparedness for ISO 42001 certification offering actionable insights on how to advance their own responsible AI journeys:

Operationalizing responsible AI policy in measurable steps

To bring responsible AI policy to life, its principles must be translated into clear, actionable guidance that engineering teams can apply in practice. At Microsoft, this translation happened through a structured impact assessment process for AI features supporting Microsoft 365 Copilot. Following Microsoft’s Responsible AI Standard, these assessments prompted teams to anticipate potential risks to stakeholders and define appropriate mitigations. The process is grounded in policy-aligned questions and supported by practical tools such as software development kits for user feedback, safety filters to block harmful content, and secure AI Application Program Interfaces for approved large language models (LLMs) that help teams meet responsible AI requirements. EY teams examined how these assessments were used to embed policy into product development, offering a lens into how Microsoft operationalizes responsible AI in ways that are measurable, repeatable and scalable.

Evaluating harms in the context of the AI features

Understanding how AI systems behave under pressure is essential to building resilient and trustworthy solutions. For Microsoft 365 Copilot, this involved evaluating potential risks, such as the generation of ungrounded or harmful content or vulnerability to jailbreak attempts, within the context of each feature’s intended use. Microsoft conducted simulated harms evaluations to anticipate how these risks could emerge and to design layered defenses that mitigate them. EY teams validated that risk evaluation was embedded into the development lifecycle, ensuring AI features were proactively stress-tested against adversarial risks before release to safeguard users from real-world threats upon go-live. 

Implementing safety systems to perform responsible AI monitoring at scale

LLMs can occasionally generate content that poses risks to users or organizations. To proactively address this, Microsoft 365 Copilot incorporated multiple layers of AI safety systems. These include classifiers that detect potentially harmful prompts or outputs and trigger mitigations such as suppressing unsafe responses or redirecting the user. Microsoft also uses metaprompting to shape system behavior in line with responsible AI principles, such as avoiding speculation or emotional inference when summarizing meetings. EY teams assessed how AI safety systems were embedded into the product’s architecture, verifying that Microsoft 365 Copilot’s design included layered safeguards like classifiers and metaprompting confirming that Microsoft’s approach to responsible AI is both intentional and technically robust. 

Continuously monitoring AI features in production

Responsible AI doesn’t end at deployment. It demands continuous oversight. Microsoft 365 Copilot is actively monitored in production to validate that it performs reliably and safely, aligned with policy expectations. Engineering teams track a range of metrics, including success rates, uptime, accuracy and indicators of misuse such as jailbreak attempts. These metrics feed into intelligent alerting systems that detect anomalies in real time, enabling rapid response from on-call teams. EY teams examined telemetry pipelines and alerting mechanisms to ensure continuous monitoring was not only in place but actionable, enabling responsiveness to deviations in AI behavior.

Keeping humans at center in the responsible AI equation 

Even the most advanced AI systems require human judgment to responsibly guide their development. At Microsoft, this role is championed by designated responsible AI leads embedded within product teams and empowered to oversee risk management throughout the AI lifecycle. These individuals collaborate with engineers to assess how features will be used, anticipate potential risks and apply lessons learned from past deployments. They also serve as a bridge to Microsoft’s Office of Responsible AI, ensuring consistent governance across teams. EY teams confirmed that responsible AI champions were strategically embedded across product teams, empowering them to guide risk-aware development and uphold governance throughout the AI lifecycle. Research conducted in collaboration between Ernst & Young LLP and the Saïd Business School at the University of Oxford, shows that placing humans at the center of major transformations increases the likelihood of success by a factor of 2.6 times compared with transformations that do not prioritize a human-centric approach. 

While ISO 42001 certification was a key milestone, it marked the beginning—not the end—of Ernst & Young LLP’s collaboration with Microsoft. Drawing upon the experience of EY teams operationalizing responsible AI at scale and Microsoft’s leadership in the responsible AI arena, Microsoft’s M365 Trusted Platform Team and EY professionals worked closely with Microsoft’s compliance, legal, responsible AI and engineering teams.

Together, they explored strategies to reinforce existing responsible AI practices, future-proof their approach and embed responsible AI into the pace of innovation.

With these insights and feedback, EY teams defined the following guiding principles to sustain and scale responsible AI practices: 

  • Embedding responsible AI into day-to-day workflows to build operational muscle memory 
  • Continuously validating the design and effectiveness of controls 
  • Equipping teams with tools and templates to accelerate adoption 
  • Capturing real-world feedback from users to inform improvements 
  • Delivering ongoing training to keep pace with evolving risks and requirements 

 These principles were translated into tangible initiatives, to make sure responsible AI wasn’t just a policy, but a practice. By investing in these areas, Microsoft strengthened its ability to deploy AI responsibly, adapt to emerging regulations, and build lasting trust with users. 

“Operationalizing trust in AI requires more than a checklist—it demands a system of accountability that evolves with both the technology and the people who use it,” said Andrea Craig, Principal in Technology Consulting at Ernst & Young LLP. “Our work with Microsoft focused on building that system—one that’s resilient, scalable, and designed to adapt as AI capabilities and customer expectations continue to evolve.”

Businessmen and woman sitting at round table in cafe, using laptops, discussing and planning
3

The better the world works

The trust multiplier effect

This collaboration supports responsible use of AI for millions of Microsoft 365 Copilot users.

Since 2023, nearly 70% of Fortune 500 companies have integrated Microsoft 365 Copilot into their daily workflows, and their employees can now perform their daily tasks more effectively and efficiently.This remarkable scale reflects not only Microsoft’s innovation but also its commitment to building AI responsibly—with Ernst & Young LLP as a trusted partner in that journey.

“At Microsoft, we truly believe that earning and keeping our users’ trust is what gives us permission to build cutting-edge AI functionality,” said Drena Kusari, Microsoft VP & GM, Shared Services & RAI. “We pride ourselves in putting equal thought into the functionality of our features as we do into the responsible AI practices that encourage lasting interactions with Microsoft 365 Copilot.”

Microsoft 365 Copilot is now one of the few AI solutions globally to achieve ISO 42001 certification. That means the benefits of this achievement extend beyond Microsoft—they will be democratized for organizations that have already or are planning on deploying Microsoft 365 Copilot. Organizations deploying Microsoft 365 Copilot can now accelerate their own compliance efforts by leveraging a solution that’s been rigorously tested, independently validated by Microsoft’s external ISO 42001 auditor, and built with responsible AI considerations at its core. And whether those roles are administrative or core to the mission of driving value, employees that use Microsoft 365 Copilot gain a powerful accelerator, in which rote tasks are handled more quickly and reliable information and outputs are merely seconds away.

This is the multiplier effect of trust, when responsible AI is done right, it doesn’t just protect — it accelerates adoption. AI enables innovation at scale, builds confidence with users, and prepares organizations to lead in an AI-driven future.


Contact us

Like what you’ve seen? Get in touch to connect with our specialized teams and learn more.

Explore case studies

Learn how EY teams help our clients solve their toughest issues and shape their future with confidence.