Managing hallucination risk in LLM deployments at the EY organization

This technical paper explores why hallucinations occur, what they mean for professional services, and the practical strategies that reduce risk across the artificial intelligence (AI) pipeline.

Large language models (LLMs) are transforming service delivery and operations, but they introduce a critical challenge: hallucinations — outputs that are factually incorrect yet presented with high confidence. In high-stakes domains such as tax, audit and risk advisory, these inaccuracies can lead to compliance failures, reputational damage and regulatory exposure.

This paper provides a structured approach to reducing hallucination risk within the AI pipeline, aligning technical strategies with governance and ethical frameworks. It offers practical implementation guidance to promote reliability, safeguard compliance and reinforce client trust as organizations scale generative AI (GenAI) for complex, high-value use cases.