
Safeguarding privacy in AI: key challenges and practical solutions


Explore the critical privacy challenges in AI and discover practical solutions to address them effectively.


In brief

  • Privacy in AI can be safeguarded through the interplay between the GDPR and the AI Act.
  • The AI Act and the GDPR differ significantly in purpose and scope, which can create conflicts between key principles of the two regulations.
  • Discover a proactive and practical approach to ensure compliance and protect privacy.

Artificial Intelligence (AI) is revolutionizing industries and society, offering unprecedented opportunities to solve complex problems. However, these advancements come with significant privacy challenges. The risk of AI misusing data and infringing on privacy cannot be overstated, especially given the current lack of comprehensive understanding of its long-term effects. Just as brakes make it possible to drive safely at high speeds, robust regulatory frameworks enable organizations to operate securely and confidently in the rapidly evolving landscape of AI. These frameworks allow AI technologies to develop while protecting personal privacy and human rights.




The interplay between GDPR and the AI Act is crucial for safeguarding privacy in AI systems, despite their differing objectives and scope.




The interplay between the GDPR and the AI Act

In the European Union, personal data privacy has been safeguarded since 2018 under the General Data Protection Regulation (GDPR). This regulation ensures that personal data is processed transparently and lawfully, emphasizing the need to protect individuals. The AI Act (AIA), which becomes fully applicable in 2026, aims to establish clear requirements for AI systems to respect fundamental rights, including privacy. However, the interplay between the GDPR and the AIA raises significant challenges because the two regulations differ in their objectives and scope.

 

Key privacy challenges in AI systems

  1. Lawfulness and fairness: AI systems require a clearly defined and legitimate purpose early in development, which is difficult to reconcile with general-purpose AI systems whose uses are not fixed in advance. The appropriate legal basis for processing must also be chosen in line with GDPR principles.

  2. Transparency: Many AI systems operate as black boxes, making it difficult to explain to data subjects how their personal data is processed and how outputs are produced.

  3. Purpose limitation & data minimization: Training AI systems typically calls for as much data as possible, which conflicts with the GDPR's purpose limitation and data minimization principles (a first sketch after this list shows one way to operationalize minimization).

  4. Accuracy and storage limitation: AI systems may struggle to keep personal data accurate, and they often need to retain data for retraining or transparency purposes, in tension with the GDPR's accuracy and storage limitation principles.

  5. Integrity, confidentiality, and accountability: Demonstrating accountability becomes harder given the tensions with GDPR principles described above; companies must justify and document their decisions and adapt to evolving guidelines.

  6. Automated decision making: Under Article 22 GDPR, individuals have the right not to be subject to decisions based solely on automated processing where those decisions significantly affect them. Human oversight is therefore vital in AI systems (a second sketch after this list illustrates one possible control).

  7. Data subject rights vs. AI model integrity: The GDPR grants individuals rights such as access to their data, correction of inaccuracies, and erasure, which can conflict with the operational needs of AI systems: deleting individual data entries may compromise the performance or integrity of a trained model, creating additional compliance challenges.

  8. Inference: AI's ability to infer new information from existing data raises significant privacy concerns. Even if an individual's data is not explicitly included in a training dataset, AI systems can generate identifying insights about that individual, a practice that largely falls outside the GDPR's current scope on personal data.

  9. Governance disparities: The GDPR and the AIA assign roles and responsibilities differently, which can create inconsistencies and hinder a consistent approach to risk assessment and accountability.
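
To make data minimization (item 3) concrete, the sketch below shows one way a training pipeline could keep only the fields the stated purpose actually requires and pseudonymize the remaining identifier before the data reaches model training. This is a minimal illustration under assumed column names and a simplified salting scheme, not a prescription from the GDPR or the AI Act.

```python
import hashlib

import pandas as pd

# Illustrative raw dataset; the column names are assumptions for this sketch.
raw = pd.DataFrame({
    "name": ["Alice", "Bob"],
    "email": ["alice@example.com", "bob@example.com"],
    "customer_id": ["c-001", "c-002"],
    "age": [34, 29],
    "monthly_spend": [120.5, 80.0],
})

SALT = "store-and-rotate-this-secret-outside-the-code"  # assumed to be managed separately


def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted SHA-256 digest."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()


# Data minimization: keep only the features needed for the stated purpose
# and drop direct identifiers (name, email) entirely.
training_view = raw[["customer_id", "age", "monthly_spend"]].copy()
training_view["customer_id"] = training_view["customer_id"].map(pseudonymize)

print(training_view)
```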
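
Human oversight for automated decision making (item 6) can likewise be supported by a simple technical control. The sketch below routes any decision that significantly affects an individual, or that the model is not confident about, to a human reviewer. The confidence threshold and the data structure are illustrative assumptions; what counts as a "significant effect" remains a legal assessment.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed policy value, to be set by the organization


@dataclass
class Decision:
    subject_id: str
    outcome: str              # e.g. "approve" or "reject"
    confidence: float         # model confidence in the outcome
    significant_effect: bool  # e.g. credit, employment, or legal consequences


def route(decision: Decision) -> str:
    """Return 'human_review' when Article 22-style safeguards apply, else 'automated'."""
    if decision.significant_effect or decision.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"
    return "automated"


# Example: a rejection with a legal or similarly significant effect goes to a person.
print(route(Decision("subject-42", "reject", 0.97, significant_effect=True)))    # human_review
print(route(Decision("subject-43", "approve", 0.95, significant_effect=False)))  # automated
```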
     

A proactive and practical approach

To address these challenges, organizations must adopt a structured approach that balances the requirements of the GDPR and the AIA. This involves understanding the AI system's context, determining compliance obligations, and implementing requirements throughout the AI lifecycle.

  1. Understanding the context of AI systems: Define the problem the AI aims to solve, identify the data used for training and testing, determine the expected outputs, and assess how those outputs will be used (a sketch of a simple documentation record follows this list).

  2. Determining GDPR and AIA requirements: Evaluate the requirements of the GDPR and the AIA separately, and ensure compliance with privacy-by-design principles.

  3. Implementing requirements in the AI lifecycle: Integrate GDPR and AIA requirements into the AI system's lifecycle, including design, development, and deployment phases.

  4. Project execution and monitoring: Continuously monitor the AI system's adherence to GDPR and AIA requirements, conduct regular audits, and stay informed about evolving regulations.
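
One lightweight way to start on steps 1 and 2 is to capture the AI system's context and its compliance obligations in a structured, auditable record that lives alongside the code. The format below is an assumption for illustration, not a template prescribed by the GDPR or the AIA; the example values are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """Assumed structure for documenting an AI system's context and obligations."""
    name: str
    purpose: str                # the problem the system is meant to solve
    legal_basis: str            # GDPR legal basis for processing
    data_categories: list[str]  # personal data used for training and testing
    intended_outputs: str       # what the system produces and how it is used
    aia_risk_class: str         # e.g. "high-risk" or "limited-risk" under the AI Act
    retention_period: str
    open_actions: list[str] = field(default_factory=list)


record = AISystemRecord(
    name="credit-scoring-poc",
    purpose="Estimate default risk for consumer loan applications",
    legal_basis="Contract (Art. 6(1)(b) GDPR) - illustrative choice",
    data_categories=["income", "repayment history", "age"],
    intended_outputs="Risk score reviewed by a credit officer before any decision",
    aia_risk_class="high-risk (illustrative classification)",
    retention_period="24 months after contract end (illustrative)",
    open_actions=["Complete DPIA", "Document human-oversight procedure"],
)

print(record)
```

Keeping such a record current through design, development, and deployment also supports the monitoring and audit activities described in step 4.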
     

Conclusion

The interplay between the GDPR and the AIA in safeguarding privacy in AI systems presents significant challenges. However, by adopting a structured approach that separates and aligns the requirements of both frameworks, organizations can mitigate legal and operational risks while fostering innovation. As AI continues to evolve, proactive compliance strategies and ongoing monitoring will be essential to uphold privacy and protect fundamental rights in the age of AI.

For a deeper dive into this topic, download our whitepaper Privacy in AI: Enhancing Innovation to explore comprehensive insights and practical solutions.





Summary

Explore the critical privacy challenges in AI and discover practical solutions to address them. This article highlights the interplay between GDPR and the AI Act, key privacy challenges in AI systems, and a proactive approach to ensure compliance and protect privacy.

