This work emphasizes the importance of early-detection tools for businesses to safeguard trust and enhance the ethical use of AI. This research paper was accepted at the TrustNLP workshop during the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
The second study, Mitigating Social Biases in Language Models through Unlearning, explored state-of-the-art methods for reducing bias in LLMs without the computational cost of retraining. Among the approaches tested, negation via task vector (TV) emerged as the most effective. It reduced bias in some models by up to 40% while preserving performance and providing the flexibility to adapt to specific needs.
Direct preference optimization (DPO) proved effective but was more computationally intensive, while partitioned contrastive gradient unlearning (PCGU) demonstrated potential but required refinement to ensure coherence and consistent results.
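As a rough illustration of why DPO is heavier than weight arithmetic: it optimizes a per-pair preference loss over log-probabilities from both the policy being trained and a frozen reference model, so it needs gradient updates on every preference pair. The function below is a hypothetical sketch of the standard DPO loss (Rafailov et al.), not code from the study; all names are illustrative.

```python
import math

def dpo_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair: chosen (w) vs. rejected (l).

    Each argument is a sequence log-probability under the policy being
    trained (pi_*) or under the frozen reference model (ref_*).
    """
    # Margin: how much more the policy prefers the chosen response,
    # relative to the reference model, scaled by beta.
    margin = beta * ((pi_logp_w - ref_logp_w) - (pi_logp_l - ref_logp_l))
    # Loss is -log sigmoid(margin): small when the margin is positive.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy numbers: the policy prefers the chosen response more strongly
# than the reference does, so the margin is positive and the loss is
# below -log(0.5).
loss = dpo_loss(pi_logp_w=-10.0, pi_logp_l=-14.0,
                ref_logp_w=-11.0, ref_logp_l=-13.0)
```

Because this loss must be backpropagated through the full model for every preference pair, DPO costs far more compute than a one-time edit of the weights, which is the trade-off the study observed.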
These findings underscore the trade-offs between various techniques and highlight TV’s scalability and adaptability as a standout option for organizations seeking to balance fairness and operational efficiency. This work was accepted at the TrustNLP workshop at the 2024 NAACL conference and the 2024 Empirical Methods in Natural Language Processing (EMNLP) conference as part of the Industry Track.
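The task-vector negation that stood out above can be sketched in a few lines. This is a minimal illustration, assuming a model is represented as a flat {name: value} parameter dictionary (in practice, an LLM state dict); the toy numbers and function names are hypothetical, not the study's implementation.

```python
def task_vector(finetuned, base):
    """Per-parameter difference: what fine-tuning added to the base model."""
    return {k: finetuned[k] - base[k] for k in base}

def apply_negated(base, tau, scale=1.0):
    """Subtract the scaled task vector from the base weights, steering
    the model away from the behavior the task vector encodes."""
    return {k: base[k] - scale * tau[k] for k in base}

# Toy example: a hypothetical "bias" fine-tune shifted the weights;
# negating that task vector moves the base model in the opposite direction.
base = {"w": 1.0, "b": 0.5}
biased = {"w": 1.6, "b": 0.9}          # pretend fine-tune on biased data
tau_bias = task_vector(biased, base)
debiased = apply_negated(base, tau_bias, scale=1.0)
print(debiased)                        # roughly {'w': 0.4, 'b': 0.1}
```

The `scale` parameter is where the adaptability noted above comes from: it can be tuned to trade off bias reduction against preserving task performance, and the whole operation is simple weight arithmetic rather than retraining.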
Together, these studies provide organizations with a robust understanding of AI bias and offer practical tools to address it. By implementing the insights and techniques developed through this collaborative research, businesses can make their AI systems more ethical, transparent and aligned with their long-term strategic goals. These advancements position forward-thinking companies as leaders in responsible AI innovation.
This research represents a collaborative effort by seasoned AI professionals. Contributors to BiasKG include Chu Fei Luo and Faiza Khan Khattak of the Vector Institute, Ahmad Ghawanmeh of EY, and Xiaodan Zhu of Queen’s University.
The Machine Unlearning study was authored by Omkar Dige and Faiza Khan Khattak of the Vector Institute, Diljot Singh, Tsz Fung Yau and Mohammad Bolandraftar of Scotiabank, Qixuan Zhang from EY, and Xiaodan Zhu of Queen’s University.
The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organization or its member firms.