As AI technologies continue to evolve, organisations must proactively address ethical considerations to ensure that their systems are not only compliant but also aligned with societal values and human rights. This means moving beyond mere compliance to embrace responsible AI practices: creating a culture of ethical awareness, building diverse teams where possible, and actively engaging with the societal implications of AI technologies. By doing so, organisations can develop AI systems that are trustworthy, fair and beneficial to all stakeholders, while building public trust and driving sustainable innovation in the AI landscape.
Respondents to our survey exhibit high confidence in the safety and performance of AI systems within their organisations. Interestingly, 80% reported having moderate to strong controls in place to ensure AI systems perform with a high level of precision and consistency, while 77% said they have controls to ensure AI use is consistent with permitted rights and confidentiality. This reflects the capabilities of the surveyed organisations, all of which have annual revenues exceeding $1 billion, a scale that enables them to invest in such controls.
In another encouraging finding, 46% of the organisations surveyed have an established AI ethics policy.
However, there is mounting evidence globally of an emerging trust gap around the technology and the uses to which it is being put. Consumers may accept the technology, and may even be enthusiastic about some of its applications, but they remain concerned about aspects of its usage and impacts.
Nearly two in three respondents (63%) to the global EY Responsible AI Pulse survey think their organisations are well aligned with consumers on their perceptions and use of AI. Yet the 15-country EY AI Sentiment Index survey of 15,060 consumers found that this is not the case. Consumers are twice as likely as CxOs to worry that companies will fail to uphold responsible AI (RAI) principles. This includes concerns that organisations will fail to hold themselves accountable for negative uses of AI (58% of consumers vs 23% of executives) and that they will not comply with AI policies and regulations (52% of consumers vs 23% of executives).
Interestingly, Irish executives demonstrate at least some awareness of this trust gap. Fewer than half (43%) of respondents to the Irish survey said consumers trust companies in their sector to manage AI in a way that aligns with their best interests.
In an increasingly polarised world, trust and transparency are paramount and can be a source of competitive advantage. It is clear, however, that consumers do not yet trust companies to act responsibly with AI. Companies can change this perception, and gain an advantage in the market, by developing and embedding responsible AI practices and communicating them to customers.
In this context, it must be understood that responsible AI is about more than just compliance. It’s about organisations building and maintaining trust with their most important stakeholders: their customers, their employees, their regulators, their investors and everyone else within the ecosystems in which they operate.
This is more important than ever as concerns about reliability, bias and compliance persist. Organisations must therefore develop their own comprehensive RAI governance frameworks. These frameworks should go beyond principles and clearly set out the steps employees should take to implement AI responsibly.
As our survey has found, all companies are looking to adopt AI. How quickly and effectively they do so will undoubtedly be important, but the long-term benefits of adopting responsible AI frameworks, principles and practices should not be overlooked.