How responsible AI practices can foster competitive advantage

To safely harness AI's transformative power, Irish organisations need to shift the focus towards responsible AI integration. AI isn't a fixed destination; it's a departure. For your business. And beyond.


In brief

  • All organisations surveyed have either adopted AI to some extent or plan to do so.
  • 66% claim to be engaged in the development of employee training programmes to build skills, best practices and risk awareness related to emerging AI technologies.
  • Building trust by adopting a comprehensive responsible AI framework is crucial as concerns about reliability, bias, and compliance persist in AI integration.

The EY Responsible AI Pulse Survey uncovers an interesting paradox when it comes to AI adoption in Ireland. Organisations express confidence in AI’s transformative potential, yet they struggle with resource constraints that hinder implementation and face challenges in identifying practical use cases.

Concerns about reliability, biases, and compliance challenges remain despite the positive sentiment. Businesses recognise the importance of addressing risks, particularly in cybersecurity and ethics, while adopting a cautious yet optimistic approach to AI integration.

They are also committed to developing AI training programmes for employees. This reflects high levels of awareness of the need for AI literacy, driven both by regulatory demands and a recognition of the importance of responsible usage. The advancement of AI is a sustained journey.


Chapter 1

Confidence high in AI adoption, but resource constraints impact scalability

Irish organisations are integrating AI, but they must balance enthusiasm with ensuring safe, effective adoption for lasting value.

All organisations surveyed have either adopted AI to some extent or plan to do so. However, a majority of them (60%) say it is challenging to prioritise AI use cases, and just over half say resource constraints are limiting their ability to adopt and scale AI.

This apparent contradiction may be explained by the increasingly widespread use of large language models (LLMs) that are rapidly becoming part of our everyday life. These are frequently adopted for general use without a specific use case in mind, eliminating the need for specialist resources. It may also be explained by difficulties encountered in moving AI projects from proof-of-concept stage to full implementation.

Our survey gathered insights from 35 senior executives from organisations based in Ireland, each with annual revenues exceeding $1 billion. While these respondents may not represent every business in Ireland, their views carry weight and offer a valuable snapshot of the thinking at the top. They may also signal where the wider market is headed, and which strategies are likely to gain traction next.

According to our survey, many C-suite leaders are unaware of AI’s true potential

Some 26% of respondents to the Irish survey say AI solutions are fully integrated and scaled across the organisation. This is broadly in line with the 31% of respondents to the recently published EY Responsible AI Pulse survey of global C-suite executives who said the same thing. However, there may be differing interpretations of what “fully integrated” AI truly involves. Achieving it requires prioritising use cases that create value, reengineering business processes and models, and investing in data readiness, knowledge systems and talent development.

The enthusiasm for AI adoption is evident though. According to our survey, 63% believe that it is a great idea for organisations to use AI to automate routine tasks in the workplace, with the same number saying AI makes it easier to do tasks that would traditionally have needed technical skills.

Eoin O'Reilly
The finding that every organisation surveyed has either fully integrated AI or is in the process of doing so is quite striking. This indicates a high degree of recognition among this cohort for the transformative potential of the technology. However, organisations need to act with caution and ensure they have put in place the right conditions to ensure the technology is adopted appropriately and safely and delivers long term value.

Chapter 2

Mixed picture on AI controllability and bias

Organisations are boosting AI controls for reliability and bias yet worries persist over trust and oversight amid evolving regulatory and sustainability landscapes.

Concerns in relation to the potential of LLMs and other AI systems to “hallucinate” or produce unreliable outputs are evident in the very high proportion of respondent organisations taking steps to address these issues.

The confidence exhibited in these controls may indicate a degree of complacency regarding the associated risks.

On the other hand, nearly half of the respondents are worried about AI becoming uncontrollable without adequate human oversight.

AI has the potential to significantly enhance efficiency and facilitate improved decision-making. However, it is crucial to maintain control over the systems as their output cannot always be relied upon. With appropriate safeguards and human oversight, AI can be used effectively to support human decision-making rather than replacing it.

Sustainability concerns: As organisations grapple with the challenges posed by AI, they must also consider the environmental implications. Despite the extremely high energy consumption associated with AI systems, just under a quarter (23%) of respondents viewed environmental costs as a serious cause for concern. This may be related to the fact that more than three-quarters of organisations report having controls in place to mitigate the impact of AI on sustainability. It is also likely linked to the reduced focus on sustainability among many organisations worldwide of late, as well as to recent regulatory changes at EU level.


Chapter 3

Regulatory compliance high, yet some concern evident

Failure to comply with AI policies is a key concern that highlights the urgent need for responsible development and deployment of AI technologies.

Attitudes towards the regulatory environment for AI could be described as reluctant acceptance. While organisations demonstrate a commitment to compliance, there is a lack of enthusiastic support for the regulations governing AI. This may indicate some regulation and compliance fatigue.

Failing to comply with internal AI policies and relevant government regulations was rated as a significant concern by 31% of respondents, highlighting a potential vulnerability that organisations must address. This underscores the necessity of not only meeting regulatory requirements but also of cultivating an ethical and responsible approach to AI development and deployment.

A majority (77%) say they have systems in place to ensure the use of data in AI systems is consistent with permitted rights and confidentiality, while the same number have controls to ensure AI systems operate in adherence to laws, regulations, and professional standards.

In addition, just over half conduct regulatory compliance assessments. It is likely that these systems and controls involve high levels of human oversight, reflecting a continued preference to keep the human in the loop when it comes to ensuring the responsible use of AI.

That preference is reflected in the finding that 66% of Irish respondents are engaged in the development of employee training programmes to build skills, best practices and risk awareness related to emerging AI technologies. This focus on training aligns with the EU AI Act that mandates that organisations ensure their staff possess a sufficient understanding of AI systems. It also emphasises the important role AI literacy plays in allowing organisations to take advantage of AI technologies whilst fostering a culture of responsible AI use.

However, despite these high compliance levels, many organisations are struggling to balance new regulatory requirements with the speed of AI innovation.

Those views may be coloured by the differing approaches taken to AI regulation in the EU, the US and other jurisdictions. Current regulations have different requirements and are at different maturity levels depending on the jurisdiction concerned. This can be frustrating for organisations working globally. The regulatory landscape is also changing fast and there is a geopolitical element with some countries using regulation to compete with each other in the AI space. It should not be surprising therefore that at least some respondents perceive AI regulation in the US to be more enterprise and innovation friendly than that of Europe.


Chapter 4

An evolving risk environment

With new AI models emerging and cyber risks growing, organisations are taking proactive actions to conduct risk assessments for new categories of AI models.

There is a strong awareness of the risks associated with AI as evidenced by 71% of respondents indicating that their organisations have robust methodologies in place for the identification, assessment, and mitigation of risks associated with AI.

The heightened awareness may be largely attributed to the introduction of the EU AI Act. Organisations are taking a proactive stance with 51% saying they conduct risk assessments for new categories of AI models. The same number say they invest in governance frameworks to address the risks and challenges associated with the technology.

As the technology evolves in areas like agentic AI and its capabilities expand at a rapid pace, so too does the risk environment. The technology’s strength and its less predictable nature compared to earlier versions are leading to new and more complex risks. These include concentration risk with major vendors, complex foundation models, copyright issues, privacy concerns, hallucinations, and more. Traditional IT risk frameworks are not always fit for purpose in this new scenario. New ways of thinking about risk, anchored within a digital trust framework, are required for responsible use of AI and to ensure that humans remain in the loop.

Most respondents to our survey recognise elevated risk levels created by AI deployment and are implementing measures to mitigate these risks. While this is encouraging, it is evident that organisations must prioritise the ethical considerations surrounding AI adoption.

Newer AI models mean greater governance challenges

Many respondents expect governance challenges to grow as newer AI models emerge. This likely relates to the rapid pace at which the technology is advancing and the pressure to adopt it before the risks are fully understood.

Eoin O'Reilly
Newer AI models such as agentic AI are already here. Others will follow, and recent history suggests they may arrive sooner than assumed. It’s important for organisations to understand how these emerging models will create new risks and AI governance challenges and start identifying ways to address them now.

It is no surprise that many respondents pointed to unpredictable outcomes from emerging technology and the increased complexity of self-improving AI as key risk areas that need to be addressed. Also high on the list is the increased potential for disinformation and manipulation.

Cyber risks: In a reflection of their overall level of importance to organisations, cybersecurity and privacy emerged as the chief areas of concern for respondents when it comes to AI adoption. Some 43% were very or extremely concerned about the prospect of a security breach in their AI systems. Failing to protect the privacy of data was a major cause for concern for 37%. Organisations are responding, and a significant majority (71%) have controls in place to protect AI systems from unauthorised access, corruption, or theft.


Chapter 5

Responsible AI is key to building trust

Organisations need to create a culture of ethical awareness and transparency to address issues of trust gaps and bias while building reliability.

As AI technologies continue to evolve, organisations must proactively address ethical considerations to ensure that their systems are not only compliant but also aligned with societal values and human rights. This means moving beyond mere compliance to embrace responsible AI practices: creating a culture of ethical awareness, having a diverse team where possible, and actively engaging with the societal implications of AI technologies. By doing so, organisations can develop AI systems that are trustworthy, fair, and beneficial to all stakeholders. Embracing responsible AI practices will not only enhance compliance but also build public trust and drive sustainable innovation in the AI landscape.

Respondents to our survey exhibit high confidence in the safety and performance of AI systems within their organisations. Interestingly, 80% reported having moderate to strong controls in place to ensure AI systems perform at a high level of precision and consistency, while 77% said they have controls to ensure AI use is consistent with permitted rights and confidentiality. This reflects the capabilities of the surveyed organisations, all with annual revenues exceeding $1 billion, which enables them to invest in such controls.

In another encouraging finding, 46% of the organisations have an established AI ethics policy in place.

However, there is mounting evidence globally of an emerging trust gap in relation to the technology and the uses to which it is being put. Consumers may be accepting of the technology and even enthusiastic in relation to some of its applications, but they are still concerned about some aspects of its usage and impacts.

Nearly two in three respondents (63%) to the global EY Responsible AI Pulse survey think their organisations are well aligned with consumers on their perceptions and use of AI. Yet the findings from the 15-country EY AI Sentiment Index survey of 15,060 consumers found that this was not the case. Consumers are twice as likely as CxOs to worry that companies will fail to uphold responsible AI (RAI) principles. This includes concerns around the degree to which organisations fail to hold themselves accountable for negative AI use (58% of consumers vs 23% of executives) as well as organisations not complying with AI policies and regulations (52% of consumers vs 23% of executives).

Interestingly, Irish executives demonstrate at least some awareness of this trust gap. Less than half (43%) of the respondents to the Irish survey said consumers trust companies in their sector to manage AI in a way that aligns with their best interests.

In an increasingly polarised world, trust and transparency are paramount and can be a source of competitive advantage. While consumers currently lack trust in companies to act responsibly with AI, companies can change this perception and gain an advantage in the market by developing and embedding responsible AI practices and communicating these to customers.

In this context, it must be understood that responsible AI is about more than just compliance. It’s about organisations building and maintaining trust with their most important stakeholders: their customers, their employees, their regulators, their investors and everyone within the ecosystems in which they operate.

This is more important than ever as concerns about reliability, bias, and compliance persist. In this context, organisations must develop their own comprehensive Responsible AI (RAI) governance frameworks. These frameworks should go beyond principles and clearly outline to employees what steps they should consider in order to implement AI responsibly.

As our survey has found, all companies are looking to adopt AI. Undoubtedly, how quickly and effectively they can do so will be important, but the long-term benefits of adopting responsible AI frameworks, principles and practices should not be overlooked.

Organisations will need to go further than reviewing systems for legal compliance if they are to build trust and confidence in AI. This will require continuous education of consumers and senior leadership, including the board, on the risks associated with AI technologies and how the organisation can respond with effective governance and controls. In this respect, responsible AI plays a vital role in facilitating adoption and generating long-term value.

About the Survey

The research aimed to evaluate how enterprises perceive and integrate responsible AI practices into their business models, decision-making processes and innovation strategies. The survey, commissioned by EY Global, was intended to better understand C-suite views around responsible AI – for the current and next wave of AI technologies. All respondents had some level of responsibility for AI within their organisation.

The insights were gathered in March and April 2025 in a survey conducted among 35 CxOs in Ireland that included chief executive officers, chief financial officers, chief human resource officers, chief information officers, chief technology officers, chief marketing officers and chief risk officers. The organisations that participated in the survey have an annual revenue of more than US$1 billion.


Summary

The results of our survey indicate that AI integration is now happening at pace, with all respondent organisations having adopted the technology to some extent or planning to do so. Organisations therefore need to be mindful of the evolving risk environment and address the challenges it presents by putting in place appropriate safeguards and human oversight. They also need to develop a culture where regulatory requirements are positively embraced rather than seen as a barrier to innovation. That culture will also promote the adoption of responsible AI practices, which not only helps build trust both internally and with external stakeholders but can be a source of competitive advantage in the long run.

