Any investment in compliance technology should include safeguards to make sure the technology is working properly and effectively. For example, machine learning can help a company detect fraud patterns in sales transactions or flag problematic vendors, but using biased or insufficient data could result in false positives.
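The false-positive risk described above can be illustrated with a minimal sketch. The detector, data and threshold below are hypothetical, not any particular vendor's product: a naive anomaly model fitted on an unrepresentative sample of transactions will flag legitimate activity it has simply never seen.

```python
from statistics import mean, stdev

# Illustrative only: training data drawn exclusively from small-ticket
# sales, an insufficient sample of the company's real transactions.
training_amounts = [100, 105, 98, 102, 101, 99, 103]

mu = mean(training_amounts)
sigma = stdev(training_amounts)

def is_flagged(amount, z_threshold=3.0):
    """Flag any transaction whose z-score exceeds the threshold."""
    return abs(amount - mu) / sigma > z_threshold

# A routine small sale passes, but a legitimate large enterprise order
# looks anomalous because the model's training data never included one:
is_flagged(104)   # False
is_flagged(5000)  # True: a false positive caused by insufficient data
```

This is why the safeguards matter: the model is working exactly as built, yet its verdicts are only as representative as the data behind them.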
Criminals are also leveraging AI. AI systems can develop sophisticated malware, learn from unsuccessful attacks and create more believable phishing campaigns. Companies using AI and automation to detect and respond to these attacks discover data breaches much more quickly than those that don’t, reducing the cost of a breach by nearly US$2 million, according to IBM research.8 Despite this, only half of the organizations studied planned on increasing security investment after a breach.
Organizations should also consider spending more on internal controls, which restrict data access and provide accountability. Inadequate controls are the highest-ranked internal risk reported in the 2023 EY-commissioned EI Studios survey. Embedding built-in controls into workflows reduces mistakes and fraud, and the resulting data can feed digitized compliance scorecards that provide detailed insights into key risk areas.
Advanced data analysis also enables an enterprise to integrate risk management with strategy and performance management. For example, a comprehensive analysis of risk exposure for emerging scenarios helps a board determine whether its strategies and business models are viable, as described in the EY Global Board Risk Survey 2023, which found that highly resilient boards leverage data and technology effectively to detect risks early and improve decision-making.
Developing a sustainable data and technology strategy that aligns with core values
The responsible use of technology is not yet a strategic priority for many companies. Nearly half of respondents in the 2023 EY-commissioned EI Studios survey reported that their organization lacks a corporate strategy for data privacy, even though data privacy is well-regulated in most jurisdictions and requires sound data governance.
Organizations need to develop a comprehensive strategy and vision for managing technology and data ethically, just as many companies have done with their sustainability agenda. But progress in this area is alarmingly slow. Less than one-third of board directors believe their oversight of the risks arising from digital transformation is very effective, according to the EY Global Board Risk Survey (GBRS) 2023.
A mission statement is essential for showing how an enterprise manages technology and data in an appropriate and defensible way that aligns with its core values. For example, Adobe has clearly communicated its commitment to advancing the responsible use of technology for the good of society. Its AI Ethics Principles describe the actions the software maker is taking to avoid harmful AI bias and align its work with its values.9
Microsoft’s approach to creating responsible and trustworthy AI is guided by both ethical and accountability perspectives.10 It calls on technology developers to establish internal review bodies to provide oversight and guidance so that their AI systems are inclusive, reliable, fair, accountable, transparent, private and secure.
Ethical use of technology isn’t possible without fostering a culture where integrity is just as important as profits. For example, Volkswagen Group states that integrity and compliance have the same strategic and operational priority as sales revenue, profit, product quality and employer attractiveness.11
The average cost of a data breach grew to nearly US$4.5 million in 2023, according to an IBM study.12 Regulatory fines are also on the increase, with Meta hit with a €1.2 billion sanction for GDPR violations.13
Organizations looking to create an ethical and sustainable strategy for technology and data use can adapt measures used for other sustainability initiatives, such as environmental protection and good governance. This includes setting targets and budgets, measuring performance and reporting progress publicly. Robust sustainability efforts can go a long way toward addressing stakeholder concerns and even attracting job applicants.
Some sustainability activities, such as climate action, are already moving from voluntary commitments into compliance as regulators set disclosure requirements for public companies.14 Corporate strategy and principles for ethical technology are expected to receive the same focus as the sustainability agenda, if they don’t already.
Ensuring confidence in AI with a robust governance approach is one of five strategic initiatives EY teams recommend for organizations looking to maximize AI’s potential while meeting its challenges. This approach includes:
- Establishing an AI council or committee along with ethical principles to guide policies and procedures
- Tracking all relevant existing regulations and ensuring any new use cases comply
- Defining controls to address emerging risks
- Preparing for pending legislation
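The regulation-tracking step above can be sketched in code. The regulation, controls and use case below are invented for illustration; the point is simply that mapping required controls against each use case makes compliance gaps queryable rather than anecdotal.

```python
from dataclasses import dataclass, field

# Hypothetical model: each tracked regulation names the controls it
# requires; each AI use case records the controls it actually has.
@dataclass
class Regulation:
    name: str
    required_controls: set

@dataclass
class UseCase:
    name: str
    controls: set = field(default_factory=set)

def compliance_gaps(use_case, regulations):
    """Return, per regulation, the controls the use case is missing."""
    return {
        reg.name: missing
        for reg in regulations
        if (missing := reg.required_controls - use_case.controls)
    }

gdpr = Regulation("GDPR", {"data_minimization", "consent_records"})
chatbot = UseCase("customer chatbot", {"consent_records"})
gaps = compliance_gaps(chatbot, [gdpr])
# gaps == {"GDPR": {"data_minimization"}}
```

A use case with no missing controls simply drops out of the result, so an empty dictionary signals readiness under the tracked regulations.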
Prioritizing the ethical use of AI and other emergent technologies means leaders must be careful not to fall into the “say-do” gap, in which they pay lip service to doing the right thing. This gap was clearly apparent in the EY Global Integrity Report 2022, in which 58% of board members said they would be very or fairly concerned if their decisions were subject to public scrutiny and 42% reported their company is willing to tolerate unethical behavior from high or senior performers.
Rise in GenAI brings new opportunities and risks
Imagine two doorways – one labeled “technology opportunity,” the other “technology risk.” Which door do you open first? Which is more important to your organization? What blocks your path and who may be nipping at your heels?
GenAI has made it more difficult than ever to balance opportunity and risk in adopting technology. Its widespread adoption in 2023 raised awareness of the potential of all types of AI, along with their shortcomings. The public wants to know how AI can be prevented from creating false information, producing biased results and taking people’s jobs.
Large language models (LLMs) like ChatGPT are becoming a game changer for legal and compliance functions with their ability to analyze and summarize vast numbers of documents. But professionals well-versed in privacy and cybersecurity risks may struggle to assess new threats stemming from AI. We’ve already seen lawyers cite cases invented by AI, making it essential that outputs be validated by other intelligent tools and/or people.
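One validation pattern the paragraph above implies can be sketched simply. The case names and the trusted index below are invented for illustration, not a real legal database: citations produced by an LLM are cross-checked against a verified source, and anything unverifiable is routed to a person.

```python
# Hypothetical trusted index of verified case citations.
KNOWN_CASES = {
    "Smith v. Jones (2019)",
    "Doe v. Acme Corp (2021)",
}

def validate_citations(llm_citations):
    """Split LLM-produced citations into verified and unverifiable lists."""
    verified = [c for c in llm_citations if c in KNOWN_CASES]
    suspect = [c for c in llm_citations if c not in KNOWN_CASES]
    return verified, suspect

verified, suspect = validate_citations(
    ["Smith v. Jones (2019)", "Roe v. Widget Co (2020)"]
)
# suspect == ["Roe v. Widget Co (2020)"] -> escalate for human review
```

The mechanics are trivial; the discipline of never letting unverified output reach a filing or a decision is the hard part.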
Organizations that seek to reduce GenAI risks by prohibiting its use may see this strategy backfire. More than a quarter of employees responding to an online Reuters-Ipsos poll in July 2023 said they regularly used OpenAI’s ChatGPT at work, even though only 22% of those users said their employers explicitly allowed it.15 Limiting employees to company-approved GenAI tools may result in workarounds, making it critical to develop policies, standards and procedures no matter how AI is accessed throughout an organization.
Even companies that authorize GenAI usage may not have a full picture of how it’s being deployed and the accompanying risks. More than half of AI failures come from “shadow AI” or third-party tools, which are used by 78% of organizations globally, according to MIT research.16
Companies looking at GenAI investments must focus on the problems they’re trying to solve and the role data will play. Does the organization have the required data? Does it understand how the data was generated, its limitations, and what it represents? Can the data be used to create LLMs? A lack of good data governance can cause a host of risks – from biased outcomes to data breaches. Even if a company gets all this right, there’s often a breakdown in communicating actions in a form that leadership, investors, employees and other stakeholders understand.
The reality is no matter which door you open first, you’re bound to end up in the same room. Emergent technologies with game-changing potential will always be intertwined with a bevy of legal and reputational risks that must be addressed strategically.