
Why it’s important to confront data integrity risks in the digital age

By Todd Marlin

EY Global Forensic & Integrity Services Technology & Innovation Leader

Global leader in technology and innovation, with significant experience serving the financial services industry.

7 minute read 18 Dec 2019

Companies face challenges posed by a fast-changing patchwork of data privacy regulations, while data quality also looms as a related issue.

Companies are increasingly reliant on data and leading-edge technologies, such as artificial intelligence (AI), robotic process automation (RPA) and cloud infrastructure, to gain business insights, achieve operational efficiency and seek new growth opportunities. But within those opportunities lurk possible risks.

Technology is ultimately programmed and controlled by humans, who are susceptible to intentional or unintentional errors and omissions. The way technology is implemented can be opaque to business owners who lack the requisite technical know-how, and this gap in understanding can create a range of risks that could cause significant reputational and financial harm to the business.

Adding to these risks is the increase in data protection and privacy regulations around the world, such as the European Union’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA) and the New York State Department of Financial Services Cybersecurity Regulations. All of these carry stringent requirements for how data should be secured, accessed, stored and used, and noncompliance creates the risk of hefty fines, reputational damage and potential disruption to the business.

With this as a backdrop, EY teams and the Financial Times hosted an executive gathering in New York City in 2019 as a part of a global six-event series called “Enhancing corporate integrity.” Our group met during a period when corporate integrity, or the lack of it, was dominating the headlines.

As more and more companies suffer data breaches, fall victim to fraud or are found to have abused customers’ expectations of data privacy, the public’s trust that businesses can effectively police themselves erodes. Adding to this is the growing temptation for politicians around the world to impose their own oversight.

Safeguarding the integrity of data

In the gathering, we began with two questions:

  • Who felt confident that they knew what data was in their organization?
  • Who felt on top of the risks associated with it?

Not a single hand went up in response to either of the questions.

Conducting an inventory of the data a company has, even in its own systems, can seem like an almost impossible task. And the broad spectrum of risks associated with that data is constantly shifting as cyber attackers become more sophisticated, regulators set new expectations and technologies, such as AI and the cloud, change the scope of how digital information can be used and abused.

Much of our time was spent discussing the challenges posed by a fast-changing patchwork of data privacy regulations. In the US, the CCPA has set the pace domestically, and Washington state is expected to follow soon. But many representatives of US-based companies said they were already following the GDPR, which came into force in 2018, both at home and in other jurisdictions.

One guest noted that even a nation-state as small as Singapore can set ground rules on AI for multinationals’ operations on the other side of the world. There was a strong desire to see more global coordination on data regulation, and an expectation that we will move in that direction.

Another concern was data quality. How much of the information inside a business is helpful? Who should have access to it, and how much should be cleaned out?

Some digital information must be kept for regulatory or litigation purposes, but companies need clear policies and procedures to help them keep only the right data. A decentralized approach that actively engages business owners, while integrating them with centralized technologies and compliance functions, may be part of the answer.

The public increasingly doesn’t trust businesses to police themselves effectively, while the temptation for politicians around the world to impose their own oversight is growing.

AI and ethics

One message that came through clearly was that data comes with temptation: it can be hard to resist taking information gathered for one purpose and applying it to another. But the risk of alienating customers by using their data in ways to which they did not consent is significant.

Data owners will increasingly expect companies to seek their consent over how their data will be used. Therefore, data inherently creates a tension between the ever-expanding ability that technologies such as AI offer to improve products, services and processes, and the ever-increasing risk of pushback from consumers.

The risks AI brings do not stop there. A company that uses AI in hiring decisions could inadvertently run afoul of discrimination laws if biased data sets are used as input or the underlying models are flawed. And while it is always important to understand how a decision was reached when defending it, it is not always easy, or even possible, to trace the decision tree when AI is used.

Finally, while machines can be more reliable than humans in many instances, it is also true that they can fail miserably when they encounter unexpected situations. Bill Hibbard, a machine intelligence scientist, argues that because AI will have such a profound effect on humanity, its developers are effectively representatives of future humanity, and so they have an ethical obligation to be transparent in their efforts.

Are we reaching the limits of what we can do with data?

With both regulators and consumers struggling to keep up with the pace of technological change, and with crooks and bad actors often one step ahead of them, there is a need for the industry to come together to understand and manage data integrity risks.

Having begun the evening with a stark picture of the challenge of managing data integrity risks, we ended on a more optimistic note, recognizing that transparent collaboration across industries and with government can help build agreement on the way forward to manage these risks effectively.

This article was first published on FT BrandSuite.

Summary

Data inherently creates a tension between the ever-expanding ability that technologies such as AI offer to improve products, services and processes, and the ever-increasing risk of pushback from consumers. Data integrity, therefore, must be on your agenda.
