Thanks to digital technology, human decision-making can be replicated in a fraction of a second. But do you understand the ethics of the algorithm?
Artificial intelligence (AI) has come to occupy center stage in the world of business: it is at the heart of digital transformation and business management.
It has also become the new front line of companies’ ability to defend themselves from external and internal threats. As such, it should be a central pillar in a company’s risk management strategy.
But what are the “ethics of the algorithm,” and what compliance risks do companies need to consider as they deepen their dive into digital transformation?
The quandaries of data management
At the root of AI is a new generation of data management. Data is a commodity that can be bought, sold, rented, hired, borrowed, stolen and disposed of. Never has the ownership of data been more valuable than it is now.
However, AI also raises a multitude of complex questions: Who owns the data, hosts it, collects it, “harvests” it and benefits from it? How can it be safely stored, exchanged, transferred and disposed of? To whom does data really belong, and how is ownership transferred among stakeholders?
Underlying AI is the algorithm, the building block of information technology (IT) systems, acting as the set of instructions for performing calculations, data processing, automated reasoning and other tasks. The result or output can be used on its own or as an input into another algorithm — meaning that AI is usually made up of strings of algorithms.
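The chaining described above can be sketched in a few lines of Python. This is a minimal, illustrative example (the function names and logic are hypothetical, not drawn from any specific system): each "algorithm" is a simple function, and the output of one becomes the input of the next.

```python
# Minimal sketch: each "algorithm" is a function, and AI pipelines
# chain them so one algorithm's output is the next one's input.

def normalize(values):
    """Algorithm 1: scale raw inputs to the 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def score(normalized):
    """Algorithm 2: reduce normalized inputs to a single decision score."""
    return sum(normalized) / len(normalized)

def decide(threshold, values):
    """Chain the two algorithms: normalize -> score -> yes/no decision."""
    return score(normalize(values)) >= threshold

print(decide(0.5, [10, 20, 30]))  # True: normalized [0, 0.5, 1] averages to 0.5
```

The point of the sketch is structural: the final decision depends on every link in the chain, so a flaw anywhere upstream propagates to the output.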
With computing speeds reaching new heights, algorithms are capable of replicating human decision-making in a fraction of a second. If the observed human behaviors that dictate how an algorithm transforms input into output are flawed, it risks setting in motion processes whose outcomes may not be those intended. With some algorithms even able to create their own algorithms through “self-learning,” the risk of unforeseen and potentially harmful outcomes increases exponentially.
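How a flawed human pattern gets baked into an automated decision can be shown with a deliberately simplified sketch. The data and the "learning" rule here are hypothetical, chosen only to illustrate the mechanism: if the historical decisions an algorithm learns from are biased, the algorithm faithfully replicates that bias, at machine speed.

```python
# Hypothetical historical decisions in which human reviewers
# systematically favored applicants from zip code "A".
history = [
    {"zip": "A", "approved": True},
    {"zip": "A", "approved": True},
    {"zip": "B", "approved": False},
    {"zip": "B", "approved": False},
]

def learn_rule(records):
    """'Learn' an approval rule by copying the majority outcome per zip code.

    A real model is far more complex, but the failure mode is the same:
    the rule encodes whatever pattern the training data contains,
    flawed or not.
    """
    outcomes = {}
    for r in records:
        outcomes.setdefault(r["zip"], []).append(r["approved"])
    return {z: sum(v) > len(v) / 2 for z, v in outcomes.items()}

rule = learn_rule(history)
print(rule)  # {'A': True, 'B': False} -- the original bias, now automated
```

Nothing in the code is malicious; the harm comes entirely from the input data, which is why governance of training data is as much a compliance question as the algorithm itself.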