For many years, machine learning has successfully detected credit card fraud. Banks use systems trained on historical payment data to monitor transactions for potentially fraudulent activity and to block suspicious ones. Financial institutions also use automated systems to monitor their traders by linking trading information with other behavioral information such as email traffic, calendar items, office building check-in and check-out times, and even telephone calls.
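As a minimal sketch of the idea, a fraud screen can compare each new transaction against a customer's historical spending profile and flag extreme outliers. The features, thresholds and data below are purely illustrative assumptions; real bank systems use far richer models and signals.

```python
import statistics

def fit_profile(history):
    """Summarize a customer's past transaction amounts (mean and spread)."""
    return statistics.mean(history), statistics.stdev(history)

def is_suspicious(amount, profile, z_threshold=3.0):
    """Flag a transaction whose amount is an extreme outlier vs. history."""
    mean, stdev = profile
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_threshold

# Synthetic purchase history for one customer (amounts in dollars).
history = [25.0, 40.0, 32.0, 28.0, 55.0, 30.0, 45.0, 38.0]
profile = fit_profile(history)

print(is_suspicious(42.0, profile))    # typical purchase -> not flagged
print(is_suspicious(5000.0, profile))  # extreme outlier -> flagged
```

In practice the "profile" would be a trained model over many behavioral features, not a single amount statistic, but the decision structure — score a transaction against learned patterns, block above a threshold — is the same.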
AI-based analytics platforms can manage supplier risk by integrating a host of different information about suppliers, from their geographical and geopolitical environments through to their financial risk, sustainability and corporate social responsibility scores.
Finally, AI systems can be trained to detect, monitor and repel cyber attacks. They identify software with certain distinguishing features – for example, a tendency to consume a large amount of processing power or transmit a lot of data – and then close down the attack.
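The feature-based screening described above can be sketched as a simple scoring rule over process telemetry. The field names and thresholds here are hypothetical, chosen only to illustrate the two distinguishing features mentioned (heavy processing-power consumption and heavy data transmission); a real security product would use many more signals and learned thresholds.

```python
SUSPICIOUS_CPU_PCT = 90.0             # sustained processing-power use
SUSPICIOUS_EGRESS_MB_PER_MIN = 500.0  # heavy outbound data transmission

def classify(process):
    """Return the list of distinguishing features a process exhibits."""
    flags = []
    if process["cpu_pct"] >= SUSPICIOUS_CPU_PCT:
        flags.append("high-cpu")
    if process["egress_mb_per_min"] >= SUSPICIOUS_EGRESS_MB_PER_MIN:
        flags.append("high-egress")
    return flags

# Synthetic telemetry for three running processes.
processes = [
    {"name": "backup-agent", "cpu_pct": 12.0, "egress_mb_per_min": 40.0},
    {"name": "cryptominer",  "cpu_pct": 97.0, "egress_mb_per_min": 2.0},
    {"name": "exfil-tool",   "cpu_pct": 35.0, "egress_mb_per_min": 800.0},
]

for p in processes:
    flags = classify(p)
    if flags:
        print(f"blocking {p['name']}: {', '.join(flags)}")
```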
Risks related to AI adoption
Despite these benefits, AI is also a source of significant new risks that must be managed. It is therefore important to identify the risks associated with each individual AI application and with each business unit that uses it.
Some of the main risks associated with AI include:
- Algorithmic bias: Machine-learning algorithms identify patterns in data and codify them in predictions, rules and decisions. If those patterns reflect some existing bias, the algorithms are likely to amplify that bias and may produce outcomes that reinforce existing patterns of discrimination.
- Overestimating the capabilities of AI: Since AI systems do not genuinely understand the tasks they perform and depend entirely on their training data, they are far from infallible. The reliability of their outcomes can be jeopardized if the input data is biased, incomplete or of poor quality.
- Programming errors: Where bugs exist in the code, algorithms may not perform as expected and may deliver misleading results that have serious consequences.
- Risk of cyber attacks: Hackers who want to steal personal data or confidential information about a company are increasingly likely to target AI systems.
- Legal risks and liabilities: At present, there is little legislation governing AI, but that is set to change. Systems that analyze large volumes of consumer data may not comply with existing and imminent data privacy regulations, especially the EU’s General Data Protection Regulation.
- Reputational risks: AI systems handle large amounts of sensitive data and make critical decisions about individuals in a range of areas including credit, education, employment and health care. So any system that is biased, error-prone, hacked or used for unethical purposes poses significant reputational risks to the organization that owns it.
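The first of these risks — algorithmic bias — can be made concrete with a toy example. The sketch below trains on entirely synthetic, biased historical loan decisions, learns each group's approval rate, and then codifies that pattern in its own predictions, reproducing the original discrimination. The groups, data and decision rule are illustrative assumptions, not drawn from any real lender.

```python
from collections import defaultdict

# Synthetic past decisions, biased against group "B".
historical = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def train(records):
    """Learn each group's historical approval rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def predict(model, group):
    """Approve when the learned group approval rate exceeds 50%."""
    return model[group] > 0.5

model = train(historical)
print(model)                               # learned per-group rates
print(predict(model, "A"), predict(model, "B"))
```

Because the model simply mirrors the patterns in its training data, the historical bias becomes an automated rule — which is exactly the amplification mechanism described above.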