ANI systems have become popular with companies, governments and entrepreneurs who are faced with a growing corpus of digital data waiting to be exploited. However, in pursuing ANI’s productivity and efficiency benefits, these stakeholders must consider the risks stemming from ANI’s shortcomings and the potential for unintentional human cost.
The most common criticisms of ANI include the algorithm’s inability to reason beyond its training data and its propensity to propagate inherent human biases as it learns from human-generated data. While no technology is devoid of flaws, errors stemming from ANI’s drawbacks can have serious consequences, especially in situations where the algorithm’s decision can substantially influence an individual’s fate.
In some cases, algorithmic errors are at worst inconvenient. For example, although digital voice assistants have made a faux pas or two, resulting in awkward or unsettling moments for users, adoption and usage continue to soar. In high-profile, public-facing contexts, on the other hand, algorithmic errors have had catastrophic results and eroded the public’s trust. For example, recent fatalities involving self-driving cars dampened enthusiasm and led to a significant erosion of consumer confidence: a study conducted in 2018 found that 73% of US drivers would not trust a fully autonomous vehicle, compared to 63% in 2017.
As ANI-driven decision-making finds its way into other critical domains such as criminal justice, education and job recruitment, mistakes have already resulted in false arrests, racial bias and gender discrimination. If the incidence of such errors increases, it could ultimately lead to a loss of trust in the technology entirely and leave this class of ANI applications vulnerable to a potential “winter.”
This is not to suggest that the entire field of ANI will falter. As Stefan Heck, co-founder and CEO of Nauto and EYQ Fellow, suggests, “Perhaps we need another category between ANI and AGI to account for circumstances where failures could result in societal backlash.”
Definitions of AI and its various flavors have traditionally centered on the technology’s capability to mimic or surpass human physical and cognitive capabilities. While this framework has served to benchmark the technology’s evolution, it does not adequately reflect the risk profiles of algorithms when applied in different contexts.
How risky is your AI?
The framework below offers businesses and governments a way to classify their current and future AI applications according to the risk they carry.