With the deliberative phase of AI regulation coming to an end, policymakers are clearly moving beyond principles and toward practice. The COVID-19 pandemic has not slowed this legislative agenda; rather, it is providing fresh impetus as AI-powered public health applications multiply. The window for companies to voluntarily align with the emerging governance framework is therefore closing quickly. Business as usual risks a larger adjustment burden when stricter governance arrives.
Mitigating these risks before they materialize requires leadership commitment to building trust: the AI applications a company deploys must reflect its values. AI should engender, not endanger, individual autonomy; defend, not desecrate, human rights; and promote, rather than imperil, social well-being.
Embracing ethical AI
As firms realize the benefits of AI by bringing more products and services to market, firmer governance will aid risk management. Appropriate regulations can keep AI safe and secure without violating individuals’ privacy or fundamental freedoms. Better rules will keep businesses fully accountable for the results of their AI innovations. And stronger safeguards will help ensure that trusted AI deployed in the real world is fair, transparent and explainable. This will increase public trust and, in turn, consumer adoption, unlocking new products and strategies more quickly and turbocharging the use of AI in transformation agendas.
The COVID-19 crisis puts ethical AI in focus, making it a central feature of any digital transformation strategy. The first step toward alignment with the emerging ethical norms is to foster an internal consensus around key principles across divisions, including technology, data, risk, compliance, legal, sales, HR and management. Engaging a broader set of stakeholders — including customers who could be harmed by discriminatory AI products and services — promotes trust. For example, to minimize the risk of algorithmic bias, businesses should actively build diverse and inclusive training data sets that fairly represent vulnerable groups. In the current social climate, the careless deployment of potentially discriminatory algorithms represents a substantial and growing brand risk. Companies might also consider validating their software with external audits of their production processes, data inputs and algorithmic outputs.
Transparency also helps build the trust needed to drive adoption. Companies should convert ethical principles into clear, published guidelines for supplying AI products and services, and lines of accountability should be formalized. Corporate policies and procedures are needed to facilitate regular reviews and ongoing risk assessments, and to update systems and products accordingly. Management must provide employees with the resources and training required to reinforce these crucial foundations and to continue learning as the environment evolves. Consulting with policymakers to understand how emerging ethical principles will shape AI regulation in their sector helps ensure ongoing alignment.
Weak consumer trust risks slowing the adoption of transformational, even lifesaving, technologies. As the health crisis expands into a social and economic crisis, ethical AI is becoming a prerequisite to building a better working world.