Regulators
Historically, regulation has lagged the introduction and adoption of transformational technologies, resulting in negative social, political and economic outcomes. The risks this next generation of technologies presents are even more significant than those of the past. Immediate threats such as data privacy and security are already apparent. The broader threat of substantial job loss, which reinforces the disadvantage gap, particularly among low-skilled workers, will have social and political ramifications. There are also significant political threats, as highlighted by recent US elections.
These threats have been loudly promoted by tech leaders, and jurisdictions are responding. The EU has passed an Artificial Intelligence Act, which includes approval processes for new technologies before they can be taken to market and limits the use of facial recognition technologies.
Managing the direct risks of generative technologies through layers of regulation may be the immediate focus, but just as important is considering how these technologies challenge broader regulatory frameworks. In the same way the adoption of electric vehicles is challenging fuel taxes and excise, generative technologies will over time challenge every aspect of our regulatory and legislative environment, from taxation to labour, communication to pension systems, roads to privacy, health to finances.
Organisations
Generative technologies present the opportunity for organisations to drive productivity and innovation, but like every other disruptive technology they also carry the risks of lost competitive advantage, change resistance, and workforce disruption. Translating a risk-based approach to the organisational level makes sense, but it must not create a level of conservatism that compromises innovation or delays transformation.
The starting point is to agree an ethical framework for the organisation that will act as a touchstone for how AI is adopted and deployed. EY’s responsible AI principles offer one such framework.
EY responsible AI principles:
- Accountability: Unambiguous ownership over AI systems, their impacts and resulting outputs across the AI lifecycle.
- Data Protection: Use of data in AI systems is consistent with permitted rights, maintains confidentiality of business and personal information and reflects ethical norms.
- Reliability: AI systems are aligned with stakeholder expectations and continually perform at a desired level of precision and consistency.
- Security: AI systems, their input, and output data are secured from unauthorized access, and resilient against corruption and adversarial attack.
- Transparency: Appropriate levels of disclosure regarding the purpose, design and impact of AI systems are provided so that stakeholders, including end users, can understand, evaluate and correctly employ AI systems and their outputs.
- Explainability: Appropriate levels of explanation are enabled so that the decision criteria and output of AI systems can be reasonably understood, challenged and validated by human operators.
- Fairness: The needs of all impacted stakeholders are assessed with respect to the design and use of AI systems and their outputs to promote a positive and inclusive societal impact.
- Compliance: The design, implementation and use of AI systems and their outputs comply with relevant laws, regulations and professional standards.
- Sustainability: Considerations of the impacts of technology are embedded throughout the AI lifecycle to promote physical, social, economic and planetary well-being.
AI generative technologies are moving quickly. To keep up, organisations must closely monitor their evolution and continually update strategy to capture emerging opportunities.
Priority should be given to:
- Engaging and educating leaders in the development and application of AI
- Establishing systems and processes to stay abreast of developments and to make decisions about their application
- Reviewing the operating model to ensure a fit to an AI future
- Educating and engaging people in the organisation’s visions for AI technologies
- Investing in new skills to accelerate productivity through an AI-people partnership
- Testing and learning, with the immediate opportunity being the managed use of language models and the evolving ecosystem of consumer facing AI chatbots and AI powered tools.
Over time it will be necessary for organisations to rethink the total enterprise to unlock the full potential of AI.
The implications for people and jobs
For now, you are not going to lose your job to AI, but it will definitely change your job and you might lose it to someone who can use AI better than you.
Comfort is taken in the narrative that machines will not replace the human ability to empathise, contextualise, innovate, apply moral reasoning, understand causality, observe and interpret, or to be creative, kind, or resourceful; nor the ability to negotiate, resolve conflict, and communicate through dialogue. These innate human characteristics are thought to protect us against total obsolescence.
Research also indicates a human bias against machines. Our preference is to interact with another person, which may preserve the role of people as the human interface. The question is whether this will be sustained as people become more comfortable interacting with machines, and machines become more efficient, effective, and trustworthy.
The overall impact on jobs cannot be predicted: forecasts range widely, from the dystopian view of total job destruction to a positive outlook in which technology drives unprecedented productivity and innovation, leading to the creation of new, more interesting, and diverse jobs.
The impact will inevitably come in waves. Automation and augmentation of tasks already touch all areas of work. Initially these have been more routine activities, but over time more sophisticated technology will shift up the skill ladder to higher-order tasks, with nine out of ten new jobs predicted to require post-school qualifications.
Job redundancy is a reality, and in the immediate future it is likely to be concentrated in low-skilled or routine jobs that are more easily automated. Autonomous trucks are here and could replace 4 million driving jobs in the US alone. At the same time, fully automated warehouses stand to replace 1.4 million warehousing jobs, and 3.6 million cashiers are at risk of job loss due to AI.
The World Economic Forum predicts structural growth of 69 million jobs and the decline of 83 million, resulting in a net loss of 14 million jobs over the next four years (WEF 2023). This churn highlights the challenge of preparing people for newly created jobs and managing the impact of job loss. There are already many examples of organisations reshaping their workforces in response to AI generative technologies.
IBM is reducing its recruitment intake by 7,800, mainly for back-office roles. British Telecom has announced it is cutting 30,000 jobs by 2030, 10,000 of which will be replaced by AI. And in Australia, a new digital radio start-up announced it has hired a veteran newsreader and an AI robot as its news team.
Some of the more immediate people challenges include:
- Promoting awareness of technology driven change and its impact on work
- Culturally aligning people to embrace change, collaborate, innovate, and effectively partner with technology
- Engaging people in skilling and reskilling in line with changing capability requirements
- Establishing systems to dynamically forecast and allocate resources against capability and capacity requirements
- Redefining the role and entry point for graduates as data analysis, document review, research and simpler writing tasks are automated