Transforming careers: the shift from specialization to adaptability in AI
Career advice has long discouraged being a “jack of all trades, master of none.” In the context of AI, however, specializing in a single task that AI can easily replicate can be detrimental. As we move into the future of work, demand for lifelong learning and upskilling will increase significantly as AI reshapes job requirements.
Traditional learning and development models must evolve to incorporate AI-driven, adaptive learning systems. According to the World Economic Forum, essential future skills include technological literacy, creative and analytical thinking, resilience, curiosity and leadership. While AI continues to advance, it still struggles with judgment and ethics, lacking critical human abilities such as moral reasoning, intuition and deep contextual understanding — qualities vital for high-stakes decision-making in areas such as legal rulings and ethical medical judgments.
Today’s workers must shift their mindset from relying on familiar business practices to embracing uncertainty. The most valuable employees will be those who:
- Recognize AI’s strengths and limitations, understanding that it offers new perspectives but should not be followed blindly.
- Enhance emotional and ethical intelligence, including creativity, empathy and systems thinking, while cultivating the ability to envision possibilities and ask better questions.
- Leverage AI to create new opportunities rather than merely focusing on efficiency gains.
For example, at an AI summit in Paris in February 2025, European leaders emphasized the need for the European Union to prioritize innovation while ensuring that regulatory red tape does not hinder progress. This shift places the onus on businesses to define their own AI values and governance. We are witnessing a pro-business approach to AI safety, with governments lowering guardrails and reducing or eliminating existing regulations as companies race to capture first-mover advantages. Consequently, companies now bear the responsibility for risk management, requiring them to take greater ownership of responsible AI practices.
In this evolving landscape, cyber risk managers with more than a foundational understanding of AI will become increasingly vital. They will need to understand organizational functions, adopt a broad view of controls, and navigate the interplay of risk and internal politics that shapes how potential dangers are mitigated while innovation is fostered. We can envision the role of chief risk officer evolving into that of chief risk, trust and ethics officer, responsible for:
- Determining that AI systems are fair, unbiased and aligned with ethical and regulatory standards
- Developing frameworks to mitigate risks, such as algorithmic bias, misinformation and privacy violations
- Identifying and addressing biases in AI decision-making, particularly in hiring, while relying on third-party audits and applying red-team standards to test agentic systems
- Engaging with regulators and policymakers to verify responsible AI governance