The issue is not just that machines may know things clinicians do not; rather, it is that clinical reasoning itself may increasingly emerge from systems that combine human and machine inputs in ways that no single participant can fully understand. This heralds a significant transformation: a gradual but steady shift from individual expertise to collective intelligence in healthcare.
Medicine has, in fact, been moving in this direction for decades. Complex cases are increasingly discussed in multidisciplinary team meetings. Shared clinical pathways and guideline-driven care have already redistributed authority from individuals to groups and systems of shared knowledge. AI is now accelerating this trend by enabling new forms of distributed cognition – systems in which insight emerges from networks of clinicians, data, and increasingly sophisticated AI agents. Whereas traditionally the clinician was expected to know the answer, these emerging models may instead require the clinician to help orchestrate a system that produces answers collectively.
Even apparently modest applications illustrate this dynamic. Ambient scribes – AI systems that listen to doctor–patient interactions and automatically create clinical notes in real time – are spreading rapidly because they reduce administrative burden. But tools that generate notes may gradually reshape how care is represented. A traditional clinical note reflects what the clinician chooses to document – selective and shaped by experience. An AI-generated note may be more complete and will typically be more standardised, but also more aligned with what can be coded, measured and reimbursed. The gain is efficiency and consistency; the risk is a subtle shift away from nuance, uncertainty and the relational aspects of care.
Clinicians will increasingly work within systems where knowledge is distributed. While this will not change the importance of clinical judgement, it will very likely change how judgement is exercised. The clinician’s role will increasingly be to interpret outputs, resolve conflicts between sources, and decide when to trust or overrule algorithmic recommendations. Judgement will become less about possessing information and more about integrating it responsibly and ensuring that decisions remain aligned with patient values and ethical commitments.
At the same time, the integration of AI into healthcare introduces new forms of professional tension. AI systems can support decisions, but they can also monitor them: tools that guide care may also record when clinicians deviate from recommendations. As others have noted, these developments could transform clinicians into ‘quantified workers’, operating within systems that continuously evaluate them – simultaneously supporting and constraining professional autonomy.
There is also a more subtle risk. Clinical intuition – built through repeated exposure, uncertainty and reflection – depends on active engagement in decision making. If clinicians increasingly defer to algorithmic recommendations, they will have fewer opportunities to exercise and refine these tacit skills, with consequences that may only become visible over time.
In the near term, adoption of AI will be fastest where it reduces workload – documentation, summarisation, administrative tasks. But once embedded, these systems will shape more than efficiency. They will influence how decisions are framed, how knowledge is represented and how clinical authority is exercised. While AI may not replace clinicians, it is likely to fundamentally reshape the way in which expertise in medicine is produced and used. With this in mind, there are several priorities to be considered:
- Health systems must treat AI as much more than a technology rollout. Deploying AI tools without redesigning workflows, governance structures and accountability frameworks risks creating systems that clinicians will neither fully trust nor understand. Health systems will need mechanisms for evaluating algorithm performance in real-world settings, monitoring unintended consequences, and ensuring transparency about how AI tools influence clinical decisions.
- Academic medicine must rethink how clinicians are trained. Traditional medical education has focused on the acquisition of biomedical knowledge and clinical reasoning. In an AI-enabled environment, clinicians will need fluency in concepts such as algorithmic bias, model validation, data provenance, and the limits of predictive systems. This is not about turning clinicians into data scientists, but about ensuring they can critically evaluate the tools they are asked to use.
- Professional bodies and regulators must clarify the boundaries of responsibility in AI-mediated care. When decisions involve AI, questions about accountability will inevitably arise: who is responsible when a system fails, and what level of understanding can clinicians reasonably be expected to have of the tools they use? Establishing clear standards for transparency, validation and clinical oversight will be essential to building and maintaining trust in AI.
- Clinicians themselves must engage actively in the design and governance of AI systems. Too often, digital technologies in healthcare have been implemented without meaningful clinical input, leading to tools that disrupt rather than support clinical work. If AI is going to enhance rather than erode clinical practice, then clinicians must be involved in shaping how those systems are evaluated, deployed and monitored.
- Finally, medicine – and healthcare more broadly – must invest in new forms of multidisciplinary collaboration. The future of healthcare will involve teams that include clinicians, data scientists, engineers, ethicists and, increasingly, autonomous AI systems, working together. Building health systems that support this kind of collective intelligence will require new professional roles, new organisational structures, and new norms around shared expertise.