
What does AI in Healthcare mean for clinical judgement and expertise?


As AI becomes embedded in care, medical expertise increasingly emerges from shared systems, influencing how clinicians judge, decide, and remain accountable for clinical care and healthcare outcomes.


In brief

  • Artificial intelligence is already shaping clinical practice and influencing how clinical decisions are made.
  • Clinicians will increasingly work within systems where knowledge is distributed, changing how clinical judgement is exercised.
  • Health systems – and those working within them – need clear training, oversight, and shared responsibility to support safe and trusted use of AI in care.

From individual expertise to collective intelligence: medicine in the age of AI

For centuries, medicine has been organised around the authority of the individual expert. The physician’s role has always been to interpret symptoms, collate evidence, and decide what should happen next. Even as medicine became more specialised and evidence-based, the structure of care remained largely unchanged: a patient, a clinician, and the clinician’s judgement.

Artificial intelligence is beginning to challenge that model, not simply by adding new tools, but by changing how clinical knowledge itself is produced.

AI systems have moved rapidly from experimentation into everyday clinical workflows. Algorithms are now being used to triage patients, summarise records, generate documentation, recommend diagnoses, and predict outcomes. A 2026 report from the American Medical Association found that 72% of clinicians are using AI in clinical practice, up from 38% in 2023. These tools are no longer peripheral decision aids. They are increasingly participating in the reasoning processes that shape clinical care.

While much of the discussion to date has focused on whether AI algorithms can match or exceed clinicians in specific tasks, the implications of this shift go far beyond performance. The deeper issue emerging in the literature is epistemological: clinicians are increasingly being asked to rely on systems whose reasoning they cannot fully explain.

For a profession grounded in a culture where decisions must be justified through physiology, evidence and experience, this creates an uncomfortable tension. If an algorithm outperforms clinicians, then ignoring it may harm patients. But relying on it without understanding how it reached its conclusion challenges the basis of clinical responsibility. This has been described as the ‘black box’ problem for AI in healthcare, and it raises questions about what it will mean to exercise clinical judgement in such situations.

The issue is not just that machines may know things clinicians do not; rather it is that clinical reasoning itself may increasingly emerge from systems that combine human and machine inputs in ways that no single participant can fully understand. This heralds a significant transformation in which we are gradually but progressively shifting from individual expertise to collective intelligence in healthcare. 

Medicine has, in reality, been moving in this direction for decades. Complex cases are increasingly discussed in multidisciplinary team meetings. Shared clinical pathways and guideline-driven care have already redistributed authority from individuals to groups and systems of shared knowledge. AI is now accelerating this by enabling new forms of distributed cognition – systems in which insight emerges from networks of clinicians, data, and increasingly sophisticated AI agents. Whereas traditionally the clinician was expected to know the answer, these emerging models may instead require the clinician to participate in orchestrating a system that produces answers collectively. 

Even apparently modest applications illustrate this dynamic. Ambient scribes – AI systems that listen to doctor-patient interactions and automatically create clinical notes in real time – are spreading rapidly because they reduce administrative burden. But tools that generate notes may gradually reshape how care is represented. A traditional clinical note reflects what the clinician chooses to document – selective and shaped by experience. An AI-generated note may be more complete and will typically be more standardised, but also more aligned with what can be coded, measured and reimbursed. The gain is efficiency and consistency; the risk is a subtle shift away from nuance, uncertainty and the relational aspects of care.

Clinicians will increasingly work within systems where knowledge is distributed. While this will not change the importance of clinical judgement, it will very likely change how judgement is exercised. The clinician’s role will increasingly be to interpret outputs, resolve conflicts between sources, and decide when to trust or overrule algorithmic recommendations. Judgement will become less about possessing information and more about integrating it responsibly and ensuring that decisions remain aligned with patient values and ethical commitments. 

At the same time, the integration of AI into healthcare introduces new forms of professional tension. AI systems can support decisions, but they can also monitor them. Tools that guide care may also record when clinicians deviate from recommendations. As others have noted, these developments could transform clinicians into ‘quantified workers’, operating within systems that continuously evaluate them – simultaneously supporting and constraining professional autonomy.

There is also a more subtle risk. Clinical intuition – built through repeated exposure, uncertainty and reflection – depends on active engagement in decision making. If clinicians increasingly defer to algorithmic recommendations, they will have fewer opportunities to exercise and refine these tacit skills, with consequences that may only become visible over time.

In the near term, adoption of AI will be fastest where it reduces workload – documentation, summarisation, administrative tasks. But once embedded, these systems will shape more than efficiency. They will influence how decisions are framed, how knowledge is represented and how clinical authority is exercised. While AI may not replace clinicians, it is likely to fundamentally reshape the way in which expertise in medicine is produced and used. With this in mind, there are several priorities to be considered:

  • Health systems must treat AI as being about much more than just technology. Deploying AI tools without redesigning workflows, governance structures and accountability frameworks risks creating systems that clinicians will neither fully trust nor understand. Health systems will need mechanisms for evaluating algorithm performance in real world settings, monitoring unintended consequences, and ensuring transparency about how AI tools influence clinical decisions.
  • Academic medicine must rethink how clinicians are trained. Traditional medical education has focused on acquisition of biomedical knowledge and clinical reasoning. In an AI-enabled environment, clinicians will need fluency in concepts like algorithmic bias, model validation, data provenance, and the limits of predictive systems. This is not about turning clinicians into data scientists, but it is about ensuring they can critically evaluate the tools they are asked to use.
  • Professional bodies and regulators must clarify the boundaries of responsibility in AI-mediated care. When decisions involve AI, questions are inevitably going to arise about accountability; who is responsible when a system fails and what level of understanding should clinicians be reasonably expected to have about the tools they use? Establishing clear standards for transparency, validation and clinical oversight will be essential to building and maintaining trust in AI.
  • Clinicians themselves must engage actively in the design and governance of AI systems. Too often, digital technologies in healthcare have been implemented without meaningful clinical input, leading to tools that disrupt rather than support clinical work. If AI is going to enhance rather than erode clinical practice, then clinicians must be involved in shaping how those systems are evaluated, deployed and monitored. 
  • Finally, medicine – and healthcare more broadly – must invest in new forms of multidisciplinary collaboration. The future of healthcare will involve teams that include clinicians, data scientists, engineers, ethicists and, increasingly, autonomous AI systems, working together. Building health systems that support this kind of collective intelligence will require new professional roles, new organisational structures, and new norms around shared expertise. 

Summary

AI is becoming embedded in clinical practice and influencing how clinical decisions are made. Expertise increasingly develops through shared systems of clinicians, data, guidelines, and AI tools. As this continues, clinicians play a key role in interpreting outputs, protecting patient values, and shaping how AI is governed, taught, and used in practice.
