10 minute read 1 Mar. 2024

What health care organizations do to seize the potential of generative AI could unleash possibilities that transform the sector.

[Image: Health advisor using AI]

Six ways to make more of AI in Canadian health care

Authors
Dai Mukherjee

EY Partner, Digital Health Consulting

Helping transform a dynamic health care system to meet the needs of all Canadians. Passionate about reading non-fiction, road trips with the family and cricket.

Safia Rahemtulla

Risk Consulting, Provincial Government & Public Sector Risk Leader

Trusted advisor in supporting governments and public entities in navigating risk, enabling innovation and large-scale transformation. Globetrotting adventurer with a passion for technology.

Zaki Hakim

EY Senior Manager, Digital Health Consulting

Passionate about transforming health care using digital, data and intelligence technologies. Relationship builder. Thought leader and university lecturer. Lego and world history nerd.


In brief 

  • It’s time to see how AI can be deployed by the sector to make an impact on health outcomes for years to come.
  • EY’s team recommends six key areas to consider when developing and deploying AI ethically.

Surging enthusiasm over the use of artificial intelligence (AI) — due largely to advancements and public access to prominent generative AI (gen AI) solutions — represents wide-ranging opportunity for Canadian health care.

We’ve seen excitement around the potential use of AI in health care in the past. But an evolving confluence of factors means this may be the moment that ultimately sparks increased, meaningful and sustainable adoption and integration of AI solutions across the sector.

What Canadian health care organizations do to seize the potential of this technology could unleash possibilities that define the sector and improve health outcomes for years to come.

Why is this AI’s big health care moment?

In Canada, the volume of available health data is exploding. Growing adoption of electronic health records and electronic medical records, along with the proliferation of smart devices, is generating the data that has become the lifeblood of AI solutions. The International Data Corporation (IDC) estimates that the compound annual growth rate for health care data will reach 36% by 2025.¹

This rapid and large increase in the availability of health care data is enriching our ability to train complex AI models and generate new insights and solutions.

This is compounded by the fact that data storage and integration technology has become increasingly sophisticated. Historically, the Canadian health care sector and AI developers have faced challenges in unifying and integrating data across disparate sources to drive machine learning (ML) solutions, including electronic health records, picture archiving and communication systems, lab solutions, medical devices and more.

Not anymore.

Data integration solutions now allow us to reduce data silos and stitch patient data together to develop novel AI solutions. For example, data fabric uses metadata — that is, data about the data — to unify, harmonize and govern structured and unstructured health care data into a common architecture.
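To make the idea concrete, here is a minimal illustrative sketch, not a reference to any particular data fabric product: field-level metadata describing two hypothetical source systems is used to map their records onto one common patient schema. All system names, field names and mappings are invented for illustration.

```python
# Illustrative sketch only: metadata-driven harmonization of two hypothetical
# source systems (an EHR extract and a lab feed) into a common patient schema.
# System names, field names and mappings are invented for this example.

from datetime import date

# Metadata layer: "data about the data" describing how each source's fields
# map onto the common schema.
FIELD_MAPPINGS = {
    "ehr_system": {"patient_id": "mrn", "date_of_birth": "dob", "diagnosis": "dx_code"},
    "lab_system": {"patient_id": "subject_ref", "result_value": "value", "result_unit": "unit"},
}

def harmonize(record: dict, source: str) -> dict:
    """Rename source-specific fields to the common schema using the metadata."""
    mapping = FIELD_MAPPINGS[source]
    return {common: record[src] for common, src in mapping.items() if src in record}

# Hypothetical records from two disparate systems.
ehr_record = {"mrn": "A-1001", "dob": date(1968, 4, 2), "dx_code": "I50.9"}
lab_record = {"subject_ref": "A-1001", "value": 1450, "unit": "pg/mL"}

unified = {
    **harmonize(ehr_record, "ehr_system"),
    **harmonize(lab_record, "lab_system"),
}
print(unified)
# {'patient_id': 'A-1001', 'date_of_birth': ..., 'diagnosis': 'I50.9',
#  'result_value': 1450, 'result_unit': 'pg/mL'}
```

In practice, a data fabric layers governance, lineage and access controls on top of this kind of mapping, but the core move is the same: the mapping lives in metadata rather than being hard-coded into each pipeline.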

Put those developments into broader context, and opportunities to extend AI’s influence across the sector only grow. A proliferation of accessible citizen AI solutions means low- and no-code tools are now available, bringing the power of AI within reach for users at all skill levels, both technical and non-technical. This shift is also enabling AI to permeate health care organizations, from researchers to executives, and is furthering the shift to preventative health.

In the post-COVID era, when Canadians are eager to adopt AI into society, all of this bodes well for leveraging AI in health care processes. Research shows 38% of Canadians say they’ve personally been in contact with or used an AI application. What’s more, some 36% of Canadians have used ChatGPT.² Meanwhile, researchers have already found that ChatGPT’s responses to patient questions posted to a social media forum were rated significantly higher in quality and empathy than those of real physicians.³ However, organizations should also consider the limitations and risks of such technology: for example, some GPT systems’ knowledge bases are at least a year old, so more recent information on treatment plans may not be available.

This increasing openness to AI is good news for advancing solutions in Canadian health care. Enhanced data storage and integration technology, coupled with advances in AI and Canadians’ willingness to adopt it, means the Canadian health care sector can deploy AI in a profound way to impact health outcomes, as well as the patients and practitioners who bring the system to life.

How can AI positively impact Canadian health care?

Across the sector, health care organizations are already integrating AI solutions into clinical processes.

Unity Health Toronto is running more than 30 AI models daily, including its highly successful CHARTWatch algorithm, which identifies patients at high risk of clinical deterioration — such as death or ICU admission — and flags them for medical team intervention. The tool’s final evaluation identified a 20% reduction in mortality on the medical ward where the model has been deployed.
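The article does not describe CHARTWatch’s internals, so the sketch below is only a generic illustration of the early-warning pattern it exemplifies: score each patient with a trained model and flag those above a risk threshold for clinical review. The model stand-in, features and threshold are all hypothetical.

```python
# Generic sketch of an early-warning workflow: score each patient with a
# trained model and flag those above a risk threshold for clinical review.
# This is NOT the CHARTWatch algorithm; the model stand-in, features and
# threshold are placeholders for illustration.

from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    features: list[float]  # e.g., vitals and lab values, already normalized

def predicted_risk(features: list[float]) -> float:
    """Stand-in for a trained model's predicted probability of deterioration."""
    # A real deployment would call something like model.predict_proba(features).
    return min(1.0, sum(features) / len(features))

RISK_THRESHOLD = 0.7  # in a real system, tuned against historical outcomes

def flag_for_review(patients: list[Patient]) -> list[str]:
    """Return IDs of patients whose predicted risk warrants escalation."""
    return [p.patient_id for p in patients if predicted_risk(p.features) >= RISK_THRESHOLD]

ward = [
    Patient("P-001", [0.9, 0.8, 0.7]),  # deteriorating vitals (normalized 0-1)
    Patient("P-002", [0.2, 0.3, 0.1]),
]
print(flag_for_review(ward))  # ['P-001']
```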

Also in Toronto, the University Health Network has launched an AI Hub. This collaborative centre was designed to augment human intelligence by continuously advancing AI technologies and accelerating implementation to deliver better patient outcomes and enhance clinical workflows. Since September 2023, the Hub has generated innovative solutions like Surgical Go-No-Go, which uses computer vision to provide surgeons with real-time guidance and navigation during operations. This solution helps avoid complications and optimize surgical outcomes.

In another example, the newly developed Medly, a digital therapeutic platform for heart failure management, is now reducing re-hospitalizations by 50%. The platform’s algorithm has been enhanced with AI to detect decompensation in patients at home more effectively, allowing the hospital to intervene sooner.

A partnership between the Vector Institute for Artificial Intelligence and Unity Health led to the development of GEMINI — a rich, centralized resource of anonymized patient data from over 30 Ontario hospitals that has been standardized and optimized for AI and ML discovery. Vector researchers are using this data to conduct innovative research studies, including the development of cutting-edge, privacy-enhancing technology. The Institute has also partnered with Kids Help Phone, using natural language processing (NLP) to adapt to the way young people speak, empowering frontline staff to offer more precise services and resources based on words, phrases and speech patterns.

Similarly exciting examples are emerging at Toronto’s Hospital for Sick Children (SickKids). The hospital has established the AI in Medicine for Kids (AIM) program to build targeted AI solutions that improve outcomes and delivery of care for children.

Collectively, these examples — and more like them taking shape nationwide — open the gateway to ever more possibilities. That said, health care providers, agencies and ministries looking to embark on the AI journey have a lot to consider.

Where should Canadian health care organizations focus to safely deploy AI solutions at scale?

Canada’s health care organizations should kickstart AI deployment by concentrating on six key areas. This can help establish a strong data foundation — a precursor to developing and deploying AI solutions — and then help create a broad AI strategy that connects use cases and development with an operating model to scale AI across the organization.

1.    Keep regulation and ethical AI development at the forefront

Health care organizations are no strangers to regulation of patient information and data protection, such as Ontario’s Personal Health Information Protection Act (PHIPA). To make the most of AI, organizations must also establish a proactive approach to generating trust in AI. While AI can enhance clinical efficiency and outcomes, it may also produce less accurate predictions for minority populations and enable unequal access to care.⁴

Organizations should proactively adopt and implement practices to engender trust in AI from providers and patients. Consult credible sources like the Vector Institute’s Health AI Implementation Toolkit or the Government of Canada’s guidelines to understand and prioritize ethical AI.

Staying compliant with applicable regulations remains important. But as the regulatory environment continues to evolve, Canadian health care organizations may also look to international standards to build their ethical AI development practices, such as the ISO/IEC 42001 AI management system standard or the NIST AI Risk Management Framework.

Organizations across the globe are developing their own responsible AI frameworks to guide the responsible development, deployment and use of AI. For example, EY has its own Responsible AI Framework, grounded in the key pillars of purposeful design, agile governance and vigilant supervision over the processes and risks throughout the lifecycle of AI development and use.

EY’s Responsible AI Framework is enabled through its nine principles:

  • Accountability
  • Explainability
  • Reliability
  • Fairness
  • Transparency
  • Security
  • Compliance
  • Sustainability
  • Data protection

Health care organizations should similarly identify the framework that works for them and the underpinning principles that enable the responsible development, deployment and use of AI in their organizations.

2.    Build and enhance trusted data foundations

Fostering trust in AI programs begins by generating trust in the way you govern data. That means prioritizing informed consent and community engagement. It is essential to engage with minority and Indigenous communities from the outset, involving these populations in decision-making processes regarding the use of their health data.

Community stewardship of data, individual data ownership and access to data are key principles across a group of frameworks that collectively inform health data justice norms.⁵

Consider the First Nations principles of ownership, control, access and possession (OCAP), the Engagement, Governance, Access and Protection (EGAP) framework, and the EY Responsible AI Framework to dig deeper into these priorities.

3.    Connect data sets and develop data platform architecture

Do not build AI solutions in isolation from the organization’s data infrastructure and data management context. To succeed, embark on strategies to connect supporting data integration and storage capabilities so you can effectively design, deploy and scale AI solutions. This can include the previously discussed data fabric technology to stitch together siloed clinical datasets, as well as tooling for data discovery, data lineage tracking, secure data sharing and authentication, and data anonymization.

By first focusing on proactive management of data and enabling technology to connect and use the data, you will lay the groundwork for supporting rapid scaling of AI models across departments.

Further, Canadian health care organizations should also focus on the provenance and ownership of the data, ideally carving out eligible datasets that can be used for developing, training and testing AI versus data that cannot be used (e.g., data where the data subject’s consent has not been obtained, or data with embedded historical biases).
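As a minimal, hypothetical sketch of that kind of carve-out (field names, consent flags and rules are invented, and real eligibility policies would be far richer), the filter below keeps only records whose subjects have consented to secondary use and whose source system is documented, then strips direct identifiers before the data reaches any training pipeline:

```python
# Minimal sketch: filter records for AI-training eligibility based on consent
# and documented provenance, then drop direct identifiers. Field names and
# rules are hypothetical; real policies would be far richer.

import hashlib

DIRECT_IDENTIFIERS = {"name", "health_card_number"}

def eligible(record: dict) -> bool:
    """A record is usable only with consent for secondary use and known provenance."""
    return record.get("consent_secondary_use") is True and bool(record.get("source_system"))

def pseudonymize(record: dict) -> dict:
    """Remove direct identifiers and replace the patient ID with a stable hash."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["patient_id"] = hashlib.sha256(record["patient_id"].encode()).hexdigest()[:12]
    return cleaned

records = [
    {"patient_id": "A-1001", "name": "Jane Doe", "health_card_number": "1234-567-890",
     "consent_secondary_use": True, "source_system": "ehr_system", "dx_code": "I50.9"},
    {"patient_id": "A-1002", "name": "John Roe", "health_card_number": "9876-543-210",
     "consent_secondary_use": False, "source_system": "ehr_system", "dx_code": "E11.9"},
]

training_set = [pseudonymize(r) for r in records if eligible(r)]
print(len(training_set))  # 1 -- only the consented record is carved in
```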

4.    Define your AI strategy clearly

Going forward, evolution in health care will involve using AI-driven insights to connect care, combining health services data with social determinants of health and patient-generated health data sets to support high-quality care.

As organizations grapple with the challenges associated with realizing this vision, it’s important to ask key questions. For example:

  • What level of investment will the organization make to deploy and test AI solutions, given that the pace of technological advancement may render sizable investments obsolete, especially in prototype development?
  • Is the solution scalable based on representative data that is generalizable across different disease types, populations or variable care settings?
  • How will the organization select from a veritable sea of ML and gen AI solutions across different modalities, with new ones released each day, and decide which technology providers to partner with?

Here, EY teams use the EY.ai Maturity Model to help organizations visualize target AI maturity across seven dimensions, evaluating the current level of AI adoption and equipping leaders with a clear understanding of their current capabilities versus their target future state. This fosters strategic planning to bridge AI gaps and development of an efficient AI roadmap.

[Graphic: The EY.ai Maturity Model]
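The model’s seven dimensions are EY’s own and are not enumerated here; purely as a hypothetical sketch of the current-versus-target gap analysis such a maturity model supports (dimension names and scores are invented), the logic looks something like this:

```python
# Hypothetical sketch of a current-vs-target maturity gap analysis.
# Dimension names and scores are invented; they are not the EY.ai Maturity
# Model's actual dimensions.

current = {"strategy": 2, "data": 3, "talent": 1, "governance": 2}
target  = {"strategy": 4, "data": 4, "talent": 3, "governance": 4}

# Rank the dimensions by gap size to prioritize the AI roadmap.
gaps = sorted(((dim, target[dim] - current[dim]) for dim in current),
              key=lambda item: item[1], reverse=True)

for dim, gap in gaps:
    print(f"{dim}: close a {gap}-level gap")
```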

The extent to which an organization plans to develop and use AI has significant implications on the rigor of its risk management and governance over the technology. In short, to have a scalable and sustainable AI program, an organization will need to ensure that sufficient resources are available to enable its responsible use.

A health care organization already facing severe resource constraints will require a responsible AI strategy that’s aligned with its broader digital and business strategies. Further, in the absence of a sufficiently mature responsible AI program, organizations run the risk of propagating historical biases and further exacerbating health inequities facing minority populations.

5.    Identify and select AI use cases

Embarking on a successful AI journey requires health care organizations to carefully choose which areas to focus on. These could be clinical or administrative areas, including back-office functions. Proactively prioritizing investments in this way enables a balanced portfolio of use cases that’s directly aligned with patient, provider, health administrator, researcher and overall public and health system needs.

What could that look like? Potential use cases might include: 

[Graphic: Potential AI use cases]

6.    Define your data culture and operating model

Transformation through use of AI requires a strong data culture and a robust organizational structure capable of responding to priorities. Making the right decisions on these fronts is critical to successful AI deployment — but these choices raise questions.

For example:

  • Should organizations centralize AI teams for consistent governance or create a hub-and-spoke model based on sharing responsibility across business units?
  • Is it better to create in-house training to build AI skillsets, explore managed service models for hard-to-find capabilities or work on a hybrid model?
  • How is accountability for AI decision-making distributed within the organization, and who takes responsibility for ethical considerations and potential biases in AI system outputs?
  • When working with third parties, is it appropriate to share sensitive data with the third party to enhance the efficiency and accuracy of the model?

The answers are deeply rooted in an organization’s AI strategy and values. Navigating these challenges will involve standing up a new way of operating that keeps staff data literacy and a culture of data-driven decision-making at the forefront of the organization.

References

[1] Coughlin, S., Roberts, D., O’Neill, K., & Brooks, P., “Looking to tomorrow’s healthcare today: a participatory health perspective,” Internal Medicine Journal, 48(1), 92-96, 2018.

[2] “Despite the prevalence of artificial intelligence (AI), most Canadians are largely unaware of its infiltration in their daily lives,” Narrative Research, May 15, 2023, https://narrativeresearch.ca/despite-the-prevalence-of-artificial-intelligence-ai-most-canadians-are-largely-unaware-of-its-infiltration-in-their-daily-lives/

[3] Ayers, J.W., Poliak, A., Dredze, M., Leas, E.C., Zhu, Z., Kelley, J.B., Smith, D.M., et al., “Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum,” JAMA Internal Medicine, 2023.

[4] Panch, T., Mattie, H., & Celi, L.A., “The ‘inconvenient truth’ about AI in healthcare,” NPJ Digital Medicine, 2(1), 77, August 16, 2019.

[5] Shaw, J., & Sekalala, S., “Health data justice: building new norms for health data governance,” NPJ Digital Medicine, 6(1), 30, February 28, 2023.

Summary

Make no mistake: AI can and will transform the health care sector. As we navigate the complex landscape of predicting disease and personalizing treatment, AI will become increasingly crucial. Advancements in ML, NLP, computer vision, gen AI and supporting data platforms are driving a paradigm shift in the delivery of patient care, health care resource allocation and clinical research. This brings both new opportunities and new challenges tied to data privacy, algorithmic bias and regulatory standards. Developing and deploying AI responsibly across the health care sector should begin by keeping the six key areas above in mind every step of the way.
