
CAN/DGSI 138, Patient-Visible and Accountable AI in Clinical Decision Support

Technical Committee Review

The document specifies requirements for the governance, transparency, interoperability, and lifecycle management of human‑in‑the‑loop AI Clinical Decision Support Systems (AI‑CDSS).

This document applies to AI‑CDSS that are:
Human‑in‑the‑Loop: Systems that provide recommendations, risk scores, or summaries to a qualified clinician who retains final decision‑making authority. This design choice is in direct alignment with the TBS ADM Directive’s requirements for high‑impact decisions (classified as Levels III and IV), which mandate that the final decision must be made by a human.

Influencing Documented Care: The AI’s output influences a decision that is recorded in the patient’s official health record. This scope criterion ensures the standard focuses on AI that has a material impact on the formal record of care.

Patient‑Data‑Driven: The AI’s logic operates on the personal health information (PHI) of the specific patient for whom the decision is being made.

The document does not apply to:
Fully autonomous systems that make and execute clinical decisions without human oversight.
AI systems used exclusively for non‑clinical administrative tasks (e.g., back‑office billing optimization) or for research purposes not involving direct patient care decisions.

DATE POSTED: April 08, 2026

DEADLINE FOR COMMENTS: May 08, 2026



Comments



in reply to Debra Turnbull's comment
referenced from 5.2.1
Technical
Canada has citizens whose primary language is neither English nor French. Is there a way of linking in other-language modules? Indigenous-language translation?
SNOMED CT is its own language; is there a way of accumulating/pooling the terminologies in, say, Spanish, Mandarin, or German?
Technical
What about Indigenous languages?
Technical
What does the "publishing" mechanism look like? Is the Assessment meant to be public-facing, specifically patient-facing, or reserved only for physician-facing use?
Editorial
Agreed, this standard can/should benefit clinicians, HCP organization leaders/administrators, industry, patients and the general public.
Editorial
strong advantage
Editorial
Should we add a clarification here that while the standard cannot itself create legal safe harbours, it is our intent to design it in such a way that adherence to it is evidence of due diligence recognizable by courts, Colleges, and insurers? Do we plan to engage medico-legal stakeholders (CMPA, provincial Colleges, insurers) to co-develop this standard before public consultation?
Editorial
Many tools blur the line between clinical and administrative functions (e.g., triage chatbots, bed-management tools influencing clinical decisions, or note-generation tools that shape diagnostic reasoning). Consider: adding criteria or examples to distinguish excluded "administrative-only AI" from tools that indirectly influence care decisions (e.g., prioritization algorithms, risk scores for resource allocation); explicitly including tools that "substantially influence clinical prioritization, resource allocation, or patient routing," even if not labelled as "clinical"; and perhaps defining what counts as "influencing documented care decisions."

Does this include AI‑generated suggestions that are visible in the EHR but not explicitly referenced in the note? Consider requiring that any AI artifact visible to a clinician in the context of clinical decision‑making is within scope, to avoid grey areas.
Editorial
Agreed that the emphasis on clear roles and responsibilities in human-AI teaming is aligned with thought leadership from WHO, OECD, and major academic bodies. Additionally agreed that the idea that AI-CDSS governance should be an organizational responsibility (not just a vendor duty, which, as above, is critical in HC, FDA, etc. licensing requirements) mirrors FDA, EMA, and EU AI Act thinking on post-market surveillance and quality management systems.
Editorial
I would offer that while this document's focus on keeping the "patient" in the loop on the role of AI in their care is widely agreed upon, it is only a portion of the larger picture. What strikes me as missing is the appropriate concern HCPs hold around the role of AI in summarizing volumes of material from an EHR and diagnostic tests, assessing results, and providing working interpretations, diagnoses, treatment plans, and the like; the reliability of AI in so doing; medico-legal/liability risks; the threat of harm to patients; the lack of transparency in how the AI came to its conclusions; and the lack of an audit trail showing what information it gathered, used, and interpreted, from where/what source, and what guideline it used to offer an opinion and/or treatment strategy. I don't see this piece of the puzzle addressed thus far in my review. There are emerging standards and regulations on how these issues are to be addressed, falling largely on the vendors of such technologies as noted above, along with guidance to HCPs on what to check and how to ensure vendors are compliant with those standards, without which compliance the technology would not be licensed for sale.
Editorial
Yes, and as noted previously, there are a host of guidance and regulatory standards around how these documents are generated, many of them falling on the vendor to comply with before the product that generates these summaries is licensed for commercial sale.
Editorial
I am confused; Shadow AI is not used by HCPs or HCP organizations. There is a plethora of guidance and regulatory documents focused on AI-generated CDSS and CDS outputs; the CPSO in Ontario has weighed in on how physicians are to use these tools, and HC has weighed in as well. Commercially available products such as "DAX" and "Nuance" generate summaries that are compliant with the Canadian context, as does larger HCP-organization, enterprise-level software (EPIC, Meditech) and the like.
Editorial
HCPs are indeed hesitant to bring online a variety of AI tools, which has driven the EU, HC, and FDA to develop device regulations, particularly for those that rely on AI technology to interpret findings/results, diagnose, and/or propose treatments (which are less common and mostly left in the hands of humans).
Editorial
True, but how does this relate to HIL in AI-CDSS? Whether or not there is a HIL in AI-generated consult notes, discharge summaries, etc., patients may or may not continue to use unregulated tools (Google, ChatGPT); again, I am not sure about the scope of this standard.
Editorial
AI-generated CDS is one of many applications of AI by HCPs and HC organizations; it functions as a summarization tool and data aggregator whose output highlights key information and proposes recommendations to HCPs, rather than just presenting raw data.

My understanding is that in 2026, AI CDS outputs/tools are regulated through a combination of medical device regulations (depending on whether the software guides diagnosis or treatment). The trend is shifting toward lifecycle oversight, meaning regulators now monitor these systems continuously rather than just at the time of initial approval. The FDA has updated its CDS software guidance, which includes a "time critical" exemption and provisions to limit "black box" scenarios, and, like the EU, increasingly combines and overlaps medical device regulations with AI regulations to drive transparency and human-in-the-loop (HIL) requirements depending on the device and its intended purpose.
Editorial
Do we wish to address the distinction between Health Canada's role (device licensing) and this standard's role (sociotechnical implementation and use), and ensure the distinction is clearly articulated and aligns with international thinking (e.g., "regulate the product" vs. "govern the use")?
Editorial
If I understand correctly, this proposed standard is designed to address the gap between high-level AI governance principles (e.g., TBS ADM Directive, Health Canada MLMD guidance) and concrete clinical implementation requirements. The proposed standard is intended to focus on human-in-the-loop AI-CDSS, which aligns with current regulatory trajectories in Canada, the US, and the EU (where "human oversight" is a central risk-mitigation concept), and it intentionally builds on pan-Canadian digital health infrastructure (FHIR, CACDI, SNOMED CT), which is essential for real-world adoption.
Editorial
Yes, but I am not sure how we got to ChatGPT below. I think we need to be clear about our problem statement. If I understand correctly, this proposed standard is meant to help patients of HCPs understand the role AI played in their diagnosis, treatment plan, and the like, and additionally to provide guidance to HCPs and HCP organizations on operationalizing AI in CDSS systems. My understanding is that the proposed standard is designed to focus on human-in-the-loop AI-CDSS, which aligns with current regulatory trajectories in Canada, the US, and the EU (where "human oversight" is a central risk-mitigation concept), and intentionally builds on pan-Canadian digital health infrastructure (FHIR, CACDI, SNOMED CT), which is essential for real-world adoption.
Editorial
Before AI, the population turned to "Google" to look up their symptoms, so it is unclear what the point is here. Yes, true, but is this proposed standard intended to dissuade the population from using "Google" or ChatGPT to look up symptoms and the meaning of the "results" available to them? I am getting a tad confused about what we mean by AI-CDSS.
Editorial
Agreed, and this is a problem, but it is unclear that this standard is intended to address the concerns spelled out here. Do we need to focus the background and specific concerns on AI-CDSS, and avoid confusing that background and list of concerns with LLMs used by the public for healthcare (a legitimate problem, but a different one)?
Editorial
Confused: why are we equating ChatGPT with an AI-CDSS? My understanding is that chatbots are generally designed for patient engagement, navigation, 24/7 symptom assessment, triaging, and connecting users to information. AI-CDSS refers to more specialized tools integrated into clinical workflows (such as EHR systems) designed to directly support HCPs in making diagnoses, creating treatment plans, or providing evidence-based care. Do we need to tease out what actual AI-CDSS systems are, versus consumer-grade AI systems often used by the public for information purposes, which are not typically AI-CDSS?
Editorial
Are we equating AI chatbots with AI-CDSS systems? They are related but, in my experience, not quite the same.
Editorial
The idea that AI-CDSS governance should be an organizational responsibility (not just a vendor duty) mirrors, as closely as I am aware, FDA, EMA, and EU AI Act thinking on post-market surveillance and quality management systems.
Bilingual means only French and/or English. Here is the problem: what if the patient has a poor grasp of one or the other? An immigrant who has neither as their maternal tongue will continue to turn to ChatGPT and its hallucinations. Why? It's easier to understand.

I was in a virtual presentation with many other patient advisors. The conversation turned to ChatGPT. Many attested that they would never use it, and others would but would question the output. One woman announced that she "loved her ChatGPT" and defended its use. She spoke with an accent that was not of one of Canada's official languages. She was obviously dependent on the translation module (in her own language) that could provide her with the information in the form that she could understand.

Though language translation is not a consideration for this standard, perhaps consideration of the potential for translation into other languages needs to be present. This is an equity piece that could be built into the AI Decision Summary and made accessible to the patient. Perhaps a decision_thread output could be produced with SNOMED CT or ICD codings and made visible to the patient.
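To make the equity suggestion above concrete, here is a minimal sketch of what one coded entry of such a decision_thread might look like. This is purely illustrative: the field names, the make_decision_entry helper, and the dict-based structure are assumptions, not anything specified in the draft standard or in FHIR. The point is that a SNOMED CT concept code is language-neutral, so the same coded decision step could be rendered with a human-readable label in whatever language translation of the terminology is available to the patient.

```python
# Hypothetical sketch of a patient-facing decision_thread entry.
# Field names and the helper below are illustrative assumptions only.

SNOMED_CT = "http://snomed.info/sct"  # canonical URI for the SNOMED CT code system

def make_decision_entry(concept_code, display, language="en"):
    """Build one coded step of an AI Decision Summary.

    The SNOMED CT code is language-neutral; only the human-readable
    'display' string changes between language renderings.
    """
    return {
        "system": SNOMED_CT,
        "code": concept_code,
        "display": display,
        "language": language,
    }

# The same concept rendered for two patient languages
# (38341003 is "Hypertensive disorder" in the SNOMED CT international release).
entry_en = make_decision_entry("38341003", "Hypertensive disorder", "en")
entry_es = make_decision_entry("38341003", "Trastorno hipertensivo", "es")

# The underlying concept is identical regardless of display language.
assert entry_en["code"] == entry_es["code"]
```

A structure like this would let the translation layer be swapped without touching the audit trail, since the codes, not the display strings, would be the record of what the AI asserted.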