The Day the Algorithm Picked Up a Stethoscope
Why Canada Must Decide — Right Now — Whether Medicine Belongs to Humans or Machines
Canada has quietly crossed a line that would have been unthinkable a decade ago.
In this country, it is illegal to practise medicine without a licence. Under Ontario's Regulated Health Professions Act (every other province has similar legislation), only trained, licensed professionals are allowed to perform "controlled acts": diagnosing illnesses, prescribing treatments, and delivering medical interventions that could harm a patient if done incorrectly.
The logic is simple. Medicine is dangerous in the wrong hands.
And yet today, millions of Canadians are asking a machine to do exactly that.
Every day, people turn to AI systems such as ChatGPT and Llama 3 for answers to questions that used to belong inside a clinic:
Why does my chest hurt?
Is this rash cancer?
Should I take this medication?
Let’s stop pretending these systems are just “providing information.”
They are diagnosing.
Canada’s Health System Is Driving People Into the Arms of Algorithms
Before anyone starts blaming the public for trusting AI, let’s talk about the elephant in the waiting room.
Canada’s health-care system is buckling.
Patients wait weeks for appointments. Months for specialists. Emergency rooms overflow. Family doctors retire faster than they are replaced. Millions of Canadians don’t even have a primary care physician.
When someone is sick at 2 a.m. and the system tells them to wait six weeks, they don’t wait.
They ask the internet.
And today the internet answers back with the calm voice of a simulated doctor.
That voice sounds authoritative. Empathetic. Confident.
Which is precisely the problem.
The Illusion of Intelligence
Large language models do not understand medicine.
They do not understand biology.
They do not understand physiology.
They do not understand consequences.
They predict text.
Statistically.
They assemble sentences based on patterns in training data. Sometimes those sentences are correct. Sometimes they are dangerously wrong.
And unlike a human doctor, the machine does not know the difference.
This isn’t speculation. It’s already happened.
In a widely reported medical case described in the Annals of Internal Medicine, a 60-year-old man asked an AI chatbot how to reduce sodium in his diet. The model suggested replacing table salt with sodium bromide.
That advice poisoned him.
The man spent three weeks in hospital with bromide toxicity — a condition so rare today that most physicians only read about it in textbooks.
The AI delivered the suggestion with total confidence.
Because confidence is what these systems are designed to produce.
The Disclaimers Are a Joke
Tech companies hide behind legal disclaimers.
“This system does not provide medical advice.”
“This tool is not intended for diagnosis.”
But Canadian law does not care about a disclaimer buried in fine print.
Under the Regulated Health Professions Act, communicating a diagnosis is a controlled act whenever it is reasonably foreseeable that the person will rely on it.
And guess what?
People rely on it.
According to the Canadian Medical Association, one in three Canadians has followed online health advice instead of professional advice.
Nearly one quarter report negative consequences.
The tech industry’s argument essentially boils down to this:
“We’re not responsible if people trust us.”
That might work in Silicon Valley.
It shouldn’t work in medicine.
The Persuasion Machine
The real danger isn’t that AI makes mistakes.
Humans make mistakes too.
The danger is persuasion.
AI is engineered to sound calm, caring, and certain. It mirrors the tone of a compassionate physician. It personalizes answers. It reassures frightened users.
In other words, it mimics the bedside manner of a doctor — without any of the accountability.
Research published in Nature found that AI systems downplayed the severity of medical emergencies in 52 percent of cases.
Imagine that happening in an emergency department.
Imagine a physician telling half their patients with urgent symptoms that everything is probably fine.
That physician would lose their licence.
The algorithm loses nothing.
Silicon Valley Wants the Authority of Doctors Without the Responsibility
AI companies insist they are not practising medicine.
But their products behave like medical tools.
They answer health questions.
They suggest treatments.
They provide symptom analysis.
Some chatbots even advertise themselves as “diagnosis assistants.”
Meanwhile the companies behind them — including OpenAI — openly boast that hundreds of millions of users seek health advice from their systems every week.
That is not an experiment.
That is mass medical practice without regulation.
If a human did this without a licence in Ontario, the consequences could include:
- fines of up to $50,000
- jail time
- criminal charges
But when an algorithm does it, regulators look the other way.
Why?
Because governments are terrified of slowing the AI investment boom.
The Legal Reckoning Is Coming
Courts are starting to catch up.
In 2024, British Columbia's Civil Resolution Tribunal found Air Canada liable for misinformation delivered by its website chatbot.
The ruling was simple and devastating:
A company cannot avoid responsibility for what its AI says.
That precedent could eventually apply to medical advice as well.
When that happens, the legal floodgates will open.
A Hard Truth Nobody Wants to Say
AI can be useful in medicine.
But it cannot replace human judgment.
Medicine is not just data.
It is context, uncertainty, intuition, ethics, and responsibility.
It is a profession built on trust earned through years of training and oversight.
An algorithm has none of those things.
It has patterns.
And patterns are not the same as understanding.
The Analog Solution
Here is the controversial part.
When your health is on the line, you should not trust a system built from ones and zeros.
You should trust people.
People with experience.
People with training.
People who can be held accountable if they get it wrong.
Talk to a nurse.
Talk to a pharmacist.
Talk to a doctor.
Talk to a paramedic.
Talk to a human being.
Because if something goes wrong with an algorithm, you cannot sue a probability distribution.
Canada Has a Choice
The country can continue pretending AI health chatbots are harmless tools.
Or it can recognize the obvious truth: they are already practising medicine.
If that is the case, they should be regulated like any other medical practitioner.
Licensing.
Auditing.
Mandatory harm reporting.
Clear liability.
The same rules humans follow.
No exceptions for software.
The Bottom Line
AI might help transform medicine someday.
But right now, the hype has outrun reality.
Machines that guess words should not be diagnosing disease.
And a society that replaces doctors with algorithms is not modern.
It is reckless.
So until accountability exists, here is the simplest medical advice anyone can give:
Put the chatbot down.
Pick up the phone.
And talk to someone who actually knows what a pulse feels like.
Yours truly,
Adaptation-Guide