By KIM BELLARD
Let’s be honest: we are going to have artificial intelligence doctors.
Now, that prediction comes with a few caveats. It won’t happen this year, and maybe not even this decade. We might not call them “doctors,” but rather think of them as a whole new category. AI will almost certainly first follow its current path of becoming an assistive technology for human doctors and even patients. We will continue to struggle to fit them into existing regulatory boxes, such as clinical decision support software or medical devices, even as those boxes prove to be the wrong shape and size for how AI capabilities evolve.
But even with all that in mind, we’re going to end up with AI doctors. They will be able to listen to patients’ symptoms, evaluate patients’ histories and clinical findings, and determine likely diagnoses and recommended treatments. With the help of robots or other smart devices, they will even be able to perform many, if not most, of those treatments.
We will wonder how we ever lived without them.
Many claim we are not ready for it. The Pew Research Center recently found that 60% of Americans would feel uncomfortable if their doctor even relied on AI for their care, and that more Americans are concerned healthcare professionals will adopt AI technologies too quickly than too slowly.
However, two-thirds of respondents already say they would want AI used in their skin cancer screening, and you have to believe that as more people understand the things AI already helps with, and the even more things it will soon help with, the more open they will become.
People claim to value the doctor-patient relationship, but what we really want is to be healthy. AI will be able to help us with that.
For the sake of argument, let’s assume you buy my prediction and focus on the more difficult question of how we’re going to handle AI doctors. After all, they are already passing licensing exams. We’re not going to “send” them to medical school, are we? They probably won’t need post-med-school internships, residencies, or fellowships like human doctors do, either. And are we really going to require cloud-based, distributed AIs to be licensed in every state where they can “see” patients?
There are some things we would definitely want them to demonstrate, e.g.:
- Good knowledge of anatomy and physiology, diseases and injuries;
- Ability to relate symptoms to possible diagnoses;
- Broad knowledge of evidence-based treatments for specific diagnoses;
- Effective patient interaction skills.
We’ll also want to be sure we understand any built-in biases or limitations of the data the AI was trained on. For example, did it include patients of all ages, genders, racial and ethnic backgrounds, and socioeconomic statuses? Were its sources of information about conditions and treatments drawn from just a few medical institutions and journals, or from a wide range? How does it distinguish solid research studies from more questionable ones?
Many would also argue that we need to remove any “black boxes” so that AI can clearly explain how it goes from inputs to recommendations.
Once we get past all those hurdles and AI is actually treating patients, we’ll want ongoing oversight. Is it keeping up with the latest research? How many, and what kinds of, patients is it treating? Most importantly, how are its patients faring?
I’m probably missing a few criteria that others more knowledgeable about medical education, training, and licensure can add, but these seem like a fair start. I’d want my AI doctor to clear all of those hurdles.
I just wish I were sure my human doctors did too.
London taxi drivers have famously had to pass what has been called “the toughest test in the world” to get their licence, but it’s a test that anyone with GPS can now probably pass, and one that autonomous vehicles will soon render moot. We’re treating future doctors like future taxi drivers, and we shouldn’t be doing either.
According to the Association of American Medical Colleges (AAMC), the four-year medical school graduation rate is over 80%, and that attrition rate includes those who drop out for reasons other than poor grades (such as lifestyle or financial strain). So we have to assume that many medical students get through with Cs or even Ds in some of their coursework, something we probably wouldn’t tolerate from AI.
Likewise, the textbooks they use, the patients they see, and the training they receive are all quite limited. Training at Harvard Medical School is not the same as at, say, Johns Hopkins, let alone the University of Florida College of Medicine. An internship or residency at Cook County Hospital will not see the same conditions or patients as one at Penn Medicine Princeton Medical Center. There are built-in limitations and biases in existing medical training that we, again, would not want in our AI training.
As for evidence-based recommendations, it is estimated that only 10% of medical treatments are currently based on high-quality evidence, and that it can take up to 17 years for new clinical research to actually reach clinical practice. Neither would be considered acceptable for AI. Nor do we typically ask human doctors to explain their own “black box” reasoning.
What the discussion about AI becoming a doctor reveals is not how hard it will be for AI to qualify, but rather how bad a job we’ve done qualifying humans.
Human doctors face ongoing oversight, in theory. Yes, there are medical licensing boards in every state, and yes, there are continuing education requirements, but it takes a lot for the former to actually discipline bad doctors, and the requirements for the latter are far lower than what doctors would need to truly stay current. Additionally, there are few reporting requirements on how many or what types of patients individual physicians see, much less on their outcomes. It’s hard to imagine that we would expect so little from AI doctors.
As I have argued before, for many decades people wouldn’t get into an elevator without a human “expert” operating it on their behalf, until technology made operation as easy as pushing a button. We needed doctors as our elevator operators in a byzantine healthcare system, but we should be looking to use AI to simplify healthcare for us.
For all intents and purposes, the medical profession is essentially a guild; according to a fellow panelist on a recent podcast, medical societies are more concerned with keeping nurses (or physician assistants, or pharmacists) from encroaching on their turf than with preparing for AI doctors.
It’s time to open up that guild.
Kim is a former emarketing exec at a major Blues plan, editor of the late and lamented Tincture.io, and now a regular contributor to THCB.