
AI in healthcare: Should ‘medical machines’ be allowed to teach themselves?

The rise of AI in healthcare raises a whole new set of questions about the regulatory regime around medical devices.

Before a medical device can be sold, it has to clear a series of regulatory approvals. Does it resemble a pre-existing device? Is it the first of its kind? What risk does its misuse or misapplication pose to the patient?


All of these questions help decide which approval pathway the device will go through and, in turn, how long it will take to be approved and appear on the market. Artificial intelligence/machine learning (AI/ML) medical devices are no different.

In recent years, AI/ML has been used in several medical devices/procedures.


Recent developments include algorithms that can diagnose diabetic retinopathy, strokes, and colorectal cancer, approaching and in some cases surpassing the accuracy of practicing physicians.


These ‘medical machines’ promise to be faster, more accurate, cheaper, and able to handle a much larger workload than their human counterparts. These diagnostic powerhouses could free clinicians from much of their day-to-day diagnostic work to focus on their most important task: saving lives.

However, we’re used to thinking of medical devices as static things. When you buy a needle, you expect it to be like the last needle you bought, and the one before that. If you want to make a better needle, you have to design it first and get it approved.


AI is the first tool of its kind that can not only be improved, but can teach itself to be better. So what do you do when your stethoscope ‘evolves’? Is it still safe to use? Is it really more effective?


To date, medical AI devices have been sold in a ‘locked’ state. That is, once deployed they are incapable of learning from any further data and can only diagnose based on their original training set, so they behave in a uniform manner. The benefit of this is that once you buy a device, you can be assured of its quality and efficacy. If the device manufacturer wants to release an improved algorithm, it must first file a premarket submission and put the new algorithm through a battery of tests.
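
To make the distinction concrete, here is a minimal sketch in Python, using scikit-learn, of the difference between a locked model and one that keeps adapting after deployment. The data, model choice, and variable names are placeholders picked purely for illustration; no actual device is assumed to work this way.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))        # stand-in for training records
y_train = (X_train[:, 0] > 0).astype(int)  # stand-in for diagnoses (0/1)

# 'Locked' algorithm: trained once, then frozen before deployment.
locked = SGDClassifier(random_state=0)
locked.fit(X_train, y_train)

X_new = rng.normal(size=(10, 5))           # cases seen after deployment
y_new = (X_new[:, 0] > 0).astype(int)
print("locked predictions:  ", locked.predict(X_new))
# locked is never refitted; it behaves the same until a new,
# separately approved version replaces it.

# Adaptive algorithm: keeps learning from post-deployment data.
adaptive = SGDClassifier(random_state=0)
adaptive.partial_fit(X_train, y_train, classes=np.array([0, 1]))
adaptive.partial_fit(X_new, y_new)         # the model changes with each update
print("adaptive predictions:", adaptive.predict(X_new))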

This ignores one of AI’s largest benefits: its capacity for self-improvement.


If the AI must undergo a new months-long series of tests every time it updates itself, the version in clinical use will lag behind what the algorithm could already achieve. A patient who could have been diagnosed by the improved version might slip through the cracks. So should the regulations surrounding self-improving AI be relaxed?


This is one of the questions the FDA is currently wrestling with.

On the one hand, lives could be saved by constantly improving diagnostic tools. On the other hand, an AI allowed to evolve without proper oversight may begin to make errors, falsely diagnosing some patients while failing to diagnose others, and do more damage as a result.


However, the risks may outweigh the benefits in this case. Just because these devices will be able to evolve on their own does not mean that they should. A person or group of people needs to validate each gradual change and verify that every iteration really is an improvement over the previous one.

For more insight and data, visit the GlobalData Report Store.