AI

Can regulators keep up with AI in healthcare?

As more and more AI and data-driven healthcare products are brought to market, how can regulators keep pace with this rapidly evolving technology and ensure it always benefits patients? Abi Millar finds out.

Artificial intelligence (AI) is becoming a force to be reckoned with in healthcare. Over the last decade or so, AI-based healthcare products have moved out of the proof-of-concept stage and have begun to rewrite our understanding of what might be possible.


To cite just a few examples: deep learning techniques have been used in dermatology to diagnose skin cancer, and in radiology to make better sense of CT scans. Surgeons are using robots integrated with AI, while pharma companies are using convolutional neural networks to identify promising drug candidates.


AI-based wearable devices are routinely used to monitor patients, flagging up any changes to their vital signs. There are even AI-based triage tools for Covid-19, which can determine who needs a PCR test.


For the foreseeable future at least, the idea of a robot doctor seems far-fetched. But it’s clear that these emerging digital technologies will soon be important tools in a physician’s armoury.

Challenges with regulating these technologies

What is less clear is how these new technologies might be used ethically and responsibly. A recent report from the World Health Organization (WHO) warned that AI technologies come with risks attached, not least biases encoded in algorithms; unethical data collection practices; and risks to patient safety and cybersecurity.


“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology, it can also be misused and cause harm,” said Dr Tedros Adhanom Ghebreyesus, WHO director-general.


The report advised that human autonomy should be protected, by keeping people in control of medical decision-making and ensuring that AI-based devices are used only under certain conditions.


It also made the case that machine-learning systems should be trained on data from a diverse population pool, reflecting the breadth of settings in which the device might be used.


Regulation is another point of contention. As of December 2020, 130 medical AI devices had been approved by the US FDA, according to a review in Nature. However, almost all these devices (126) were evaluated only retrospectively – and none of the 54 high-risk devices had undergone prospective studies.


The authors argued that more prospective studies were needed, in order to better capture true clinical outcomes. They also made the case for better post-market surveillance.


With ever more devices reaching the point of clearance, it will be incumbent on regulators to iron out how these devices are tested and approved. Many questions currently remain unresolved, not least how to regulate a machine-learning algorithm that is designed to change over time in response to new inputs.


“The traditional paradigm of medical device regulation was not designed for adaptive AI/ML technologies, which have the potential to adapt and optimise device performance in real-time to continuously improve healthcare for patients,” noted a 2019 FDA discussion paper.

The FDA’s new approach

In January 2021, the FDA attempted to provide some clarity with the introduction of its first Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. Looking across the entire lifecycle of a device, this plan promotes transparency, real-world performance monitoring, and methodologies to assess algorithmic bias.


“Because of the rapid pace of innovation in the AI/ML medical device space, and the dramatic increase in AI/ML-related submissions to the agency, we have been working to develop a regulatory framework tailored to these technologies, which would provide a more scalable approach to their oversight,” says Bakul Patel, director of the FDA’s new Digital Health Center of Excellence.


At present, manufacturers monitor their device effectiveness through quality management systems (including aspects like complaint handling, customer feedback and management review). Every time there is a significant change to the device, they are required to gain additional clearance.


The FDA is looking into new approaches for AI-based devices, which take into account their iterative, autonomous nature. These include a ‘predetermined change control plan’, in which manufacturers are asked to specify how the algorithm is likely to adapt itself over time.


As well as removing the need for endless regulatory submissions, this approach has the potential to be safer. As part of their premarket submission process, manufacturers need to describe how they will control the expected modifications in a way that lowers the risk to patients.

Eliminating bias

On top of that, the FDA is trying to develop strategies to weed out algorithmic bias. Racial, ethnic and gender bias is a well-documented problem when it comes to the functioning of medical devices. Just think of pulse oximeters that don’t work so well in darker-skinned populations, or hip implants designed without considering female skeletal anatomy.


When these kinds of biases are baked into an AI system, its efficacy in real-world settings is limited, as is the extent to which the algorithm can learn and improve.


Between 2014 and 2017, the agency issued a number of guidance documents encouraging the collection and evaluation of data from diverse patient populations. These will prove particularly relevant to manufacturers working on AI devices.


“These documents provide recommendations to improve the quality, consistency, and transparency of data regarding the performance of medical devices within specific sex, age, racial, and ethnic groups,” says Patel.


“Clinical trial sponsors should develop a strategy to enrol diverse populations including representative proportions of relevant age, racial and ethnic subgroups, which are consistent with the intended use population of the device.”

Uncharted territory

AI in healthcare has scope to become a vast field, and expectations across many modalities are sky-high. That being the case, it is reassuring to note that AI ethics is also a burgeoning area of interest.


The WHO notes that around 100 proposals for AI principles have been published in the last decade. It adds that while ‘no specific principles for use of AI for health have yet been proposed for adoption worldwide,’ many regulatory authorities are preparing their own frameworks.


The FDA, for one, is boosting its capabilities in this area. “We recognise the importance of continuing to develop the capability of our workforce in the area of AI/ML and other emerging technologies, and we are continuing to hire and retain world-class talent in these areas,” says Patel.


Although AI is still, to some extent, uncharted territory, with its pitfalls and limitations yet to become fully apparent, regulators are eyeing the road ahead with cautious optimism. As the thinking goes: put the right controls in place, and patients will reap the benefits without being exposed to unnecessary risks.


“Our vision is that with appropriately tailored regulatory oversight, AI/ML-based SaMD will deliver safe and effective software functionality that improves the quality of care that patients receive,” says Patel.