Interview
Bring your own device: How patients’ own tech is being used in clinical trials
As consumer tech becomes better able to monitor basic elements of health, patients are being asked to bring their own devices to clinical trials. By Abigail Beaney.
Credit: Shutterstock/Andrey_Popov
Keeping track of your heart rate, pulse and oxygen saturation has never been easier, with smartwatches able to capture a wide range of health data. In addition, most people use a smartphone daily and can use it to count their steps and monitor their calorie intake.
Technology has advanced over the past decade, and patients can record, monitor and store a great deal of data about their health on their smart devices.
With concerns about the growing amount of e-waste, could utilising patients’ own devices alongside reusable medical-grade monitors be one answer to reducing waste in the remote monitoring of patients during clinical trials?
A report by GlobalData found that the remote patient monitoring (RPM) device market is expected to be worth $760m by 2030. GlobalData is the parent company of Clinical Trials Arena.
Clinical Trials Arena sat down with Vivalink vice president Sam Liu to discuss how technology has advanced so that patients are able to use their own devices for remote monitoring in clinical trials.
Abigail Beaney: What are the biggest advantages of patients being able to use their own devices in clinical trials?
Sam Liu: The main thing is familiarity, as patients are already using that device as part of their lifestyle. The number one challenge with remote patient monitoring in any setting is getting patients to use the device correctly. When it’s your own device, you’re very familiar with it and you already have a daily routine based on it, so that’s probably the main advantage. There is also the benefit of it being a reusable device, which reduces the need for extra or wasteful components.
I also think the mentality around using your own devices for healthcare has changed over time; whether it’s an app on a smartphone or a wearable, they are commonly used to track the user’s health. We are now seeing this in unwell patients, with a switch from conventional methods such as in-hospital monitoring before the pandemic to more telehealth-based care. That infrastructure depends on access to devices, and if consumers already have these, that’s even better.
Abigail Beaney: What are some of the disadvantages and challenges with patients using their own devices?
Sam Liu: There are several challenges with the bring-your-own-device model. The first is that there are so many variations in devices – even with Apple devices, there are different versions of operating systems and different versions of hardware. If you’re looking at Android, the variation goes well beyond that. This can become a problem with the technology because you need to ensure compatibility. Another challenge is that with patients using their own devices, there could be security issues if unknown parties are able to access that personal health information.
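One way a study app might handle that compatibility check is a simple eligibility gate at enrolment. The sketch below is illustrative only, assuming the app can report its platform, OS version and build number; the minimum versions and names are assumptions, not Vivalink’s actual requirements.

```python
# Illustrative only: a minimal compatibility gate for a BYOD study app.
# Minimum OS versions and build numbers here are assumed, not from the source.
from dataclasses import dataclass

# Hypothetical minimum requirements a sponsor might set per platform.
MIN_OS_VERSION = {"ios": (15, 0), "android": (10, 0)}

@dataclass
class DeviceInfo:
    platform: str          # "ios" or "android"
    os_version: tuple      # e.g. (16, 4)
    app_build: int         # installed study-app build number

def is_supported(device: DeviceInfo, min_app_build: int = 120) -> bool:
    """Return True if the patient's own device meets the study's minimum spec."""
    minimum = MIN_OS_VERSION.get(device.platform.lower())
    if minimum is None:
        return False       # unknown platform: exclude from the BYOD arm
    return device.os_version >= minimum and device.app_build >= min_app_build

# Example: an older Android handset fails the OS check.
print(is_supported(DeviceInfo("android", (9, 0), 130)))  # False
```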
Abigail Beaney: What sorts of devices are patients able to use?
Sam Liu: This depends on what the application is, because there’s a difference between actual medical wearables and a consumer device that may be used to supplement them. In some trials we are involved with, we provide medical wearables, such as an ECG. Patients can then use an app on a device like a smartphone that connects to the wearable and also collects simple patient survey data and patient-reported symptoms. In some circumstances, patients may be able to get enough data from their own wearables if they have them. It really depends on what the sponsor needs.
Abigail Beaney: How will this integrate with other systems that sites and sponsors have in place?
Sam Liu: That’s part of the implementation and planning by the company providing the technology. In our company, for example, we work with the sponsors and sites to plan out the data workflow: how the data should be collected, where it’s going, and what it’s being used for. That understanding of the data is important, and at that point the processing and security can be considered. In our system, we try to avoid identifying patients along the data chain as much as possible; each patient is represented by a unique identifier, a string of numbers and letters. When the data gets to the site or sponsor, they can work out which patient it relates to using that identifier.
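A minimal sketch of that pseudonymisation idea is shown below, assuming a keyed hash whose secret stays at the site: the data service only ever sees an opaque code, and the site re-derives it from its own records to re-identify the patient. The function names and record fields are illustrative, not Vivalink’s actual system.

```python
# Sketch: sensor data travels with an opaque identifier; only the site holds
# the secret needed to map it back to a patient. Assumed design, not an API.
import hmac, hashlib, secrets

def make_pseudonym(patient_id: str, site_secret: bytes) -> str:
    """Derive a stable, non-identifying code from the site's local patient ID."""
    digest = hmac.new(site_secret, patient_id.encode(), hashlib.sha256).hexdigest()
    return digest[:16].upper()

site_secret = secrets.token_bytes(32)   # kept at the site, never shipped with the data
record = {
    "subject": make_pseudonym("MRN-004521", site_secret),  # hypothetical local ID
    "signal": "ecg",
    "sample_rate_hz": 128,
}
# The data service only ever sees `record`; the site recomputes the pseudonym
# from its own records when results come back, to work out which patient it is.
print(record["subject"])
```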
Abigail Beaney: How have sponsors reacted to the potential use of patients’ own devices in trials?
Sam Liu: The value proposition is different depending on the study but overall, I think what sponsors like is the fact that they can get access to a richer set of data that will improve their R&D process, whether at the early or later phases. It is important for sponsors to be comfortable implementing something like this as part of their protocols and trials. We basically abstract the technology for the sponsor, so from their perspective they just get a data service from us while we worry about all the nuances and logistics of making it work correctly.
One concern from sponsors is what to do with all the data, because it is almost too much. The sensors generate data, but that doesn’t mean it all needs to be processed. In the workflow design, you decide what type of data you’re trying to capture. I also think AI can help here, with its ability to process larger amounts of data.
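As a hedged illustration of that workflow-design decision, the snippet below keeps only the sensor channels a protocol asks for rather than processing everything the device emits; the channel names are assumptions, not a real study configuration.

```python
# Illustrative workflow-design step: drop sensor streams the trial does not need
# before storage or analysis. Channel names below are assumed for the example.
REQUIRED_CHANNELS = {"ecg", "heart_rate", "spo2"}   # defined by the study protocol

def select_channels(packet: dict) -> dict:
    """Keep only the channels the protocol requires from an incoming data packet."""
    return {k: v for k, v in packet.items() if k in REQUIRED_CHANNELS}

raw_packet = {
    "ecg": [0.12, 0.10, 0.15],
    "heart_rate": 72,
    "spo2": 98,
    "step_count": 5321,   # collected by the device but not part of this protocol
}
print(select_channels(raw_packet))   # {'ecg': [...], 'heart_rate': 72, 'spo2': 98}
```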
Abigail Beaney: Do you think that patients using their own devices will become the norm in clinical trials?
Sam Liu: Patient-reported symptoms are growing in trials and healthcare, but they are sometimes not timely or not accurate. By adding in sensors, you have quantified data which can be correlated with patient-reported symptoms to support some of those reports. As a result, I think these devices will be used more and more over time.